Updates from: 02/24/2022 02:11:51
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Customize Ui With Html https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/customize-ui-with-html.md
Previously updated : 10/14/2021 Last updated : 02/23/2022
Instead of creating your custom page content from scratch, you can customize Azu
The following table lists the default page content provided by Azure AD B2C. Download the files and use them as a starting point for creating your own custom pages.
-| Default page | Description | Content definition ID<br/>(custom policy only) |
+| Page | Description | Templates |
|:--|:--|-|
-| [exception.html](https://login.microsoftonline.com/static/tenant/default/exception.cshtml) | **Error page**. This page is displayed when an exception or an error is encountered. | *api.error* |
-| [selfasserted.html](https://login.microsoftonline.com/static/tenant/default/selfAsserted.cshtml) | **Self-Asserted page**. Use this file as a custom page content for a social account sign-up page, a local account sign-up page, a local account sign-in page, password reset, and more. The form can contain various input controls, such as: a text input box, a password entry box, a radio button, single-select drop-down boxes, and multi-select check boxes. | *api.localaccountsignin*, *api.localaccountsignup*, *api.localaccountpasswordreset*, *api.selfasserted* |
-| [multifactor-1.0.0.html](https://login.microsoftonline.com/static/tenant/default/multifactor-1.0.0.cshtml) | **Multi-factor authentication page**. On this page, users can verify their phone numbers (by using text or voice) during sign-up or sign-in. | *api.phonefactor* |
-| [updateprofile.html](https://login.microsoftonline.com/static/tenant/default/updateProfile.cshtml) | **Profile update page**. This page contains a form that users can access to update their profile. This page is similar to the social account sign-up page, except for the password entry fields. | *api.selfasserted.profileupdate* |
-| [unified.html](https://login.microsoftonline.com/static/tenant/default/unified.cshtml) | **Unified sign-up or sign-in page**. This page handles the user sign-up and sign-in process. Users can use enterprise identity providers, social identity providers such as Facebook or Google+, or local accounts. | *api.signuporsignin* |
+| Unified sign-up or sign-in | This page handles the user sign-up and sign-in process. Users can use enterprise identity providers, social identity providers such as Facebook or a Microsoft account, or local accounts. | [Classic](https://login.microsoftonline.com/static/tenant/default/unified.cshtml), [Ocean Blue](https://login.microsoftonline.com/static/tenant/templates/AzureBlue/unified.cshtml), and [Slate Gray](https://login.microsoftonline.com/static/tenant/templates/MSA/unified.cshtml). |
+| Sign-in (only) | The sign-in page, also known as the *Identity provider selection* page. It handles user sign-in with a local account or federated identity providers. Use this page to allow sign-in without the ability to sign up, for example, before a user can edit their profile. | [Classic](https://login.microsoftonline.com/static/tenant/default/idpSelector.cshtml), [Ocean Blue](https://login.microsoftonline.com/static/tenant/templates/AzureBlue/idpSelector.cshtml), and [Slate Gray](https://login.microsoftonline.com/static/tenant/templates/MSA/idpSelector.cshtml). |
+| Self-Asserted | Most interactions in Azure AD B2C where the user is expected to provide input are self-asserted, for example, a sign-up page, sign-in page, or password reset page. Use this template as custom page content for a social account sign-up page, a local account sign-up page, a local account sign-in page, password reset, profile editing, a block page, and more. The self-asserted page can contain various input controls, such as a text input box, a password entry box, a radio button, single-select drop-down boxes, and multi-select check boxes. | [Classic](https://login.microsoftonline.com/static/tenant/default/selfAsserted.cshtml), [Ocean Blue](https://login.microsoftonline.com/static/tenant/templates/AzureBlue/selfAsserted.cshtml), and [Slate Gray](https://login.microsoftonline.com/static/tenant/templates/MSA/selfAsserted.cshtml). |
+| Multi-factor authentication | On this page, users can verify their phone numbers (by using text or voice) during sign-up or sign-in. | [Classic](https://login.microsoftonline.com/static/tenant/default/multifactor-1.0.0.cshtml), [Ocean Blue](https://login.microsoftonline.com/static/tenant/templates/AzureBlue/multifactor-1.0.0.cshtml), and [Slate Gray](https://login.microsoftonline.com/static/tenant/templates/MSA/multifactor-1.0.0.cshtml). |
+| Error | This page is displayed when an exception or an error is encountered. | [Classic](https://login.microsoftonline.com/static/tenant/default/exception.cshtml), [Ocean Blue](https://login.microsoftonline.com/static/tenant/templates/AzureBlue/exception.cshtml), and [Slate Gray](https://login.microsoftonline.com/static/tenant/templates/MSA/exception.cshtml). |
++ ## Hosting the page content
When using your own HTML and CSS files to customize the UI, host your UI content
You localize your HTML content by enabling [language customization](language-customization.md) in your Azure AD B2C tenant. Enabling this feature allows Azure AD B2C to set the HTML page language attribute and pass the OpenID Connect parameter `ui_locales` to your endpoint.
-#### Single-template approach
+### Single-template approach
During page load, Azure AD B2C sets the HTML page language attribute with the current language. For example, `<html lang="en">`. To render different styles per the current language, use the CSS `:lang` selector along with your CSS definition.
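As an illustration of this behavior, the following minimal sketch shows a custom page fragment that switches fonts based on the language Azure AD B2C sets on the `<html>` element. The `.intro` class, the fonts, and the heading text are illustrative placeholders; the `<div id="api"></div>` element is where Azure AD B2C injects its own UI controls.

```html
<!DOCTYPE html>
<!-- Minimal sketch: .intro, the fonts, and the heading text are placeholders. -->
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Contoso sign-in</title>
  <style>
    /* Default styling when no more specific :lang rule matches */
    .intro h2 { font-family: Arial, sans-serif; }
    /* Applied when Azure AD B2C sets lang="es" on the html element */
    .intro h2:lang(es) { font-family: Verdana, sans-serif; }
    /* Applied when Azure AD B2C sets lang="de" on the html element */
    .intro h2:lang(de) { font-family: Georgia, serif; }
  </style>
</head>
<body>
  <div class="intro">
    <h2>Sign in to Contoso</h2>
  </div>
  <!-- Azure AD B2C injects its UI controls into the element with id="api" -->
  <div id="api"></div>
</body>
</html>
```

When language customization is enabled, the `ui_locales` parameter mentioned earlier is also sent to the endpoint that serves this content, so a server-rendered page could localize the heading text itself.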
To host your HTML content in Blob storage, perform the following steps:
1. **Redundancy** can remain **Geo-redundant storage (GRS)**.
1. Select **Review + create** and wait a few seconds for Azure AD to run a validation.
1. Select **Create** to create the storage account. After the deployment is completed, the storage account page opens automatically or select **Go to resource**.

#### 2.1 Create a container

To create a public container in Blob storage, perform the following steps:
To use [company branding](customize-ui.md#configure-company-branding) assets in
## Next steps
-Learn how to enable [client-side JavaScript code](javascript-and-page-layout.md).
+Learn how to enable [client-side JavaScript code](javascript-and-page-layout.md).
active-directory-b2c Partner Dynamics 365 Fraud Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-dynamics-365-fraud-protection.md
In the provided [custom policies](https://github.com/azure-ad-b2c/partner-integr
For additional information, review the following articles:

-- [Microsoft DFP samples](https://github.com/Microsoft/Dynamics-365-Fraud-Protection-Samples)
+- [Microsoft DFP samples](https://github.com/azure-ad-b2c/partner-integrations/tree/master/samples/Dynamics-Fraud-Protection)
- [Custom policies in Azure AD B2C](./custom-policy-overview.md)
active-directory How To Authentication Find Coverage Gaps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-authentication-find-coverage-gaps.md
Previously updated : 11/03/2021 Last updated : 02/22/2022
# Find and address gaps in strong authentication coverage for your administrators
-Requiring multi-factor authentication (MFA) for the administrators in your tenant is one of the first steps you can take to increase the security of your tenant. In this article, we'll cover how to make sure all of your administrators are covered by multi-factor authentication.
+Requiring multifactor authentication (MFA) for the administrators in your tenant is one of the first steps you can take to increase the security of your tenant. In this article, we'll cover how to make sure all of your administrators are covered by multifactor authentication.
## Detect current usage for Azure AD Built-in administrator roles
The [Azure AD Secure Score](../fundamentals/identity-secure-score.md) provides a
There are different ways to check if your admins are covered by an MFA policy.

-- To troubleshoot sign-in for a specific administrator, you can use the sign-in logs. The sign-in logs let you filter **Authentication requirement** for specific users. Any sign-in where **Authentication requirement** is **Single-factor authentication** means there was no multi-factor authentication policy that was required for the sign-in.
+- To troubleshoot sign-in for a specific administrator, you can use the sign-in logs. The sign-in logs let you filter **Authentication requirement** for specific users. Any sign-in where **Authentication requirement** is **Single-factor authentication** means there was no multifactor authentication policy that was required for the sign-in.
![Screenshot of the sign-in log.](./media/how-to-authentication-find-coverage-gaps/auth-requirement.png)
There are different ways to check if your admins are covered by an MFA policy.
- To choose which policy to enable based on your user licenses, we have a new MFA enablement wizard to help you [compare MFA policies](concept-mfa-licensing.md#compare-multi-factor-authentication-policies) and see which steps are right for your organization. The wizard shows administrators who were protected by MFA in the last 30 days.
- ![Screenshot of the Multi-factor authentication enablement wizard.](./media/how-to-authentication-find-coverage-gaps/wizard.png)
+ ![Screenshot of the multifactor authentication enablement wizard.](./media/how-to-authentication-find-coverage-gaps/wizard.png)
-- To programmatically create a report listing all users with Admins roles in your tenant and their strong authentication status, you can run a [PowerShell script](https://github.com/microsoft/AzureADToolkit/blob/main/src/Find-AADToolkitUnprotectedUsersWithAdminRoles.ps1). This script enumerates all permanent and eligible built-in and custom role assignments as well as groups with roles assigned, and finds users that are either not registered for MFA or not signing in with MFA by evaluating their authentication methods and their sign-in activity.
+- You can run [this script](https://github.com/microsoft/AzureADToolkit/blob/main/src/Find-AADToolkitUnprotectedUsersWithAdminRoles.ps1) to programmatically generate a report of all users with directory role assignments who have signed in with or without MFA in the last 30 days. This script will enumerate all active built-in and custom role assignments, all eligible built-in and custom role assignments, and groups with roles assigned.
-## Enforce multi-factor authentication on your administrators
+## Enforce multifactor authentication on your administrators
-Based on gaps you found, require administrators to use multi-factor authentication in one of the following ways:
+If you find administrators who aren't protected by multifactor authentication, you can protect them in one of the following ways:
- If your administrators are licensed for Azure AD Premium, you can [create a Conditional Access policy](tutorial-enable-azure-mfa.md) to enforce MFA for administrators. You can also update this policy to require MFA from users who are in custom roles.
- Run the [MFA enablement wizard](https://aka.ms/MFASetupGuide) to choose your MFA policy.
-- If you assign custom or built-in admin roles in [Privileged Identity Management](../privileged-identity-management/pim-configure.md), require multi-factor authentication upon role activation.
+- If you assign custom or built-in admin roles in [Privileged Identity Management](../privileged-identity-management/pim-configure.md), require multifactor authentication upon role activation.
## Use passwordless and phishing-resistant authentication methods for your administrators
-After your admins are enforced for multi-factor authentication and have been using it for a while, it is time to raise the bar on strong authentication and use Passwordless and phishing resistant authentication method:
+After your admins are required to use multifactor authentication and have been using it for a while, it's time to raise the bar on strong authentication and use passwordless and phishing-resistant authentication methods:
- [Phone Sign-in (with Microsoft Authenticator)](concept-authentication-authenticator-app.md)
- [FIDO2](concept-authentication-passwordless.md#fido2-security-keys)
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
description: Learn how to use number matching in MFA notifications
Previously updated : 11/17/2021 Last updated : 02/23/2022
Number matching is available for the following scenarios. When enabled, all scen
- [AD FS adapter](howto-mfaserver-adfs-windows-server.md)
- [NPS extension](howto-mfa-nps-extension.md)
+>[!NOTE]
+>For passwordless users, enabling number matching has no impact because it's already part of the passwordless experience.
+ ### Multifactor authentication

When a user responds to an MFA push notification using Microsoft Authenticator, they will be presented with a number. They need to type that number into the app to complete the approval.
To enable number matching in the Azure AD portal, complete the following steps:
![Screenshot of enabling number match.](media/howto-authentication-passwordless-phone/enable-number-matching.png)
-## Known issues
-- Number matching for admin roles during SSPR is pending and unavailable for a couple days.

## Next steps

[Authentication methods in Azure Active Directory - Microsoft Authenticator app](concept-authentication-authenticator-app.md)
active-directory Cloudknox All Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-all-reports.md
+
+ Title: View a list and description of all system reports available in CloudKnox Permissions Management
+description: View a list and description of all system reports available in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View a list and description of system reports
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+CloudKnox Permissions Management (CloudKnox) has various types of system reports that capture specific sets of data. These reports allow management, auditors, and administrators to:
+
+- Make timely decisions.
+- Analyze trends and system/user performance.
+- Identify trends in data and high-risk areas so that management can address issues more quickly and improve their efficiency.
+
+This article provides you with a list and description of the system reports available in CloudKnox. Depending on the report, you can download it in comma-separated values (**CSV**) format, portable document format (**PDF**), or Microsoft Excel Open XML Spreadsheet (**XLSX**) format.
+
+## Download a system report
+
+1. In the CloudKnox home page, select the **Reports** tab, and then select the **Systems reports** subtab.
+1. In the **Report Name** column, find the report you want, and then select the down arrow to the right of the report name to download the report.
+
+ Or, from the ellipses **(...)** menu, select **Download**.
+
+ The following message displays: **Successfully started to generate on demand report.**
++
+## Summary of available system reports
+
+| Report name | Type of the report | File format | Description | Availability | Collated report? |
+|--|--|--|--|--|--|
+| Access Key Entitlements and Usage Report | Summary </p>Detailed | CSV | This report displays: </p> - Access key age, last rotation date, and last usage date availability in the summary report. Use this report to decide when to rotate access keys. </p> - Granted task and Permissions creep index (PCI) score. This report provides supporting information when you want to take the action on the keys. | AWS</p>Azure</p>GCP | Yes |
+| All Permissions for Identity | Detailed | CSV | This report lists all the assigned permissions for the selected identities. | Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) | N/A |
+| Group Entitlements and Usage | Summary | CSV | This report tracks all group level entitlements and the permission assignment, PCI. The number of members is also listed as part of this report. | AWS, Azure, or GCP | Yes |
+| Identity Permissions | Summary | CSV | This report tracks any, or specific, task usage per **User**, **Group**, **Role**, or **App**. | AWS, Azure, or GCP | No |
+| NIST 800-53 | Detailed </p>Summary </p>Dashboard | CSV </p>PDF | **Dashboard**: This report helps track the overall progress of the NIST 800-53 benchmark. It lists the percentage passing, overall pass or fail of test control along with the breakup of L1/L2 per Auth system. </p>**Summary**: For each authorized system, this report lists the test control pass or fail per authorized system and the number of resources evaluated for each test control. </p>**Detailed**: This report helps auditors and administrators to track the resource level pass or fail per test control. | AWS, Azure, or GCP | Yes |
+| PCI DSS | Detailed </p>Summary </p>Dashboard | CSV | **Dashboard**: This report helps track the overall progress of the PCI-DSS benchmark. It lists the percentage passing, overall pass or fail of test control along with the breakup of L1/L2 per Auth system. </p>**Summary**: For each authorized system, this report lists the test control pass or fail per authorized system and the number of resources evaluated for each test control. </p>**Detailed**: This report helps auditors and administrators to track the resource level pass or fail per test control. | AWS, Azure, or GCP | Yes |
+| PCI History | Summary | CSV | This report helps track **Monthly PCI History** for each authorized system. It can be used to plot the trend of the PCI. | AWS, Azure, or GCP | Yes |
+| Permissions Analytics Report (PAR) | Summary | PDF | This report helps monitor the **Identity Privilege** related activity across the authorized systems. It captures any Identity permission change. </p>This report has the following main sections: **User Summary**, **Group Summary**, **Role Summary & Delete Task Summary**. </p>The **User Summary** lists the current granted permissions along with high-risk permissions and resources accessed in 1-day, 7-day, or 30-days durations. There are subsections for newly added or deleted users, users with PCI change, high-risk active/inactive users. </p>The **Group Summary** lists the administrator level groups with the current granted permissions along with high-risk permissions and resources accessed in 1-day, 7-day, or 30-day durations. There are subsections for newly added or deleted groups, groups with PCI change, High-risk active/inactive groups. </p>The **Role Summary** and the **Group Summary** list similar details. </p>The **Delete Task** summary section lists the number of times the **Delete Task** has been executed in the given period. | AWS, Azure, or GCP | No |
+| Permissions Analytics Report (PAR) | Detailed | CSV | This report lists the different key findings in the selected authorized systems. The key findings include **Super identities**, **Inactive identities**, **Over-provisioned active identities**, **Storage bucket hygiene**, **Access key age (AWS)**, and so on. </p>This report helps administrators to visualize the findings across the organization and make decisions. | AWS, Azure, or GCP | Yes |
+| Role/Policy Details | Summary | CSV | This report captures **Assigned/Unassigned** and **Custom/system policy with used/unused condition** for specific or all AWS accounts. </p>Similar data can be captured for Azure and GCP for assigned and unassigned roles. | AWS, Azure, or GCP | No |
+| User Entitlements and Usage | Detailed <p>Summary | CSV | This report provides a summary and details of **User entitlements and usage**. </p>**Data displayed on Usage Analytics** screen is downloaded as part of the **Summary** report. </p>**Detailed permissions usage per User** is listed in the Detailed report. | AWS, Azure, or GCP | Yes |
++
+## Next steps
+
+- For information on how to view system reports in the **Reports** dashboard, see [View system reports in the Reports dashboard](cloudknox-product-reports.md).
+- For information about how to create, view, and share a system report, see [Create, view, and share a system report](cloudknox-report-view-system-report.md).
+- For information about how to create and view a custom report, see [Generate and view a custom report](cloudknox-report-create-custom-report.md).
+- For information about how to create and view the Permissions analytics report, see [Generate and download the Permissions analytics report](cloudknox-product-permissions-analytics-reports.md).
active-directory Cloudknox Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-faqs.md
+
+ Title: Frequently asked questions (FAQs) about CloudKnox Permissions Management
+description: Frequently asked questions (FAQs) about CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Frequently asked questions (FAQs)
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
++
+This article answers frequently asked questions (FAQs) about CloudKnox Permissions Management (CloudKnox).
+
+## What's CloudKnox Permissions Management?
+
+CloudKnox is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities (for example, over-privileged workload and user identities), actions, and resources across multi-cloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). CloudKnox detects, automatically right-sizes, and continuously monitors unused and excessive permissions. It deepens the Zero Trust security strategy by augmenting the least privilege access principle.
++
+## What are the prerequisites to use CloudKnox?
+
+CloudKnox supports data collection from AWS, GCP, and/or Microsoft Azure. For data collection and analysis, customers are required to have an Azure Active Directory (Azure AD) account to use CloudKnox. However, an Azure subscription or Azure AD P1 or P2 license isn't required to use CloudKnox for AWS or GCP.
+
+## Can a customer use CloudKnox if they have other identities with access to their IaaS platform that aren't yet in Azure AD (for example, if part of their business has Okta or AWS Identity & Access Management (IAM))?
+
+Yes, a customer can detect, mitigate, and monitor the risk of 'backdoor' accounts that are local to AWS IAM, GCP, or from other identity providers such as Okta or AWS IAM.
+
+## Where can customers access CloudKnox?
+
+Customers can access the CloudKnox interface with a link from the Azure AD extension in the Azure portal.
+
+## Can non-cloud customers use CloudKnox on-premises?
+
+No, CloudKnox is a hosted cloud offering.
+
+## Can non-Azure customers use CloudKnox?
+
+Yes, non-Azure customers can use our solution. CloudKnox is a multi-cloud solution so even customers who have no subscription to Azure can benefit from it.
+
+## If I'm already using Azure AD Privileged Identity Management (PIM) for Azure, what value does CloudKnox provide?
+
+CloudKnox complements Azure AD PIM. Azure AD PIM provides just-in-time access for admin roles in Azure (as well as Microsoft Online Services and apps that use groups), while CloudKnox allows multi-cloud discovery, remediation, and monitoring of privileged access across Azure, AWS, and GCP.
+
+## What languages does CloudKnox support?
+
+CloudKnox currently supports English.
+
+## What public cloud infrastructures are supported by CloudKnox?
+
+CloudKnox currently supports the three major public clouds: Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.
+
+## Does CloudKnox support hybrid environments?
+
+CloudKnox currently doesn't support hybrid environments.
+
+## What types of identities are supported by CloudKnox?
+
+CloudKnox supports user identities (for example, employees, customers, external partners) and workload identities (for example, virtual machines, containers, web apps, serverless functions).
+
+<!-- ## Is CloudKnox General Data Protection Regulation (GDPR) compliant?
+
+CloudKnox is currently not GDPR compliant. -->
+
+## Is CloudKnox available in Government Cloud?
+
+No, CloudKnox is currently not available in Government clouds.
+
+## Is CloudKnox available for sovereign clouds?
+
+No, CloudKnox is currently not available in sovereign clouds.
+
+## How does CloudKnox collect insights about permissions usage?
+
+CloudKnox has a data collector that collects access permissions assigned to various identities, activity logs, and resource metadata. This provides full visibility into the permissions granted to all identities to access resources, along with details on how those granted permissions are used.
+
+## How does CloudKnox evaluate cloud permissions risk?
+
+CloudKnox offers granular visibility into all identities and their permissions granted versus used, across cloud infrastructures, to uncover any action performed by any identity on any resource. This isn't limited to user identities; it also includes workload identities such as virtual machines, access keys, containers, and scripts. The dashboard gives an overview of the permission profile to locate the riskiest identities and resources.
+
+## What is the Permissions Creep Index?
+
+The Permissions Creep Index (PCI) is a quantitative measure of risk associated with an identity or role determined by comparing permissions granted versus permissions exercised. It allows users to instantly evaluate the level of risk associated with the number of unused or over-provisioned permissions across identities and resources. It measures how much damage identities can cause based on the permissions they have.
+
+## How can customers use CloudKnox to delete unused or excessive permissions?
+
+CloudKnox allows users to right-size excessive permissions and automate least privilege policy enforcement with just a few clicks. The solution continuously analyzes historical permission usage data for each identity and gives customers the ability to right-size permissions of that identity to only the permissions that are being used for day-to-day operations. All unused and other risky permissions can be automatically removed.
+
+## How can customers grant permissions on-demand with CloudKnox?
+
+For any break-glass or one-off scenarios where an identity needs to perform a specific set of actions on a set of specific resources, the identity can request those permissions on-demand for a limited period with a self-service workflow. Customers can either use the built-in workflow engine or their IT service management (ITSM) tool. The user experience is the same for any identity type, identity source (local, enterprise directory, or federated) and cloud.
+
+## What is the difference between permissions on-demand and just-in-time access?
+
+Just-in-time (JIT) access is a method used to enforce the principle of least privilege to ensure identities are given the minimum level of permissions to perform the task at hand. Permissions on-demand are a type of JIT access that allows the temporary elevation of permissions, enabling identities to access resources on a by-request, timed basis.
+
+## How can customers monitor permissions usage with CloudKnox?
+
+Customers only need to track the evolution of their Permissions Creep Index to monitor permissions usage. They can do this in the **Analytics** tab in their CloudKnox dashboard, where they can see how the PCI of each identity or resource is evolving over time.
+
+## Can customers generate permissions usage reports?
+
+Yes, CloudKnox has various types of system reports available that capture specific data sets. These reports allow customers to:
+- Make timely decisions
+- Analyze usage trends and system/user performance
+- Identify high-risk areas
+
+For information about permissions usage reports, see [Generate and download the Permissions analytics report](cloudknox-product-permissions-analytics-reports.md).
+
+## Does CloudKnox integrate with third-party ITSM (IT Service Management) tools?
+
+CloudKnox integrates with ServiceNow.
++
+## How is CloudKnox being deployed?
+
+Customers with the Global Admin role must first onboard CloudKnox on their Azure AD tenant, and then onboard their AWS accounts, GCP projects, and Azure subscriptions. More details about onboarding can be found in our product documentation.
+
+## How long does it take to deploy CloudKnox?
+
+It depends on each customer and how many AWS accounts, GCP projects, and Azure subscriptions they have.
+
+## Once CloudKnox is deployed, how fast can I get permissions insights?
+
+Once fully onboarded with data collection set up, customers can access permissions usage insights within hours. Our machine-learning engine refreshes the Permission Creep Index every hour so that customers can start their risk assessment right away.
+
+## Is CloudKnox collecting and storing sensitive personal data?
+
+No, CloudKnox doesn't have access to sensitive personal data.
+
+## Where can I find more information about CloudKnox?
+
+You can read our blog and visit our web page. You can also get in touch with your Microsoft point of contact to schedule a demo.
+
+## Resources
+
+- [Public Preview announcement blog](https://www.aka.ms/CloudKnox-Public-Preview-Blog)
+- [CloudKnox Permissions Management web page](https://microsoft.com/security/business/identity-access-management/permissions-management)
+++
+## Next steps
+
+- For an overview of CloudKnox, see [What's CloudKnox Permissions Management?](cloudknox-overview.md).
+- For information on how to onboard CloudKnox in your organization, see [Enable CloudKnox in your organization](cloudknox-onboard-enable-tenant.md).
active-directory Cloudknox Howto Add Remove Role Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-add-remove-role-task.md
+
+ Title: Add and remove roles and tasks for groups, users, and service accounts for Microsoft Azure and Google Cloud Platform (GCP) identities in the Remediation dashboard in CloudKnox Permissions Management
+description: How to attach and detach permissions for groups, users, and service accounts for Microsoft Azure and Google Cloud Platform (GCP) identities in the Remediation dashboard in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Add and remove roles and tasks for Microsoft Azure and Google Cloud Platform (GCP) identities
++
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how you can add and remove roles and tasks for Microsoft Azure and Google Cloud Platform (GCP) identities using the **Remediation** dashboard.
+
+> [!NOTE]
+> To view the **Remediation** tab, you must have **Viewer**, **Controller**, or **Administrator** permissions. To make changes on this tab, you must have **Controller** or **Administrator** permissions. If you don't have these permissions, contact your system administrator.
+
+## View permissions
+
+1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Select an authorization system type** dropdown, select **Azure** or **GCP**.
+1. From the **Select an authorization system** dropdown, select the accounts you want to access.
+1. From the **Search for** dropdown, select **Group**, **User**, or **APP**.
+1. To search for more parameters, you can make a selection from the **User States**, **Privilege Creep Index**, and **Task usage** dropdowns.
+1. Select **Apply**.
+ CloudKnox displays a list of groups, users, and service accounts that match your criteria.
+1. In **Enter a username**, enter or select a user.
+1. In **Enter a group name**, enter or select a group, then select **Apply**.
+1. Make a selection from the results list.
+
+ The table displays the **Username**, **Domain/Account**, **Source**, **Resource**, and **Current role**.
++
+## Add a role
+
+1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Select an authorization system type** dropdown, select **Azure** or **GCP**.
+1. From the **Select an authorization system** dropdown, select the accounts you want to access.
+1. From the **Search for** dropdown, select **Group**, **User**, or **APP**, and then select **Apply**.
+1. Make a selection from the results list.
+
+1. To attach a role, select **Add role**.
+1. In the **Add role** page, from the **Available roles** list, select the plus sign **(+)** to move the role to the **Selected roles** list.
+1. When you have finished adding roles, select **Submit**.
+1. When the following message displays: **Are you sure you want to change permissions?**, select:
+ - **Generate Script** to generate a script where you can manually add/remove the permissions you selected.
+ - **Execute** to change the permission.
+ - **Close** to cancel the action.
+
+## Remove a role
+
+1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Select an authorization system type** dropdown, select **Azure** or **GCP**.
+1. From the **Select an authorization system** dropdown, select the accounts you want to access.
+1. From the **Search for** dropdown, select **Group**, **User**, or **APP**, and then select **Apply**.
+1. Make a selection from the results list.
+
+1. To remove a role, select **Remove role**.
+1. In the **Remove role** page, from the **Available roles** list, select the plus sign **(+)** to move the role to the **Selected roles** list.
+1. When you have finished selecting roles, select **Submit**.
+1. When the following message displays: **Are you sure you want to change permissions?**, select:
+ - **Generate Script** to generate a script where you can manually add/remove the permissions you selected.
+ - **Execute** to change the permission.
+ - **Close** to cancel the action.
+
+## Add a task
+
+1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Select an authorization system type** dropdown, select **Azure** or **GCP**.
+1. From the **Select an authorization system** dropdown, select the accounts you want to access.
+1. From the **Search for** dropdown, select **Group**, **User**, or **APP**, and then select **Apply**.
+1. Make a selection from the results list.
+
+1. To attach a task, select **Add tasks**.
+1. In the **Add tasks** page, from the **Available tasks** list, select the plus sign **(+)** to move the task to the **Selected tasks** list.
+1. When you have finished adding tasks, select **Submit**.
+1. When the following message displays: **Are you sure you want to change permissions?**, select:
+ - **Generate Script** to generate a script where you can manually add/remove the permissions you selected.
+ - **Execute** to change the permission.
+ - **Close** to cancel the action.
+
+## Remove a task
+
+1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Select an authorization system type** dropdown, select **Azure** or **GCP**.
+1. From the **Select an authorization system** dropdown, select the accounts you want to access.
+1. From the **Search for** dropdown, select **Group**, **User**, or **APP**, and then select **Apply**.
+1. Make a selection from the results list.
+
+1. To remove a task, select **Remove tasks**.
+1. In the **Remove tasks** page, from the **Available tasks** list, select the plus sign **(+)** to move the task to the **Selected tasks** list.
+1. When you have finished selecting tasks, select **Submit**.
+1. When the following message displays: **Are you sure you want to change permissions?**, select:
+ - **Generate Script** to generate a script where you can manually add/remove the permissions you selected.
+ - **Execute** to change the permission.
+ - **Close** to cancel the action.
+
+## Next steps
++
+- For information on how to view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](cloudknox-ui-remediation.md).
+- For information on how to create a role/policy, see [Create a role/policy](cloudknox-howto-create-role-policy.md).
+- For information on how to clone a role/policy, see [Clone a role/policy](cloudknox-howto-clone-role-policy.md).
+- For information on how to delete a role/policy, see [Delete a role/policy](cloudknox-howto-delete-role-policy.md).
+- For information on how to modify a role/policy, see [Modify a role/policy](cloudknox-howto-modify-role-policy.md).
+- To view information about roles/policies, see [View information about roles/policies](cloudknox-howto-view-role-policy.md).
+- For information on how to attach and detach permissions for Amazon Web Services (AWS) identities, see [Attach and detach policies for AWS identities](cloudknox-howto-attach-detach-permissions.md).
+- For information on how to revoke high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](cloudknox-howto-revoke-task-readonly-status.md).
- For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](cloudknox-howto-create-approve-privilege-request.md).
active-directory Cloudknox Howto Attach Detach Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-attach-detach-permissions.md
+
+ Title: Attach and detach permissions for users, roles, and groups for Amazon Web Services (AWS) identities in the Remediation dashboard in CloudKnox Permissions Management
+description: How to attach and detach permissions for users, roles, and groups for Amazon Web Services (AWS) identities in the Remediation dashboard in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Attach and detach policies for Amazon Web Services (AWS) identities
++
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how you can attach and detach permissions for users, roles, and groups for Amazon Web Services (AWS) identities using the **Remediation** dashboard.
+
+> [!NOTE]
+> To view the **Remediation** tab, you must have **Viewer**, **Controller**, or **Administrator** permissions. To make changes on this tab, you must have **Controller** or **Administrator** permissions. If you don't have these permissions, contact your system administrator.
+
+## View permissions
+
+1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Select an authorization system type** dropdown, select **AWS**.
+1. From the **Select an authorization system** dropdown, select the accounts you want to access.
+1. From the **Search For** dropdown, select **Group**, **User**, or **Role**.
+1. To search for more parameters, you can make a selection from the **User States**, **Privilege Creep Index**, and **Task usage** dropdowns.
+1. Select **Apply**.
+ CloudKnox displays a list of users, roles, or groups that match your criteria.
+1. In **Enter a username**, enter or select a user.
+1. In **Enter a group name**, enter or select a group, then select **Apply**.
+1. Make a selection from the results list.
+
+ The table displays the related **Username**, **Domain/Account**, **Source**, and **Policy name**.
++
+## Attach policies
+
+1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Select an authorization system type** dropdown, select **AWS**.
+1. In **Enter a username**, enter or select a user.
+1. In **Enter a group name**, enter or select a group, then select **Apply**.
+1. Make a selection from the results list.
+1. To attach a policy, select **Attach policies**.
+1. In the **Attach policies** page, from the **Available policies** list, select the plus sign **(+)** to move the policy to the **Selected policies** list.
+1. When you have finished adding policies, select **Submit**.
+1. When the following message displays: **Are you sure you want to change permissions?**, select:
+ - **Generate Script** to generate a script where you can manually add/remove the permissions you selected.
+ - **Execute** to change the permission.
+ - **Close** to cancel the action.
+
+## Detach policies
+
+1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Select an authorization system type** dropdown, select **AWS**.
+1. In **Enter a username**, enter or select a user.
+1. In **Enter a group name**, enter or select a group, then select **Apply**.
+1. Make a selection from the results list.
+1. To remove a policy, select **Detach policies**.
+1. In the **Detach policies** page, from the **Available policies** list, select the plus sign **(+)** to move the policy to the **Selected policies** list.
+1. When you have finished selecting policies, select **Submit**.
+1. When the following message displays: **Are you sure you want to change permissions?**, select:
+ - **Generate Script** to generate a script where you can manually add/remove the permissions you selected.
+ - **Execute** to change the permission.
+ - **Close** to cancel the action.
+
+## Next steps
++
+- For information on how to view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](cloudknox-ui-remediation.md).
+- For information on how to create a role/policy, see [Create a role/policy](cloudknox-howto-create-role-policy.md).
+- For information on how to clone a role/policy, see [Clone a role/policy](cloudknox-howto-clone-role-policy.md).
+- For information on how to delete a role/policy, see [Delete a role/policy](cloudknox-howto-delete-role-policy.md).
+- For information on how to modify a role/policy, see [Modify a role/policy](cloudknox-howto-modify-role-policy.md).
+- To view information about roles/policies, see [View information about roles/policies](cloudknox-howto-view-role-policy.md).
+- For information on how to revoke high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](cloudknox-howto-revoke-task-readonly-status.md).
+- For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](cloudknox-howto-create-approve-privilege-request.md).
+
active-directory Cloudknox Howto Audit Trail Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-audit-trail-results.md
+
+ Title: Generate an on-demand report from a query in the Audit dashboard in CloudKnox Permissions Management
+description: How to generate an on-demand report from a query in the **Audit** dashboard in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Generate an on-demand report from a query
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how you can generate an on-demand report from a query in the **Audit** dashboard in CloudKnox Permissions Management (CloudKnox). You can:
+
+- Run a report on-demand.
+- Schedule and run a report as often as you want.
+- Share a report with other members of your team and management.
+
+## Generate a custom report on-demand
+
+1. In the CloudKnox home page, select the **Audit** tab.
+
+ CloudKnox displays the query options available to you.
+1. In the **Audit** dashboard, select **Search** to run the query.
+1. Select **Export**.
+
+ CloudKnox generates the report and exports it in comma-separated values (**CSV**) format, portable document format (**PDF**), or Microsoft Excel Open XML Spreadsheet (**XLSX**) format.
+
+<!--
+## Create a schedule to automatically generate and share a report
+
+1. In the **Audit** tab, load the query you want to use to generate your report.
+2. Select **Settings** (the gear icon).
+3. In **Repeat on**, select on which days of the week you want the report to run.
+4. In **Date**, select the date when you want the query to run.
+5. In **hh mm** (time), select the time when you want the query to run.
+6. In **Request file format**, select the file format you want for your report.
+7. In **Share report with people**, enter email addresses for people to whom you want to send the report.
+8. Select **Schedule**.
+
+ CloudKnox generates the report as set in Steps 3 to 6, and emails it to the recipients you specified in Step 7.
++
+## Delete the schedule for a report
+
+1. In the **Audit** tab, load the query whose report schedule you want to delete.
+2. Select the ellipses menu **(…)** on the far right, and then select **Delete schedule**.
+
+ CloudKnox deletes the schedule for running the query. The query itself isn't deleted.
+-->
++
+## Next steps
+
+- For information on how to view how users access information, see [Use queries to see how users access information](cloudknox-ui-audit-trail.md).
+- For information on how to filter and view user activity, see [Filter and query user activity](cloudknox-product-audit-trail.md).
+- For information on how to create a query, see [Create a custom query](cloudknox-howto-create-custom-queries.md).
active-directory Cloudknox Howto Clone Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-clone-role-policy.md
+
+ Title: Clone a role/policy in the Remediation dashboard in CloudKnox Permissions Management
+description: How to clone a role/policy in the Just Enough Permissions (JEP) Controller.
+++++++ Last updated : 02/23/2022+++
+# Clone a role/policy in the Remediation dashboard
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how you can use the **Remediation** dashboard in CloudKnox Permissions Management (CloudKnox) to clone roles/policies for the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems.
+
+> [!NOTE]
+> To view the **Remediation** tab, you must have **Viewer**, **Controller**, or **Administrator** permissions. To make changes on this tab, you must have **Controller** or **Administrator** permissions. If you don't have these permissions, contact your system administrator.
+
+> [!NOTE]
+> Microsoft Azure uses the term *role* for what other Cloud providers call *policy*. CloudKnox automatically makes this terminology change when you select the authorization system type. In the user documentation, we use *role/policy* to refer to both.
+
+## Clone a role/policy
+
+1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Role/Policies** tab.
+1. Select the role/policy you want to clone, and from the **Actions** column, select **Clone**.
+1. **(AWS Only)** In the **Clone** box, the **Clone Resources** and **Clone Conditions** checkboxes are automatically selected.
+ Deselect the boxes if the resources and conditions are different from what is displayed.
+1. Enter a name for each authorization system that was selected in the **Policy Name** boxes, and then select **Next**.
+
+1. If the data collector hasn't been given controller privileges, the following message displays: **Only online/controller-enabled authorization systems can be submitted for cloning.**
+
+ To clone this role manually, download the script and JSON file.
+
+1. Select **Submit**.
+1. Refresh the **Role/Policies** tab to see the role/policy you cloned.
+
+## Next steps
++
+- For information on how to view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](cloudknox-ui-remediation.md).
+- For information on how to create a role/policy, see [Create a role/policy](cloudknox-howto-create-role-policy.md).
+- For information on how to delete a role/policy, see [Delete a role/policy](cloudknox-howto-delete-role-policy.md).
+- For information on how to modify a role/policy, see [Modify a role/policy](cloudknox-howto-modify-role-policy.md).
+- To view information about roles/policies, see [View information about roles/policies](cloudknox-howto-view-role-policy.md).
+- For information on how to attach and detach permissions for AWS identities, see [Attach and detach policies for AWS identities](cloudknox-howto-attach-detach-permissions.md).
+- For information on how to revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](cloudknox-howto-revoke-task-readonly-status.md)
+- For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](cloudknox-howto-create-approve-privilege-request.md).
+- For information on how to view information about roles/policies, see [View information about roles/policies](cloudknox-howto-view-role-policy.md)
active-directory Cloudknox Howto Create Alert Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-create-alert-trigger.md
+
+ Title: Create and view activity alerts and alert triggers in CloudKnox Permissions Management
+description: How to create and view activity alerts and alert triggers in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Create and view activity alerts and alert triggers
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how you can create and view activity alerts and alert triggers in CloudKnox Permissions Management (CloudKnox).
+
+## Create an activity alert trigger
+
+1. In the CloudKnox home page, select **Activity Triggers** (the bell icon).
+1. In the **Activity** tab, select **Create Activity Trigger**.
+1. In the **Alert Name** box, enter a name for your alert.
+1. In **Authorization System Type**, select your authorization system: Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
+1. In **Authorization System**, select **Is** or **In**, and then select one or more accounts and folders.
+1. From the **Select a Type** dropdown, select: **Access Key ID**, **Identity Tag Key**, **Identity Tag Key Value**, **Resource Name**, **Resource Tag Key**, **Resource Tag Key Value**, **Role Name**, **Role Session Name**, **State**, **Task Name**, or **Username**.
+1. From the **Operator** dropdown, select an option:
+
+ - **Is**/**Is Not**: Select in the value field to view a list of all available values. You can either select or enter the required value.
+ - **Contains**/**Not Contains**: Enter any text that the query parameter should or shouldn't contain, for example *CloudKnox*.
+ - **In**/**Not In**: Select in the value field to view a list of all available values, and then select the required values.
+
+1. To add another parameter, select the plus sign **(+)**, then select an operator, and then enter a value.
+
+ To remove a parameter, select the minus sign **(-)**.
+1. To add another activity type, select **Add**, and then enter your parameters.
+1. To save your alert, select **Save**.
+
+ A message displays to confirm your activity trigger has been created.
+
+ The **Triggers** table in the **Alert Triggers** subtab displays your alert trigger.
+
+## View an activity alert
+
+1. In the CloudKnox home page, select **Activity Triggers** (the bell icon).
+1. In the **Activity** tab, select the **Alerts** subtab.
+1. From the **Alert Name** dropdown, select an alert.
+1. From the **Date** dropdown, select **Last 24 Hours**, **Last 2 Days**, **Last Week**, or **Custom Range**.
+
+ If you select **Custom range**, select date and time settings, and then select **Apply**.
+1. To view the alert, select **Apply**
+
+ The **Alerts** table displays information about your alert.
+++
+## View activity alert triggers
+
+1. In the CloudKnox home page, select **Activity triggers** (the bell icon).
+1. In the **Activity** tab, select the **Alert triggers** subtab.
+1. From the **Status** dropdown, select **All**, **Activated** or **Deactivated**, then select **Apply**.
+
+ The **Triggers** table displays the following information:
+
+ - **Alerts**: The name of the alert trigger.
+ - **# of users subscribed**: The number of users who have subscribed to a specific alert trigger.
+
+ - Select a number in this column to view information about the user.
+
+ - **Created by**: The email address of the user who created the alert trigger.
+ - **Modified by**: The email address of the user who last modified the alert trigger.
+ - **Last updated**: The date and time the alert trigger was last updated.
+ - **Subscription**: A switch that displays if the alert is **On** or **Off**.
+
+ - If the column displays **Off**, the current user isn't subscribed to that alert. Switch the toggle to **On** to subscribe to the alert.
+ - The user who creates an alert trigger is automatically subscribed to the alert, and will receive emails about the alert.
+
+1. To see only activated or only deactivated triggers, from the **Status** dropdown, select **Activated** or **Deactivated**, and then select **Apply**.
+
+1. To view other options available to you, select the ellipses (**...**), and then select from the available options.
+
+ If the **Subscription** is **On**, the following options are available:
+
+ - **Edit**: Enables you to modify alert parameters
+
+ > [!NOTE]
+ > Only the user who created the alert can perform the following actions: edit the trigger screen, rename an alert, deactivate an alert, and delete an alert. Changes made by other users aren't saved.
+
+ - **Duplicate**: Create a duplicate of the alert called "**Copy of XXX**".
+ - **Rename**: Enter the new name of the query, and then select **Save.**
+ - **Deactivate**: The alert will still be listed, but will no longer send emails to subscribed users.
+ - **Activate**: Activate the alert trigger and start sending emails to subscribed users.
+ - **Notification settings**: View the **Email** of users who are subscribed to the alert trigger and their **User status**.
+ - **Delete**: Delete the alert.
+
+ If the **Subscription** is **Off**, the following options are available:
+ - **View**: View details of the alert trigger.
+ - **Notification settings**: View the **Email** of users who are subscribed to the alert trigger and their **User status**.
+ - **Duplicate**: Create a duplicate copy of the selected alert trigger.
++++
+## Next steps
+
+- For an overview on activity triggers, see [View information about activity triggers](cloudknox-ui-triggers.md).
+- For information on rule-based anomalies and anomaly triggers, see [Create and view rule-based anomalies and anomaly triggers](cloudknox-product-rule-based-anomalies.md).
+- For information on finding outliers in identity's behavior, see [Create and view statistical anomalies and anomaly triggers](cloudknox-product-statistical-anomalies.md).
+- For information on permission analytics triggers, see [Create and view permission analytics triggers](cloudknox-product-permission-analytics.md).
active-directory Cloudknox Howto Create Approve Privilege Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-create-approve-privilege-request.md
+
+ Title: Create or approve a request for permissions in the Remediation dashboard in CloudKnox Permissions Management
+description: How to create or approve a request for permissions in the Remediation dashboard.
+ Last updated : 02/23/2022
+# Create or approve a request for permissions
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to create or approve a request for permissions in the **Remediation** dashboard in CloudKnox Permissions Management (CloudKnox). You can create and approve requests for the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems.
+
+The **Remediation** dashboard has two privilege-on-demand (POD) workflows you can use:
+- **New Request**: The workflow used by a user to create a request for permissions for a specified duration.
+- **Approver**: The workflow used by an approver to review and approve or reject a user's request for permissions.
++
+> [!NOTE]
+> To view the **Remediation** dashboard, you must have **Viewer**, **Controller**, or **Administrator** permissions. To make changes on this tab, you must have **Controller** or **Administrator** permissions. If you don't have these permissions, contact your system administrator.
+
+## Create a request for permissions
+
+1. On the CloudKnox home page, select the **Remediation** tab, and then select the **My requests** subtab.
+
+ The **My requests** subtab displays the following options:
+    - **Pending**: A list of requests you've made that haven't been reviewed yet.
+ - **Approved**: A list of requests that have been reviewed and approved by the approver. These requests have either already been activated or are in the process of being activated.
+    - **Processed**: A summary of the requests you've created that have been approved (**Done**), **Rejected**, or **Canceled**.
+
+1. To create a request for permissions, select **New request**.
+1. In the **Roles/Tasks** page:
+    1. From the **Select an authorization system type** dropdown, select the authorization system type you want to access: **AWS**, **Azure**, or **GCP**.
+ 1. From the **Select an authorization system** dropdown, select the accounts you want to access.
+    1. From the **Identity** dropdown, select the identity on whose behalf you're requesting access.
+
+    - If the identity you select is a Security Assertion Markup Language (SAML) user, select the user's role in **Role**. (A SAML user accesses the system by assuming a role.)
+
+ - If the identity you select is a local user, to select the policies you want:
+ 1. Select **Request policy(s)**.
+ 1. In **Available policies**, select the policies you want.
+ 1. To select a specific policy, select the plus sign, and then find and select the policy you want.
+
+        The policies you've selected appear in the **Selected policies** box.
+
+ - If the identity you select is a local user, to select the tasks you want:
+ 1. Select **Request Task(s)**.
+ 1. In **Available Tasks**, select the tasks you want.
+ 1. To select a specific task, select the plus sign, and then select the task you want.
+
+        The tasks you've selected appear in the **Selected Tasks** box.
+
+ If the user already has existing policies, they're displayed in **Existing Policies**.
+1. Select **Next**.
+
+1. If you selected **AWS**, the **Scope** page appears.
+
+ 1. In **Select scope**, select:
+ - **All Resources**
+ - **Specific Resources**, and then select the resources you want.
+ - **No Resources**
+    1. In **Request Conditions**:
+       1. Select **JSON** to add a JSON block of code (see the example after these steps).
+       1. Select **Done** to accept the code you've entered, or **Clear** to delete what you've entered and start again.
+    1. In **Effect**, select **Allow** or **Deny**.
+ 1. Select **Next**.
+
+1. The **Confirmation** page appears.
+1. In **Request Summary**, enter a summary for your request.
+1. Optional: In **Note**, enter a note for the approver.
+1. In **Schedule**, select when (how quickly) you want your request to be processed:
+ - **ASAP**
+ - **Once**
+    - In **Create Schedule**, select the **Frequency**, **Date**, **Time**, and the required duration in **For**, and then select **Schedule**.
+ - **Daily**
+ - **Weekly**
+ - **Monthly**
+1. Select **Submit**.
+
+ The following message appears: **Your request has been successfully submitted.**
+
+ The request you submitted is now listed in **Pending Requests**.
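+
+Because this flow targets AWS, the JSON you enter in the **Request Conditions** step is typically written in AWS IAM condition syntax. The following is only a minimal, hypothetical sketch (the IP range and tag value are placeholders, not values produced by CloudKnox; confirm the exact wrapper CloudKnox expects):
+
+```json
+{
+  "Condition": {
+    "IpAddress": { "aws:SourceIp": "203.0.113.0/24" },
+    "StringEquals": { "aws:PrincipalTag/team": "data-eng" }
+  }
+}
+```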
+
+## Approve or reject a request for permissions
+
+1. On the CloudKnox home page, select the **Remediation** tab, and then select the **My requests** subtab.
+1. To view a list of requests that haven't yet been reviewed, select **Pending Requests**.
+1. In the **Request Summary** list, select the ellipses **(…)** menu on the right of a request, and then select:
+
+ - **Details** to view the details of the request.
+ - **Approve** to approve the request.
+ - **Reject** to reject the request.
+
+1. (Optional) Add a note to the requestor, and then select **Confirm**.
+
+ The **Approved** subtab displays a list of requests that have been reviewed and approved by the approver. These requests have either already been activated or are in the process of being activated.
+ The **Processed** subtab displays a summary of the requests that have been approved or rejected, and requests that have been canceled.
++
+## Next steps
++
+- For information on how to view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](cloudknox-ui-remediation.md).
+- For information on how to create a role/policy, see [Create a role/policy](cloudknox-howto-create-role-policy.md).
+- For information on how to clone a role/policy, see [Clone a role/policy](cloudknox-howto-clone-role-policy.md).
+- For information on how to delete a role/policy, see [Delete a role/policy](cloudknox-howto-delete-role-policy.md).
+- For information on how to modify a role/policy, see [Modify a role/policy](cloudknox-howto-modify-role-policy.md).
+- To view information about roles/policies, see [View information about roles/policies](cloudknox-howto-view-role-policy.md).
+- For information on how to attach and detach permissions for Amazon Web Services (AWS) identities, see [Attach and detach policies for AWS identities](cloudknox-howto-attach-detach-permissions.md).
+- For information on how to add and remove roles and tasks for Microsoft Azure and Google Cloud Platform (GCP) identities, see [Add and remove roles and tasks for Azure and GCP identities](cloudknox-howto-attach-detach-permissions.md).
+- For information on how to revoke high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](cloudknox-howto-revoke-task-readonly-status.md)
active-directory Cloudknox Howto Create Custom Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-create-custom-queries.md
+
+ Title: Create a custom query in CloudKnox Permissions Management
+description: How to create a custom query in the Audit dashboard in CloudKnox Permissions Management.
+ Last updated : 02/23/2022
+# Create a custom query
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how you can use the **Audit** dashboard in CloudKnox Permissions Management (CloudKnox) to create custom queries that you can modify, save, and run as often as you want.
+
+## Open the Audit dashboard
+
+- In the CloudKnox home page, select the **Audit** tab.
+
+ CloudKnox displays the query options available to you.
+
+## Create a custom query
+
+1. In the **Audit** dashboard, in the **New Query** subtab, select **Authorization system type**, and then select the authorization systems you want to search: Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
+1. Select the authorization systems you want to search from the **List** and **Folders** box, and then select **Apply**.
+
+1. In the **New Query** box, enter your query parameters, and then select **Add**.
+ For example, to query by a date, select **Date** in the first box. In the second and third boxes, select the down arrow, and then select one of the date-related options.
+
+1. To add parameters, select **Add**, and then select the down arrow in the first box to display a dropdown of available selections. Select the parameter you want.
+1. To add more parameters to the same query, select **Add** (the plus sign), and from the first box, select **And** or **Or**.
+
+    Repeat this step for the second and third boxes to finish entering the parameters.
+1. To change your query as you're creating it, select **Edit** (the pencil icon), and then change the query parameters.
+1. To change the parameter options, select the down arrow in each box to display a dropdown of available selections. Then select the option you want.
+1. To discard your selections, select **Reset query** for the parameter you want to change, and then make your selections again.
+1. When you're ready to run your query, select **Search**.
+1. To save the query, select **Save**.
+
+ CloudKnox saves the query and adds it to the **Saved queries** list.
+
+## Save the query under a new name
+
+1. In the **Audit** dashboard, select the ellipses menu **(…)** on the far right and select **Save as**.
+2. Enter a new name for the query, and then select **Save**.
+
+ CloudKnox saves the query under the new name. Both the new query and the original query display in the **Saved queries** list.
+
+## View a saved query
+
+1. In the **Audit** dashboard, select the down arrow next to **Saved queries**.
+
+ A list of saved queries appears.
+2. Select the query you want to open.
+3. To open the query with the authorization systems you saved with the query, select **Load with the saved authorization systems**.
+4. To open the query with the authorization systems you have currently selected (which may be different from the ones you originally saved), select **Load with the currently selected authorization systems**.
+5. Select **Load Queries**.
+
+ CloudKnox displays details of the query in the **Activity** table. Select a query to see its details:
+
+ - The **Identity details**.
+ - The **Domain** name.
+ - The **Resource name** and **Resource type**.
+ - The **Task name**.
+ - The **Date**.
+ - The **IP address**.
+ - The **Authorization system**.
+
+## View a raw events summary
+
+1. In the **Audit** dashboard, select **View** (the eye icon) to open the **Raw events summary** box.
+
+ The **Raw events summary** box displays **Identity details**, the **Task name**, and the script for your query.
+1. Select **Copy** to copy the script.
+1. Select **X** to close the **Raw events summary** box.
++
+## Run a saved query
+
+1. In the **Audit** dashboard, select the query you want to run.
+
+ CloudKnox displays the results of the query in the **Activity** table.
+
+## Delete a query
+
+1. In the **Audit** dashboard, load the query you want to delete.
+2. Select **Delete**.
+
+ CloudKnox deletes the query. Deleted queries don't display in the **Saved queries** list.
+
+## Rename a query
+
+1. In the **Audit** dashboard, load the query you want to rename.
+2. Select the ellipses menu **(…)** on the far right, and select **Rename**.
+3. Enter a new name for the query, and then select **Save**.
+
+ CloudKnox saves the query under the new name. Both the new query and the original query display in the **Saved queries** list.
+
+## Duplicate a query
+
+1. In the **Audit** dashboard, load the query you want to duplicate.
+2. Select the ellipses menu **(…)** on the far right, and then select **Duplicate**.
+
+ CloudKnox creates a copy of the query. Both the copy of the query and the original query display in the **Saved queries** list.
+
+ You can rename the original or copy of the query, change it, and save it without changing the other query.
+++
+## Next steps
+
+- For information on how to view how users access information, see [Use queries to see how users access information](cloudknox-ui-audit-trail.md).
+- For information on how to filter and view user activity, see [Filter and query user activity](cloudknox-product-audit-trail.md).
+- For information on how to generate an on-demand report from a query, see [Generate an on-demand report from a query](cloudknox-howto-audit-trail-results.md).
active-directory Cloudknox Howto Create Group Based Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-create-group-based-permissions.md
+
+ Title: Select group-based permissions settings in CloudKnox Permissions Management with the User management dashboard
+description: How to select group-based permissions settings in CloudKnox Permissions Management with the User management dashboard.
+ Last updated : 02/23/2022
+# Select group-based permissions settings
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how you can create and manage group-based permissions in CloudKnox Permissions Management (CloudKnox) with the User management dashboard.
+
+> [!NOTE]
+> A CloudKnox Administrator for all authorization systems can create new group-based permissions.
+
+## Select administrative permissions settings for a group
+
+1. To display the **User Management** dashboard, select **User** (your initials) in the upper right of the screen, and then select **User Management**.
+1. Select the **Groups** tab, and then select **Create Permission** in the upper right of the table.
+1. In the **Set Group Permission** box, begin typing the name of an **Azure Active Directory Security Group** in your tenant.
+
+1. Select the permission setting you want:
+ - **Admin for all authorization system types** provides **View**, **Control**, and **Approve** permissions for all authorization system types.
+ - **Admin for selected authorization system types** provides **View**, **Control**, and **Approve** permissions for selected authorization system types.
+ - **Custom** allows you to set **View**, **Control**, and **Approve** permissions for the authorization system types that you select.
+1. Select **Next**.
+
+1. If you selected **Admin for all authorization system types**:
+   - Select the identities for each authorization system that you want members of this group to make requests on.
+
+1. If you selected **Admin for selected authorization system types**:
+   - Select **Viewer**, **Controller**, or **Approver** for the **Authorization system types** you want.
+   - Select **Next**, and then select the identities for each authorization system that you want members of this group to make requests on.
+
+1. If you selected **Custom**, select the **Authorization system types** you want.
+   - Select **Viewer**, **Controller**, or **Approver** for the **Authorization Systems** you want.
+   - Select **Next**, and then select the identities for each authorization system that you want members of this group to make requests on.
+
+1. Select **Save**. The following message appears: **New group has been created successfully.**
+1. To see the group you created in the **Groups** table, refresh the page.
+
+## Next steps
+
+- For information about how to manage user information, see [Manage users and groups with the User management dashboard](cloudknox-ui-user-management.md).
+- For information about how to view information about active and completed tasks, see [View information about active and completed tasks](cloudknox-ui-tasks.md).
+- For information about how to view personal and organization information, see [View personal and organization information](cloudknox-product-account-settings.md).
+
active-directory Cloudknox Howto Create Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-create-role-policy.md
+
+ Title: Create a role/policy in the Remediation dashboard in CloudKnox Permissions Management
+description: How to create a role/policy in the Remediation dashboard in CloudKnox Permissions Management.
+ Last updated : 02/23/2022
+# Create a role/policy in the Remediation dashboard
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how you can use the **Remediation** dashboard in CloudKnox Permissions Management (CloudKnox) to create roles/policies for the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems.
+
+> [!NOTE]
+> To view the **Remediation** tab, you must have **Viewer**, **Controller**, or **Administrator** permissions. To make changes on this tab, you must have **Controller** or **Administrator** permissions. If you don't have these permissions, contact your system administrator.
+
+> [!NOTE]
+> Microsoft Azure uses the term *role* for what other cloud providers call *policy*. CloudKnox automatically makes this terminology change when you select the authorization system type. In the user documentation, we use *role/policy* to refer to both.
+
+## Create a policy for AWS
+
+1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Role/Policies** tab.
+1. Use the dropdown lists to select the **Authorization System Type** and **Authorization System**.
+1. Select **Create policy**.
+1. On the **Details** page, the **Authorization System Type** and **Authorization System** are pre-populated from your previous settings.
+ - To change the settings, make a selection from the dropdown.
+1. Under **How would you like to create the policy?**, select the required option:
+
+ - **Activity of user(s)**: Allows you to create a policy based on user activity.
+ - **Activity of group(s)**: Allows you to create a policy based on the aggregated activity of all the users belonging to the group(s).
+ - **Activity of resource(s)**: Allows you to create a policy based on the activity of a resource, for example, an EC2 instance.
+ - **Activity of role**: Allows you to create a policy based on the aggregated activity of all the users that assumed the role.
+ - **Activity of tag(s)**: Allows you to create a policy based on the aggregated activity of all the tags.
+ - **Activity of Lambda function**: Allows you to create a new policy based on the Lambda function.
+ - **From existing policy**: Allows you to create a new policy based on an existing policy.
+ - **New policy**: Allows you to create a new policy from scratch.
+1. In **Tasks performed in the last**, select the duration: **90 days**, **60 days**, **30 days**, **7 days**, or **1 day**.
+1. Depending on your preference, select or deselect **Include Access Advisor data.**
+1. In **Settings**, from the **Available** column, select the plus sign **(+)** to move the identity into the **Selected** column, and then select **Next**.
+
+1. On the **Tasks** page, from the **Available** column, select the plus sign **(+)** to move the task into the **Selected** column.
+ - To add a whole category, select a category.
+ - To add individual items from a category, select the down arrow on the left of the category name, and then select individual items.
+1. In **Resources**, select **All Resources** or **Specific Resources**.
+
+ If you select **Specific Resources**, a list of available resources appears. Find the resources you want to add, and then select **Add**.
+1. In **Request Conditions**, select **JSON**.
+1. In **Effect**, select **Allow** or **Deny**, and then select **Next**.
+1. In **Policy name:**, enter a name for your policy.
+1. To add another statement to your policy, select **Add statement**, and then, from the list of **Statements**, select a statement.
+1. Review your **Task**, **Resources**, **Request Conditions**, and **Effect** settings, and then select **Next**.
++
+1. On the **Preview** page, review the script to confirm it's what you want (see the example policy after these steps).
+1. If your controller isn't enabled, select **Download JSON** or **Download Script** to download the code and run it yourself.
+
+ If your controller is enabled, skip this step.
+1. Select **Split policy**, and then select **Submit**.
+
+    A message confirms that your policy has been submitted for creation.
+
+1. The [**CloudKnox Tasks**](cloudknox-ui-tasks.md) pane appears on the right.
+ - The **Active** tab displays a list of the policies CloudKnox is currently processing.
+ - The **Completed** tab displays a list of the policies CloudKnox has completed.
+1. Refresh the **Role/Policies** tab to see the policy you created.
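+
+If you select **Download JSON**, the file you apply yourself is an AWS IAM-style policy document assembled from your selections. The following is only a rough, hypothetical illustration of that structure (the action, resource ARN, and condition are placeholders, not actual CloudKnox output):
+
+```json
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Action": ["s3:GetObject"],
+      "Resource": "arn:aws:s3:::example-bucket/*",
+      "Condition": {
+        "IpAddress": { "aws:SourceIp": "203.0.113.0/24" }
+      }
+    }
+  ]
+}
+```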
+++
+## Create a role for Azure
+
+1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Role/Policies** tab.
+1. Use the dropdown lists to select the **Authorization System Type** and **Authorization System**.
+1. Select **Create role**.
+1. On the **Details** page, the **Authorization System Type** and **Authorization System** are pre-populated from your previous settings.
+ - To change the settings, select the box and make a selection from the dropdown.
+1. Under **How would you like to create the role?**, select the required option:
+
+ - **Activity of user(s)**: Allows you to create a role based on user activity.
+ - **Activity of group(s)**: Allows you to create a role based on the aggregated activity of all the users belonging to the group(s).
+ - **Activity of app(s)**: Allows you to create a role based on the aggregated activity of all apps.
+ - **From existing role**: Allows you to create a new role based on an existing role.
+ - **New role**: Allows you to create a new role from scratch.
+
+1. In **Tasks performed in the last**, select the duration: **90 days**, **60 days**, **30 days**, **7 days**, or **1 day**.
+1. Depending on your preference:
+ - Select or deselect **Ignore non-Microsoft read actions**.
+ - Select or deselect **Include read-only tasks**.
+1. In **Settings**, from the **Available** column, select the plus sign **(+)** to move the identity into the **Selected** column, and then select **Next**.
+
+1. On the **Tasks** page, in **Role name:**, enter a name for your role.
+1. From the **Available** column, select the plus sign **(+)** to move the task into the **Selected** column.
+ - To add a whole category, select a category.
+ - To add individual items from a category, select the down arrow on the left of the category name, and then select individual items.
+1. Select **Next**.
+
+1. On the **Preview** page, review:
+ - The list of selected **Actions** and **Not actions**.
+    - The **JSON** or **Script** to confirm it's what you want (see the example role definition after these steps).
+1. If your controller isn't enabled, select **Download JSON** or **Download Script** to download the code and run it yourself.
+
+ If your controller is enabled, skip this step.
+
+1. Select **Submit**.
+
+    A message confirms that your role has been submitted for creation.
+
+1. The [**CloudKnox Tasks**](cloudknox-ui-tasks.md) pane appears on the right.
+ - The **Active** tab displays a list of the policies CloudKnox is currently processing.
+ - The **Completed** tab displays a list of the policies CloudKnox has completed.
+1. Refresh the **Role/Policies** tab to see the role you created.
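+
+An Azure custom role built from selections like these might look roughly as follows; this is a hypothetical sketch of the custom role definition format, and the role name, actions, and subscription ID are placeholders rather than actual CloudKnox output:
+
+```json
+{
+  "Name": "Storage reader (example)",
+  "IsCustom": true,
+  "Description": "Example custom role built from observed activity.",
+  "Actions": [
+    "Microsoft.Storage/storageAccounts/read",
+    "Microsoft.Storage/storageAccounts/blobServices/containers/read"
+  ],
+  "NotActions": [],
+  "AssignableScopes": [
+    "/subscriptions/00000000-0000-0000-0000-000000000000"
+  ]
+}
+```
+
+In Azure RBAC, broad grants with carve-outs are usually expressed by pairing a wildcard in `Actions` with specific entries in `NotActions`, which is why the preview lists both **Actions** and **Not actions**.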
+
+## Create a role for GCP
+
+1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Role/Policies** tab.
+1. Use the dropdown lists to select the **Authorization System Type** and **Authorization System**.
+1. Select **Create role**.
+1. On the **Details** page, the **Authorization System Type** and **Authorization System** are pre-populated from your previous settings.
+ - To change the settings, select the box and make a selection from the dropdown.
+1. Under **How would you like to create the role?**, select the required option:
+
+ - **Activity of user(s)**: Allows you to create a role based on user activity.
+ - **Activity of group(s)**: Allows you to create a role based on the aggregated activity of all the users belonging to the group(s).
+ - **Activity of service account(s)**: Allows you to create a role based on the aggregated activity of all service accounts.
+ - **From existing role**: Allows you to create a new role based on an existing role.
+ - **New role**: Allows you to create a new role from scratch.
+
+1. In **Tasks performed in the last**, select the duration: **90 days**, **60 days**, **30 days**, **7 days**, or **1 day**.
+1. If you selected **Activity of service account(s)** in the previous step, select or deselect **Collect activity across all GCP authorization systems.**
+1. From the **Available** column, select the plus sign **(+)** to move the identity into the **Selected** column, and then select **Next**.
++
+1. On the **Tasks** page, in **Role name:**, enter a name for your role.
+1. From the **Available** column, select the plus sign **(+)** to move the task into the **Selected** column.
+ - To add a whole category, select a category.
+ - To add individual items from a category, select the down arrow on the left of the category name, and then select individual items.
+1. Select **Next**.
+1. In **Role name:**, enter a name for your role.
+1. To add another statement to your role, select **Add statement**, and then, from the list of **Statements**, select a statement.
+1. Review your **Task**, **Resources**, **Request Conditions**, and **Effect** settings, and then select **Next**.
++
+1. On the **Preview** page, review:
+ - The list of selected **Actions**.
+    - The **YAML** or **Script** to confirm it's what you want (see the example after these steps).
+1. If your controller isn't enabled, select **Download YAML** or **Download Script** to download the code and run it yourself.
+1. Select **Submit**.
+    A message confirms that your role has been submitted for creation.
+
+1. The [**CloudKnox Tasks**](cloudknox-ui-tasks.md) pane appears on the right.
+
+ - The **Active** tab displays a list of the policies CloudKnox is currently processing.
+ - The **Completed** tab displays a list of the policies CloudKnox has completed.
+1. Refresh the **Role/Policies** tab to see the role you created.
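+
+The **Preview** page shows the GCP role definition as YAML; the same fields can also be expressed as JSON, which the Google Cloud IAM roles API accepts. The following is a minimal, hypothetical sketch (the title and permissions are placeholders, not actual CloudKnox output):
+
+```json
+{
+  "title": "Storage viewer (example)",
+  "description": "Example custom role built from observed activity.",
+  "includedPermissions": [
+    "storage.buckets.get",
+    "storage.objects.get",
+    "storage.objects.list"
+  ],
+  "stage": "GA"
+}
+```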
++
+## Next steps
+
+- For information on how to view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](cloudknox-ui-remediation.md).
+- For information on how to modify a role/policy, see [Modify a role/policy](cloudknox-howto-modify-role-policy.md).
+- For information on how to clone a role/policy, see [Clone a role/policy](cloudknox-howto-clone-role-policy.md).
+- For information on how to delete a role/policy, see [Delete a role/policy](cloudknox-howto-delete-role-policy.md).
+- To view information about roles/policies, see [View information about roles/policies](cloudknox-howto-view-role-policy.md).
+- For information on how to attach and detach permissions for AWS identities, see [Attach and detach policies for AWS identities](cloudknox-howto-attach-detach-permissions.md).
+- For information on how to revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](cloudknox-howto-revoke-task-readonly-status.md)
+- For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](cloudknox-howto-create-approve-privilege-request.md).
+- For information on how to view information about roles/policies, see [View information about roles/policies](cloudknox-howto-view-role-policy.md)
active-directory Cloudknox Howto Create Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-create-rule.md
+
+ Title: Create a rule in the Autopilot dashboard in CloudKnox Permissions Management
+description: How to create a rule in the Autopilot dashboard in CloudKnox Permissions Management.
+ Last updated : 02/23/2022
+# Create a rule in the Autopilot dashboard
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to create a rule in the CloudKnox Permissions Management (CloudKnox) **Autopilot** dashboard.
+
+> [!NOTE]
+> Only users with **Administrator** permissions can view and make changes on the Autopilot tab. If you don't have these permissions, contact your system administrator.
+
+## Create a rule
+
+1. In the CloudKnox home page, select the **Autopilot** tab.
+1. In the **Autopilot** dashboard, from the **Authorization system types** dropdown, select Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
+1. From the **Authorization system** dropdown, in the **List** and **Folders** box, select the account and folder names that you want, and then select **Apply**.
+1. In the **Autopilot** dashboard, select **New rule**.
+1. In the **Rule name** box, enter a name for your rule.
+1. Select **AWS**, **Azure**, **GCP**, and then select **Next**.
+
+1. Select **Authorization systems**, and then select **All** or the account names that you want.
+1. From the **Folders** dropdown, select a folder, and then select **Apply**.
+
+ To change your folder settings, select **Reset**.
+
+   - The **Status** column shows whether the authorization system is **Online** or **Offline**.
+   - The **Controller** column shows whether the controller is **Enabled** or **Not enabled**.
++
+1. Select **Configure**, and then select the following parameters for your rule:
+
+ - **Role created on is**: Select the duration in days.
+ - **Role last used on is**: Select the duration in days when the role was last used.
+ - **Cross account role**: Select **True** or **False**.
+
+1. Select **Mode**, and then, if you want recommendations to be generated and applied manually, select **On-demand**.
+1. Select **Save**.
+
+ The following information displays in the **Autopilot rules** table:
+
+ - **Rule Name**: The name of the rule.
+   - **State**: The status of the rule: idle (not being used) or active (being used).
+ - **Rule Type**: The type of rule being applied.
+ - **Mode**: The status of the mode: on-demand or not.
+ - **Last Generated**: The date and time the rule was last generated.
+ - **Created By**: The email address of the user who created the rule.
+ - **Last Modified On**: The date and time the rule was last modified.
+ - **Subscription**: Provides an **On** or **Off** switch that allows you to receive email notifications when recommendations have been generated, applied, or unapplied.
++++
+## Next steps
+
+- For more information about viewing rules, see [View roles in the Autopilot dashboard](cloudknox-ui-autopilot.md).
+- For information about generating, viewing, and applying rule recommendations for rules, see [Generate, view, and apply rule recommendations for rules](cloudknox-howto-recommendations-rule.md).
+- For information about notification settings for rules, see [View notification settings for a rule](cloudknox-howto-notifications-rule.md).
active-directory Cloudknox Howto Delete Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-delete-role-policy.md
+
+ Title: Delete a role/policy in the Remediation dashboard in CloudKnox Permissions Management
+description: How to delete a role/policy in the Just Enough Permissions (JEP) Controller.
+ Last updated : 02/23/2022
+# Delete a role/policy in the Remediation dashboard
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how you can use the **Remediation** dashboard in CloudKnox Permissions Management (CloudKnox) to delete roles/policies for the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems.
+
+> [!NOTE]
+> To view the **Remediation** dashboard, you must have **Viewer**, **Controller**, or **Administrator** permissions. To make changes on this tab, you must have **Controller** or **Administrator** permissions. If you don't have these permissions, contact your system administrator.
+
+> [!NOTE]
+> Microsoft Azure uses the term *role* for what other Cloud providers call *policy*. CloudKnox automatically makes this terminology change when you select the authorization system type. In the user documentation, we use *role/policy* to refer to both.
+
+## Delete a role/policy
+
+1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Role/Policies** subtab.
+1. Select the role/policy you want to delete, and from the **Actions** column, select **Delete**.
+
+ You can only delete a role/policy if it isn't assigned to an identity.
+
+ You can't delete system roles/policies.
+
+1. On the **Preview** page, review the role/policy information to make sure you want to delete it, and then select **Submit**.
+
+## Next steps
++
+- For information on how to view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](cloudknox-ui-remediation.md).
+- For information on how to create a role/policy, see [Create a role/policy](cloudknox-howto-create-role-policy.md).
+- For information on how to clone a role/policy, see [Clone a role/policy](cloudknox-howto-clone-role-policy.md).
+- For information on how to modify a role/policy, see [Modify a role/policy](cloudknox-howto-modify-role-policy.md).
+- To view information about roles/policies, see [View information about roles/policies](cloudknox-howto-view-role-policy.md).
+- For information on how to attach and detach permissions for AWS identities, see [Attach and detach policies for AWS identities](cloudknox-howto-attach-detach-permissions.md).
+- For information on how to revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](cloudknox-howto-revoke-task-readonly-status.md)
+- For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](cloudknox-howto-create-approve-privilege-request.md).
+- For information on how to view information about roles/policies, see [View information about roles/policies](cloudknox-howto-view-role-policy.md)
active-directory Cloudknox Howto Modify Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-modify-role-policy.md
+
+ Title: Modify a role/policy in the Remediation dashboard in CloudKnox Permissions Management
+description: How to modify a role/policy in the Remediation dashboard in CloudKnox Permissions Management.
+ Last updated : 02/23/2022
+# Modify a role/policy in the Remediation dashboard
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how you can use the **Remediation** dashboard in CloudKnox Permissions Management (CloudKnox) to modify roles/policies for the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems.
+
+> [!NOTE]
+> To view the **Remediation** tab, you must have **Viewer**, **Controller**, or **Administrator** permissions. To make changes on this tab, you must have **Controller** or **Administrator** permissions. If you don't have these permissions, contact your system administrator.
+
+> [!NOTE]
+> Microsoft Azure uses the term *role* for what other cloud providers call *policy*. CloudKnox automatically makes this terminology change when you select the authorization system type. In the user documentation, we use *role/policy* to refer to both.
+
+## Modify a role/policy
+
+1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Role/Policies** tab.
+1. Select the role/policy you want to modify, and from the **Actions** column, select **Modify**.
+
+ You can't modify **System** policies and roles.
+
+1. On the **Statements** page, make your changes to the **Tasks**, **Resources**, **Request conditions**, and **Effect** sections as required, and then select **Next**.
+
+1. Review the changes to the JSON or script on the **Preview** page, and then select **Submit**.
+
+## Next steps
+
+- For information on how to view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](cloudknox-ui-remediation.md).
+- For information on how to create a role/policy, see [Create a role/policy](cloudknox-howto-create-role-policy.md).
+- For information on how to clone a role/policy, see [Clone a role/policy](cloudknox-howto-clone-role-policy.md).
+- For information on how to delete a role/policy, see [Delete a role/policy](cloudknox-howto-delete-role-policy.md).
+- To view information about roles/policies, see [View information about roles/policies](cloudknox-howto-view-role-policy.md).
+- For information on how to attach and detach permissions for AWS identities, see [Attach and detach policies for AWS identities](cloudknox-howto-attach-detach-permissions.md).
+- For information on how to revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](cloudknox-howto-revoke-task-readonly-status.md)
+- For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](cloudknox-howto-create-approve-privilege-request.md).
+- For information on how to view information about roles/policies, see [View information about roles/policies](cloudknox-howto-view-role-policy.md)
active-directory Cloudknox Howto Notifications Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-notifications-rule.md
+
+ Title: View notification settings for a rule in the Autopilot dashboard in CloudKnox Permissions Management
+description: How to view notification settings for a rule in the Autopilot dashboard in CloudKnox Permissions Management.
+ Last updated : 02/23/2022
+# View notification settings for a rule in the Autopilot dashboard
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to view notification settings for a rule in the CloudKnox Permissions Management (CloudKnox) **Autopilot** dashboard.
+
+> [!NOTE]
+> Only users with **Administrator** permissions can view and make changes on the Autopilot tab. If you don't have these permissions, contact your system administrator.
+
+## View notification settings for a rule
+
+1. In the CloudKnox home page, select the **Autopilot** tab.
+1. In the **Autopilot** dashboard, from the **Authorization system types** dropdown, select Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
+1. From the **Authorization system** dropdown, in the **List** and **Folders** box, select the account and folder names that you want, and then select **Apply**.
+1. In the **Autopilot** dashboard, select a rule.
+1. In the far right of the row, select the ellipses **(...)**.
+1. To view notification settings for a rule, select **Notification settings**.
+
+ CloudKnox displays a list of subscribed users. These users are signed up to receive notifications for the selected rule.
+
+1. To close the **Notification settings** box, select **Close**.
++
+## Next steps
+
+- For more information about viewing rules, see [View roles in the Autopilot dashboard](cloudknox-ui-autopilot.md).
+- For information about creating rules, see [Create a rule](cloudknox-howto-create-rule.md).
+- For information about generating, viewing, and applying rule recommendations for rules, see [Generate, view, and apply rule recommendations for rules](cloudknox-howto-recommendations-rule.md).
active-directory Cloudknox Howto Recommendations Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-recommendations-rule.md
+
+ Title: Generate, view, and apply rule recommendations in the Autopilot dashboard in CloudKnox Permissions Management
+description: How to generate, view, and apply rule recommendations in the Autopilot dashboard in CloudKnox Permissions Management.
+ Last updated : 02/23/2022
+# Generate, view, and apply rule recommendations in the Autopilot dashboard
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to generate and view rule recommendations in the CloudKnox Permissions Management (CloudKnox) **Autopilot** dashboard.
+
+> [!NOTE]
+> Only users with **Administrator** permissions can view and make changes on the Autopilot tab. If you don't have these permissions, contact your system administrator.
+
+## Generate rule recommendations
+
+1. In the CloudKnox home page, select the **Autopilot** tab.
+1. In the **Autopilot** dashboard, from the **Authorization system types** dropdown, select Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
+1. From the **Authorization system** dropdown, in the **List** and **Folders** box, select the account and folder names that you want, and then select **Apply**.
+1. In the **Autopilot** dashboard, select a rule.
+1. In the far right of the row, select the ellipses **(...)**.
+1. To generate recommendations for each user and the authorization system, select **Generate recommendations**.
+
+ Only the user who created the selected rule can generate a recommendation.
+1. View your recommendations in the **Recommendations** subtab.
+1. Select **Close** to close the **Recommendations** subtab.
+
+## View rule recommendations
+
+1. In the CloudKnox home page, select the **Autopilot** tab.
+1. In the **Autopilot** dashboard, from the **Authorization system types** dropdown, select Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
+1. From the **Authorization system** dropdown, in the **List** and **Folders** box, select the account and folder names that you want, and then select **Apply**.
+1. In the **Autopilot** dashboard, select a rule.
+1. In the far right of the row, select the ellipses **(...)**.
+
+1. To view recommendations for each user and the authorization system, select **View recommendations**.
+
+ CloudKnox displays the recommendations for each user and authorization system in the **Recommendations** subtab.
+
+1. Select **Close** to close the **Recommendations** subtab.
+
+## Apply rule recommendations
+
+1. In the CloudKnox home page, select the **Autopilot** tab.
+1. In the **Autopilot** dashboard, from the **Authorization system types** dropdown, select Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
+1. From the **Authorization system** dropdown, in the **List** and **Folders** box, select the account and folder names that you want, and then select **Apply**.
+1. In the **Autopilot** dashboard, select a rule.
+1. In the far right of the row, select the ellipses **(...)**.
+
+1. To view recommendations for each user and the authorization system, select **View recommendations**.
+
+ CloudKnox displays the recommendations for each user and authorization system in the **Recommendations** subtab.
+
+1. To apply a recommendation, select the **Apply recommendations** subtab, and then select a recommendation.
+1. Select **Close** to close the **Recommendations** subtab.
+
+## Unapply rule recommendations
+
+1. In the CloudKnox home page, select the **Autopilot** tab.
+1. In the **Autopilot** dashboard, from the **Authorization system types** dropdown, select Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
+1. From the **Authorization system** dropdown, in the **List** and **Folders** box, select the account and folder names that you want, and then select **Apply**.
+1. In the **Autopilot** dashboard, select a rule.
+1. In the far right of the row, select the ellipses **(...)**.
+
+1. To view recommendations for each user and the authorization system, select **View recommendations**.
+
+ CloudKnox displays the recommendations for each user and authorization system in the **Recommendations** subtab.
+
+1. To remove a recommendation, select the **Unapply recommendations** subtab, and then select a recommendation.
+1. Select **Close** to close the **Recommendations** subtab.
++
+## Next steps
+
+- For more information about viewing rules, see [View roles in the Autopilot dashboard](cloudknox-ui-autopilot.md).
+- For information about creating rules, see [Create a rule](cloudknox-howto-create-rule.md).
+- For information about notification settings for rules, see [View notification settings for a rule](cloudknox-howto-notifications-rule.md).
active-directory Cloudknox Howto Revoke Task Readonly Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-revoke-task-readonly-status.md
+
+ Title: Revoke access to high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities in the Remediation dashboard in CloudKnox Permissions Management
+description: How to revoke access to high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities in the Remediation dashboard in CloudKnox Permissions Management.
+ Last updated : 02/23/2022
+# Revoke access to high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities
++
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how you can revoke high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities using the **Remediation** dashboard.
+
+> [!NOTE]
+> To view the **Remediation** tab, you must have **Viewer**, **Controller**, or **Administrator** permissions. To make changes on this tab, you must have **Controller** or **Administrator** permissions. If you don't have these permissions, contact your system administrator.
+
+## View an identity's permissions
+
+1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Select an authorization system type** dropdown, select **Azure** or **GCP**.
+1. From the **Select an authorization system** dropdown, select the accounts you want to access.
+1. From the **Search for** dropdown, select **Group**, **User**, or **APP**.
+1. To search for more parameters, you can make a selection from the **User States**, **Privilege Creep Index**, and **Task usage** dropdowns.
+1. Select **Apply**.
+
+ CloudKnox displays a list of groups, users, and service accounts that match your criteria.
+1. In **Enter a username**, enter or select a user.
+1. In **Enter a group name**, enter or select a group, then select **Apply**.
+1. Make a selection from the results list.
+
+    The table displays the **Username**, **Domain/Account**, **Source**, **Resource**, and **Current role**.
++
+## Revoke an identity's access to unused tasks
+
+1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Select an authorization system type** dropdown, select **Azure** or **GCP**.
+1. From the **Select an authorization system** dropdown, select the accounts you want to access.
+1. From the **Search for** dropdown, select **Group**, **User**, or **APP**, and then select **Apply**.
+1. Make a selection from the results list.
+
+1. To revoke an identity's access to tasks they aren't using, select **Revoke unused tasks**.
+1. When the following message displays: **Are you sure you want to change permissions?**, select:
+ - **Generate Script** to generate a script where you can manually add/remove the permissions you selected.
+ - **Execute** to change the permission.
+ - **Close** to cancel the action.
+
+## Revoke an identity's access to high-risk tasks
+
+1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Select an authorization system type** dropdown, select **Azure** or **GCP**.
+1. From the **Select an authorization system** dropdown, select the accounts you want to access.
+1. From the **Search for** dropdown, select **Group**, **User**, or **APP**, and then select **Apply**.
+1. Make a selection from the results list.
+
+1. To revoke an identity's access to high-risk tasks, select **Revoke high-risk tasks**.
+1. When the following message displays: **Are you sure you want to change permissions?**, select:
+ - **Generate Script** to generate a script where you can manually add/remove the permissions you selected.
+ - **Execute** to change the permission.
+ - **Close** to cancel the action.
+
+## Revoke an identity's ability to delete tasks
+
+1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Select an authorization system type** dropdown, select **Azure** or **GCP**.
+1. From the **Select an authorization system** dropdown, select the accounts you want to access.
+1. From the **Search for** dropdown, select **Group**, **User**, or **APP**, and then select **Apply**.
+1. Make a selection from the results list.
+
+1. To revoke an identity's ability to delete tasks, select **Revoke delete tasks**.
+1. When the following message displays: **Are you sure you want to change permissions?**, select:
+ - **Generate Script** to generate a script where you can manually add/remove the permissions you selected.
+ - **Execute** to change the permission.
+ - **Close** to cancel the action.
+
+## Assign read-only status to an identity
+
+1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Select an authorization system type** dropdown, select **Azure** or **GCP**.
+1. From the **Select an authorization system** dropdown, select the accounts you want to access.
+1. From the **Search for** dropdown, select **Group**, **User**, or **APP**, and then select **Apply**.
+1. Make a selection from the results list.
+
+1. To assign read-only status to an identity, select **Assign read-only status**.
+1. When the following message displays: **Are you sure you want to change permissions?**, select:
+ - **Generate Script** to generate a script where you can manually add/remove the permissions you selected.
+ - **Execute** to change the permission.
+ - **Close** to cancel the action.
+
+
+## Next steps
+
+- For information on how to view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](cloudknox-ui-remediation.md).
+- For information on how to create a role/policy, see [Create a role/policy](cloudknox-howto-create-role-policy.md).
+- For information on how to clone a role/policy, see [Clone a role/policy](cloudknox-howto-clone-role-policy.md).
+- For information on how to delete a role/policy, see [Delete a role/policy](cloudknox-howto-delete-role-policy.md).
+- For information on how to modify a role/policy, see [Modify a role/policy](cloudknox-howto-modify-role-policy.md).
+- To view information about roles/policies, see [View information about roles/policies](cloudknox-howto-view-role-policy.md).
+- For information on how to attach and detach permissions for AWS identities, see [Attach and detach policies for AWS identities](cloudknox-howto-attach-detach-permissions.md).
+- For information on how to add and remove roles and tasks for Azure and GCP identities, see [Add and remove roles and tasks for Azure and GCP identities](cloudknox-howto-attach-detach-permissions.md).
+- For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](cloudknox-howto-create-approve-privilege-request.md).
active-directory Cloudknox Howto View Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-howto-view-role-policy.md
+
+ Title: View information about roles/policies in the Remediation dashboard in CloudKnox Permissions Management
+description: How to view and filter information about roles/policies in the Remediation dashboard in CloudKnox Permissions Management.
+ Last updated : 02/23/2022
+# View information about roles/policies in the Remediation dashboard
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Remediation** dashboard in CloudKnox Permissions Management (CloudKnox) enables system administrators to view, adjust, and remediate excessive permissions based on a user's activity data. You can use the **Roles/Policies** subtab in the dashboard to view information about roles and policies in the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems.
+
+> [!NOTE]
+> To view the **Remediation dashboard** tab, you must have **Viewer**, **Controller**, or **Administrator** permissions. To make changes on this tab, you must have **Controller** or **Administrator** permissions. If you don't have these permissions, contact your system administrator.
+
+> [!NOTE]
+> Microsoft Azure uses the term *role* for what other Cloud providers call *policy*. CloudKnox automatically makes this terminology change when you select the authorization system type. In the user documentation, we use *role/policy* to refer to both.
++
+## View information about roles/policies
+
+1. On the CloudKnox home page, select the **Remediation** tab, and then select the **Role/Policies** subtab.
+
+    The **Role/Policies list** displays a list of existing roles/policies and the following information about each role/policy:
+    - **Role/Policy name**: The name of the roles/policies available to you.
+    - **Role/Policy type**: **Custom**, **System**, or **CloudKnox only**.
+    - **Actions**: The type of action you can perform on the role/policy: **Clone**, **Modify**, or **Delete**.
++
+1. To display details about the role/policy and view its assigned tasks and identities, select the arrow to the left of the role/policy name.
+
+ The **Tasks** list appears, displaying:
+ - A list of **Tasks**.
+ - **For AWS:**
+ - The **Users**, **Groups**, and **Roles** the task is **Directly assigned to**.
+      - The **Group members** and **Role identities** the task is **Indirectly accessible by**.
+
+ - **For Azure:**
+ - The **Users**, **Groups**, **Enterprise applications** and **Managed identities** the task is **Directly assigned to**.
+      - The **Group members** the task is **Indirectly accessible by**.
+
+ - **For GCP:**
+ - The **Users**, **Groups**, and **Service accounts** the task is **Directly assigned to**.
+      - The **Group members** the task is **Indirectly accessible by**.
+
+1. To close the role/policy details, select the arrow to the left of the role/policy name.
+
+## Export information about roles/policies
+
+- **Export CSV**: Select this option to export the displayed list of roles/policies as a comma-separated values (CSV) file.
+
+ When the file is successfully exported, a message appears: **Exported successfully.**
+
+ - Check your email for a message from the CloudKnox Customer Success Team. This email contains a link to:
+ - The **Role Policy Details** report in CSV format.
+ - The **Reports** dashboard where you can configure how and when you can automatically receive reports.
++++
+## Filter information about roles/policies
+
+1. On the CloudKnox home page, select the **Remediation** dashboard, and then select the **Role/Policies** tab.
+1. To filter the roles/policies, select from the following options:
+
+ - **Authorization system type**: Select **AWS**, **Azure**, or **GCP**.
+ - **Authorization system**: Select the accounts you want.
+ - **Role/Policy type**: Select from the following options:
+
+ - **All**: All managed roles/policies.
+ - **Custom**: A customer-managed role/policy.
+ - **System**: A cloud service provider-managed role/policy.
+ - **CloudKnox only**: A role/policy created by CloudKnox.
+
+ - **Role/Policy status**: Select **All**, **Assigned**, or **Unassigned**.
+ - **Role/Policy usage**: Select **All** or **Unused**.
+1. Select **Apply**.
+
+ To discard your changes, select **Reset filter**.
++
+## Next steps
+
+- For information on how to view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permissions in the Remediation dashboard](cloudknox-ui-remediation.md).
+- For information on how to create a role/policy, see [Create a role/policy](cloudknox-howto-create-role-policy.md).
+- For information on how to clone a role/policy, see [Clone a role/policy](cloudknox-howto-clone-role-policy.md).
+- For information on how to delete a role/policy, see [Delete a role/policy](cloudknox-howto-delete-role-policy.md).
+- For information on how to modify a role/policy, see [Modify a role/policy](cloudknox-howto-modify-role-policy.md).
+- For information on how to attach and detach permissions for AWS identities, see [Attach and detach policies for AWS identities](cloudknox-howto-attach-detach-permissions.md).
+- For information on how to revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](cloudknox-howto-revoke-task-readonly-status.md).
+- For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](cloudknox-howto-create-approve-privilege-request.md).
+- To view information about roles/policies, see [View information about roles/policies](cloudknox-howto-view-role-policy.md).
active-directory Cloudknox Integration Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-integration-api.md
+
+ Title: Set and view configuration settings in CloudKnox Permissions Management
+description: How to view the CloudKnox Permissions Management API integration settings and create service accounts and roles.
+++++++ Last updated : 02/23/2022+++
+# Set and view configuration settings
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to view configuration settings, create and delete a service account, and create a role in CloudKnox Permissions Management (CloudKnox).
+
+## View configuration settings
+
+The **Integrations** dashboard displays the authorization systems available to you.
+
+1. To display the **Integrations** dashboard, select **User** (your initials) in the upper right of the screen, and then select **Integrations.**
+
+ The **Integrations** dashboard displays a tile for each available authorization system.
+
+1. Select an authorization system tile to view the following integration information:
+
+ 1. To find out more about the CloudKnox API, select **CloudKnox API**, and then select documentation.
+
+ 1. To view information about service accounts, select **Integration**:
+ - **Email**: Lists the email address of the user who created the integration.
+ - **Created By**: Lists the first and last name of the user who created the integration.
+ - **Created On**: Lists the date and time the integration was created.
+ - **Recent Activity**: Lists the date and time the integration was last used, or notes if the integration was never used.
+ - **Service Account ID**: Lists the service account ID.
+ - **Access Key**: Lists the access key code.
+
+ 1. To view settings information, select **Settings**:
+    - **Roles can create service account**: Lists the roles that are allowed to create service accounts.
+ - **Access Key Rotation Policy**: Lists notifications and actions you can set.
+ - **Access Key Usage Policy**: Lists notifications and actions you can set.
+
+## Create a service account
+
+1. On the **Integrations** dashboard, select **User**, and then select **Integrations.**
+2. Select **Create Service Account**. The following information is pre-populated on the page:
+ - **API Endpoint**
+ - **Service Account ID**
+ - **Access Key**
+ - **Secret Key**
+
+3. To copy the codes, select the **Duplicate** icon next to the respective information.
+
+ > [!NOTE]
+   > The codes are time-sensitive and will regenerate after the box is closed.
+
+4. To regenerate the codes, at the bottom of the column, select **Regenerate**.
+
+## Delete a service account
+
+1. On the **Integrations** dashboard, select **User**, and then select **Integrations.**
+
+1. On the right of the email address, select **Delete Service Account**.
+
+ On the **Validate OTP To Delete [Service Name] Integration** box, a message displays asking you to check your email for a code sent to the email address on file.
+
+ If you don't receive the code, select **Resend OTP**.
+
+1. In the **Enter OTP** box, enter the code from the email.
+
+1. Select **Verify**.
+
+## Create a role
+
+1. On the **Integrations** dashboard, select **User**, and then select **Settings**.
+2. Under **Roles can create service account**, select the role you want:
+ - **Super Admin**
+ - **Viewer**
+ - **Controller**
+
+3. In the **Access Key Rotation Policy** column, select options for the following:
+
+ - **How often should the users rotate their access keys?**: Select **30 days**, **60 days**, **90 days**, or **Never**.
+ - **Notification**: Enter a whole number in the blank space within **Notify "X" days before the selected period**, or select **Don't Notify**.
+ - **Action (after the key rotation period ends)**: Select **Disable Action Key** or **No Action**.
+
+4. In the **Access Key Usage Policy** column, select options for the following:
+
+ - **How often should the users go without using their access keys?**: Select **30 days**, **60 days**, **90 days**, or **Never**.
+ - **Notification**: Enter a whole number in the blank space within **Notify "X" days before the selected period**, or select **Don't Notify**.
+ - **Action (after the key rotation period ends)**: Select **Disable Action Key** or **No Action**.
+
+5. Select **Save**.
+
+
active-directory Cloudknox Multi Cloud Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-multi-cloud-glossary.md
+
+ Title: CloudKnox Permissions Management - The CloudKnox glossary
+description: CloudKnox Permissions Management glossary
+++++++ Last updated : 02/23/2022+++
+# The CloudKnox glossary
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This glossary provides a list of commonly used cloud terms in CloudKnox Permissions Management (CloudKnox) to help users navigate both cloud-specific and cloud-generic terminology.
+
+## Commonly used acronyms and terms
+
+| Term | Definition |
+|--|--|
+| ACL | Access control list. A list of files or resources that contain information about which users or groups have permission to access those resources or modify those files. |
+| ARN | Amazon Resource Name. A unique identifier used for AWS resources. |
+| ASIM | Azure Sentinel Information Model |
+| Cloud security | A form of cybersecurity that protects data stored online on cloud computing platforms from theft, leakage, and deletion. Includes firewalls, penetration testing, obfuscation, tokenization, virtual private networks (VPN), and avoiding public internet connections. |
+| CASB | Cloud Access Security Broker. Products and services that address security gaps in an organization's use of cloud services. Designed to protect and control access to data that's stored in someone else's systems. Deliver differentiated, cloud-specific capabilities that may not be available as features in traditional security products. They provide a central location for policy and governance concurrently across multiple cloud services. They also provide granular visibility into and control over user activities and sensitive data from both inside and outside the enterprise perimeter, including cloud-to-cloud access. |
+| Cloud storage | A service model in which data is maintained, managed, and backed up remotely. Available to users over a network. |
+| CIAM | Cloud Infrastructure Access Management |
+| CIEM | Cloud Infrastructure Entitlement Management. The next generation of solutions for enforcing least privilege in the cloud. It addresses cloud-native security challenges of managing identity access management in cloud environments. |
+| CIS | Cloud infrastructure security |
+| CWP | Cloud Workload Protection. A workload-centric security solution that targets the unique protection requirements of workloads in modern enterprise environments. |
+| CNAPP | Cloud-Native Application Protection. The convergence of cloud security posture management (CSPM), cloud workload protection (CWP), cloud infrastructure entitlement management (CIEM), and cloud applications security broker (CASB). An integrated security approach that covers the entire lifecycle of cloud-native applications. |
+| CSPM | Cloud Security Posture Management. Addresses risks of compliance violations and misconfigurations in enterprise cloud environments. Also focuses on the resource level to identify deviations from best practice security settings for cloud governance and compliance. |
+| CWPP | Cloud Workload Protection Platform |
+| DRI | Data risk index. A comprehensive, integrated representation of data risk. |
+| Data risk management | The process an organization uses when acquiring, storing, transforming, and using its data, from creation to retirement, to eliminate data risk. |
+| Delete task | A high-risk task that allows users to permanently delete a resource. |
+| Entitlement | An abstract attribute that represents different forms of user permissions in a range of infrastructure systems and business applications.|
+| Entitlement management | Technology that grants, resolves, enforces, revokes, and administers fine-grained access entitlements (that is, authorizations, privileges, access rights, permissions and rules). Its purpose is to execute IT access policies to structured/unstructured data, devices, and services. It can be delivered by different technologies, and is often different across platforms, applications, network components, and devices. |
+| High-risk task | A task in which a user can cause data leakage, service disruption, or service degradation. |
+| Hybrid cloud | Sometimes called a cloud hybrid. A computing environment that combines an on-premises data center (a private cloud) with a public cloud. It allows data and applications to be shared between them. |
+| Hybrid cloud storage | A private or public cloud used to store an organization's data. |
+| ICM | Incident Case Management |
+| IDS | Intrusion Detection Service |
+| Identity analytics | Includes basic monitoring and remediation, dormant and orphan account detection and removal, and privileged account discovery. |
+| Identity lifecycle management | Maintain digital identities, their relationships with the organization, and their attributes during the entire process from creation to eventual archiving, using one or more identity life cycle patterns. |
+| IGA | Identity governance and administration. Technology solutions that conduct identity management and access governance operations. IGA includes the tools, technologies, reports, and compliance activities required for identity lifecycle management. It includes every operation from account creation and termination to user provisioning, access certification, and enterprise password management. It looks at automated workflow and data from authoritative sources capabilities, self-service user provisioning, IT governance, and password management. |
+| ITSM | Information Technology Service Management. Tools that enable IT operations organizations (infrastructure and operations managers) to better support the production environment. Facilitate the tasks and workflows associated with the management and delivery of quality IT services. |
+| JIT | Just in Time access can be seen as a way to enforce the principle of least privilege to ensure users and non-human identities are given the minimum level of privileges. It also ensures that privileged activities are conducted in accordance with an organization's Identity Access Management (IAM), IT Service Management (ITSM), and Privileged Access Management (PAM) policies, with its entitlements and workflows. JIT access strategy enables organizations to maintain a full audit trail of privileged activities so they can easily identify who or what gained access to which systems, what they did at what time, and for how long. |
+| Least privilege | Ensures that users only gain access to the specific tools they need to complete a task. |
+| Multi-tenant | A single instance of the software and its supporting infrastructure serves multiple customers. Each customer shares the software application and also shares a single database. |
+| OIDC | OpenID Connect. An authentication protocol that verifies user identity when a user is trying to access a protected HTTPS endpoint. OIDC is an evolutionary development of ideas implemented earlier in OAuth. |
+| PAM | Privileged access management. Tools that offer one or more of these features: discover, manage, and govern privileged accounts on multiple systems and applications; control access to privileged accounts, including shared and emergency access; randomize, manage, and vault credentials (password, keys, etc.) for administrative, service, and application accounts; single sign-on (SSO) for privileged access to prevent credentials from being revealed; control, filter, and orchestrate privileged commands, actions, and tasks; manage and broker credentials to applications, services, and devices to avoid exposure; and monitor, record, audit, and analyze privileged access, sessions, and actions. |
+| PASM | Privileged Account and Session Management. Privileged accounts are protected by vaulting their credentials. Access to those accounts is then brokered for human users, services, and applications. Privileged session management (PSM) functions establish sessions with possible credential injection and full session recording. Passwords and other credentials for privileged accounts are actively managed and changed at definable intervals or upon the occurrence of specific events. PASM solutions may also provide application-to-application password management (AAPM) and zero-install remote privileged access features for IT staff and third parties that don't require a VPN. |
+| PEDM | Privilege Elevation and Delegation Management. Specific privileges are granted on the managed system by host-based agents to logged-in users. PEDM tools provide host-based command control (filtering); application allow, deny, and isolate controls; and/or privilege elevation. The latter is in the form of allowing particular commands to be run with a higher level of privileges. PEDM tools execute on the actual operating system at the kernel or process level. Command control through protocol filtering is explicitly excluded from this definition because the point of control is less reliable. PEDM tools may also provide file integrity monitoring features. |
+| Permission | Rights and privileges. Details given by users or network administrators that define access rights to files on a network. Access controls attached to a resource dictating which identities can access it and how. Privileges are attached to identities and are the ability to perform certain actions. An identity having the ability to perform an action on a resource. |
+| POD | Permission on Demand. A type of JIT access that allows the temporary elevation of permissions, enabling identities to access resources on a by-request, timed basis. |
+| Permissions creep index (PCI) | A number from 0 to 100 that represents the incurred risk of users with access to high-risk privileges. PCI is a function of users who have access to high-risk privileges but aren't actively using them. |
+| Policy and role management | Maintain rules that govern automatic assignment and removal of access rights. Provides visibility of access rights for selection in access requests, approval processes, dependencies, and incompatibilities between access rights, and more. Roles are a common vehicle for policy management. |
+| Privilege | The authority to make changes to a network or computer. Both people and accounts can have privileges, and both can have different levels of privilege. |
+| Privileged account | A login credential to a server, firewall, or other administrative account. Often referred to as admin accounts. Comprised of the actual username and password; these two things together make up the account. A privileged account is allowed to do more things than a normal account. |
+| Public Cloud | Computing services offered by third-party providers over the public Internet, making them available to anyone who wants to use or purchase them. They may be free or sold on-demand, allowing customers to pay only per usage for the CPU cycles, storage, or bandwidth they consume. |
+| Resource | Any entity that uses compute capabilities and can be accessed by users and services to perform actions. |
+| Role | An IAM identity that has specific permissions. Instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. A role doesn't have standard long-term credentials such as a password or access keys associated with it. |
+| SCIM | System for Cross-domain Identity Management |
+| SCI-M | Security Compliance Identity and Management |
+| SIEM | Security Information and Event Management. Technology that supports threat detection, compliance and security incident management through the collection and analysis (both near real time and historical) of security events, as well as a wide variety of other event and contextual data sources. The core capabilities are a broad scope of log event collection and management, the ability to analyze log events and other data across disparate sources, and operational capabilities (such as incident management, dashboards, and reporting). |
+| SOAR | Security orchestration, automation and response (SOAR). Technologies that enable organizations to take inputs from various sources (mostly from security information and event management [SIEM] systems) and apply workflows aligned to processes and procedures. These workflows can be orchestrated via integrations with other technologies and automated to achieve the desired outcome and greater visibility. Other capabilities include case and incident management features; the ability to manage threat intelligence, dashboards and reporting; and analytics that can be applied across various functions. SOAR tools significantly enhance security operations activities like threat detection and response by providing machine-powered assistance to human analysts to improve the efficiency and consistency of people and processes. |
+| Super user / Super identity | A powerful account used by IT system administrators that can be used to make configurations to a system or application, add or remove users, or delete data. |
+| Tenant | A dedicated instance of the services and organization data stored within a specific default location. |
+| UUID | Universally unique identifier. A 128-bit label used for information in computer systems. The term globally unique identifier (GUID) is also used.|
+| Zero trust security | The three foundational principles: explicit verification, breach assumption, and least privileged access.|
+| ZTNA | Zero trust network access. A product or service that creates an identity- and context-based, logical access boundary around an application or set of applications. The applications are hidden from discovery, and access is restricted via a trust broker to a set of named entities. The broker verifies the identity, context and policy adherence of the specified participants before allowing access and prohibits lateral movement elsewhere in the network. It removes application assets from public visibility and significantly reduces the surface area for attack.|
+
+## Next steps
+
+- For an overview of CloudKnox, see [What's CloudKnox Permissions Management?](cloudknox-overview.md).
active-directory Cloudknox Onboard Add Account After Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-add-account-after-onboarding.md
+
+ Title: Add an account/subscription/project to Microsoft CloudKnox Permissions Management after onboarding is complete
+description: How to add an account/subscription/project to Microsoft CloudKnox Permissions Management after onboarding is complete.
+++++++ Last updated : 02/23/2022+++
+# Add an account/subscription/project after onboarding is complete
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to add an Amazon Web Services (AWS) account, Microsoft Azure subscription, or Google Cloud Platform (GCP) project in Microsoft CloudKnox Permissions Management (CloudKnox) after you've completed the onboarding process.
+
+## Add an AWS account after onboarding is complete
+
+1. In the CloudKnox home page, select **Settings** (the gear icon), and then select the **Data collectors** tab.
+1. On the **Data collectors** dashboard, select **AWS**.
+1. Select the ellipses **(...)** at the end of the row, and then select **Edit Configuration**.
+
+ The **M-CIEM Onboarding - Summary** page displays.
+
+1. Go to **AWS Account IDs**, and then select **Edit** (the pencil icon).
+
+ The **M-CIEM On Boarding - AWS Member Account Details** page displays.
+
+1. Go to **Enter Your AWS Account IDs**, and then select **Add** (the plus **+** sign).
+1. Copy your account ID from AWS and paste it into the **Enter Account ID** box. (If you need to look up the account ID from the command line, see the sketch after this procedure.)
+
+ The AWS account ID is automatically added to the script.
+
+ If you want to add more account IDs, repeat steps 5 and 6 to add up to a total of 10 account IDs.
+
+1. Copy the script.
+1. Go to AWS and start the Cloud Shell.
+1. Create a new script for the new account and press the **Enter** key.
+1. Paste the script you copied.
+1. Locate the account line, delete the original account ID (the one that was previously added), and then run the script.
+1. Return to CloudKnox. The account ID you added appears in the list of account IDs displayed on the **M-CIEM Onboarding - Summary** page.
+1. Select **Verify now & save**.
+
+ When your changes are saved, the following message displays: **Successfully updated configuration.**
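+
+Step 6 of the preceding procedure needs the AWS account ID. As a minimal sketch, assuming the AWS CLI is installed and configured with credentials for the member account (this isn't part of the CloudKnox-generated script), you can look the ID up from the command line:
+
+```bash
+# Print the 12-digit account ID for the currently configured AWS credentials.
+aws sts get-caller-identity --query Account --output text
+```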
++
+## Add an Azure subscription after onboarding is complete
+
+1. In the CloudKnox home page, select **Settings** (the gear icon), and then select the **Data collectors** tab.
+1. On the **Data collectors** dashboard, select **Azure**.
+1. Select the ellipses **(...)** at the end of the row, and then select **Edit Configuration**.
+
+ The **M-CIEM Onboarding - Summary** page displays.
+
+1. Go to **Azure subscription IDs**, and then select **Edit** (the pencil icon).
+1. Go to **Enter your Azure Subscription IDs**, and then select **Add subscription** (the plus **+** sign).
+1. Copy your subscription ID from Azure and paste it into the subscription ID box. (To list your subscription IDs from the command line, see the sketch after this procedure.)
+
+ The subscription ID is automatically added to the subscriptions line in the script.
+
+    If you want to add more subscription IDs, repeat steps 5 and 6 to add up to a total of 10 subscriptions.
+
+1. Copy the script.
+1. Go to Azure and start the Cloud Shell.
+1. Create a new script for the new subscription and press **Enter**.
+1. Paste the script you copied.
+1. Locate the subscription line and delete the original subscription ID (the one that was previously added), and then run the script.
+1. Return to CloudKnox. The subscription ID you added appears in the list of subscription IDs displayed on the **M-CIEM Onboarding - Summary** page.
+1. Select **Verify now & save**.
+
+ When your changes are saved, the following message displays: **Successfully updated configuration.**
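+
+If you need to look up the subscription IDs referenced in step 6 of the preceding procedure, a minimal sketch using the Azure CLI (assuming you're already signed in to the correct tenant) is:
+
+```azurecli
+# List the name and ID of every subscription visible to the signed-in account.
+az account list --query "[].{Name:name, SubscriptionId:id}" --output table
+```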
+
+## Add a GCP project after onboarding is complete
+
+1. In the CloudKnox home page, select **Settings** (the gear icon), and then select the **Data collectors** tab.
+1. On the **Data collectors** dashboard, select **GCP**.
+1. Select the ellipses **(...)** at the end of the row, and then select **Edit Configuration**.
+
+ The **M-CIEM Onboarding - Summary** page displays.
+
+1. Go to **GCP Project IDs**, and then select **Edit** (the pencil icon).
+1. Go to **Enter your GCP Project IDs**, and then select **Add Project ID** (the plus **+** sign).
+1. Copy your project ID from GCP and paste it into the **Project ID** box. (To list your project IDs from the command line, see the sketch after this procedure.)
+
+ The project ID is automatically added to the **Project ID** line in the script.
+
+    If you want to add more project IDs, repeat steps 5 and 6 to add up to a total of 10 project IDs.
+
+1. Copy the script.
+1. Go to GCP and start the Cloud Shell.
+1. Create a new script for the new project ID and press **Enter**.
+1. Paste the script you copied.
+1. Locate the project ID line and delete the original project ID (the one that was previously added), and then run the script.
+1. Return to CloudKnox. The project ID you added appears in the list of project IDs displayed on the **M-CIEM Onboarding - Summary** page.
+1. Select **Verify now & save**.
+
+ When your changes are saved, the following message displays: **Successfully updated configuration.**
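+
+To look up the project IDs referenced in step 6 of the preceding procedure, a minimal sketch using the gcloud CLI (assuming you're already authenticated) is:
+
+```bash
+# List the IDs of the GCP projects your account can access.
+gcloud projects list --format="value(projectId)"
+```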
+++
+## Next steps
+
+- For information on how to onboard an Amazon Web Services (AWS) account, see [Onboard an AWS account](cloudknox-onboard-aws.md).
+- For information on how to onboard a Microsoft Azure subscription, see [Onboard a Microsoft Azure subscription](cloudknox-onboard-azure.md).
+- For information on how to onboard a Google Cloud Platform (GCP) project, see [Onboard a GCP project](cloudknox-onboard-gcp.md).
+- For information on how to enable or disable the controller after onboarding is complete, see [Enable or disable the controller](cloudknox-onboard-enable-controller-after-onboarding.md).
active-directory Cloudknox Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-aws.md
+
+ Title: Onboard an Amazon Web Services (AWS) account on CloudKnox Permissions Management
+description: How to onboard an Amazon Web Services (AWS) account on CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Onboard an Amazon Web Services (AWS) account
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+> [!Note]
+> Sign up for the CloudKnox Permissions Management public preview by filling [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR9AT7gfYe2NPtdIbYxQQX45UNEpIVjY4WUJNSUhMVjcyNzdYOFY2NFhISi4u).
+
+This article describes how to onboard an Amazon Web Services (AWS) account on CloudKnox Permissions Management (CloudKnox).
+
+> [!NOTE]
+> A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable CloudKnox on your Azure Active Directory tenant](cloudknox-onboard-enable-tenant.md).
+
+## Onboard an AWS account
+
+1. If the **Data Collectors** dashboard isn't displayed when CloudKnox launches:
+
+ - In the CloudKnox home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
+
+1. On the **Data Collectors** dashboard, select **AWS**, and then select **Create Configuration**.
+
+### 1. Create an Azure AD OIDC App.
+
+1. On the **CloudKnox Onboarding - Azure AD OIDC App Creation** page, enter the **OIDC Azure app name**.
+
+ This app is used to set up an OpenID Connect (OIDC) connection to your AWS account. OIDC is an interoperable authentication protocol based on the OAuth 2.0 family of specifications. The scripts generated on this page create the app of this specified name in your Azure AD tenant with the right configuration.
+
+1. To create the app registration, copy the script and run it in your Azure command-line app. (A rough sketch of this kind of command appears at the end of this section.)
+
+ > [!NOTE]
+ > 1. To confirm that the app was created, open **App registrations** in Azure and, on the **All applications** tab, locate your app.
+ > 1. Select the app name to open the **Expose an API** page. The **Application ID URI** displayed in the **Overview** page is the *audience value* used while making an OIDC connection with your AWS account.
+
+1. Return to CloudKnox, and in the **CloudKnox Onboarding - Azure AD OIDC App Creation** page, select **Next**.
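+
+The script generated on this page contains the full configuration for the OIDC app, so use it as-is. Purely for orientation, the core of such a script is an app registration; the display name below is a placeholder, not the name CloudKnox generates:
+
+```azurecli
+# Illustration only: register an Azure AD application with a placeholder name.
+# The CloudKnox-generated script also applies the OIDC-specific configuration.
+az ad app create --display-name "cloudknox-aws-oidc-app"
+```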
+
+### 2. Set up an AWS OIDC account.
+
+1. In the **CloudKnox Onboarding - AWS OIDC Account Setup** page, enter the **AWS OIDC account ID** where the OIDC provider is created. You can change the role name to your requirements.
+1. Open another browser window and sign in to the AWS account where you want to create the OIDC provider.
+1. Select **Launch Template**. This link takes you to the **AWS CloudFormation create stack** page.
+1. Scroll to the bottom of the page, and in the **Capabilities** box, select **I acknowledge that AWS CloudFormation might create IAM resources with custom names**. Then select **Create Stack.**
+
+ This AWS CloudFormation stack creates an OIDC Identity Provider (IdP) representing Azure AD STS and an AWS IAM role with a trust policy that allows external identities from Azure AD to assume it via the OIDC IdP. These entities are listed on the **Resources** page.
+
+1. Return to CloudKnox, and in the **CloudKnox Onboarding - AWS OIDC Account Setup** page, select **Next**.
+
+### 3. Set up an AWS master account. (Optional)
+
+1. If your organization has Service Control Policies (SCPs) that govern some or all of the member accounts, set up the master account connection in the **CloudKnox Onboarding - AWS Master Account Details** page.
+
+ Setting up the master account connection allows CloudKnox to auto-detect and onboard any AWS member accounts that have the correct CloudKnox role.
+
+ - In the **CloudKnox Onboarding - AWS Master Account Details** page, enter the **Master Account ID** and **Master Account Role**.
+
+1. Open another browser window and sign in to the AWS console for your master account.
+
+1. Return to CloudKnox, and in the **CloudKnox Onboarding - AWS Master Account Details** page, select **Launch Template**.
+
+ The **AWS CloudFormation create stack** page opens, displaying the template.
+
+1. Review the information in the template, make changes, if necessary, then scroll to the bottom of the page.
+
+1. In the **Capabilities** box, select **I acknowledge that AWS CloudFormation might create IAM resources with custom names**. Then select **Create stack**.
+
+ This AWS CloudFormation stack creates a role in the master account with the necessary permissions (policies) to collect SCPs and list all the accounts in your organization.
+
+ A trust policy is set on this role to allow the OIDC role created in your AWS OIDC account to access it. These entities are listed in the **Resources** tab of your CloudFormation stack.
+
+1. Return to CloudKnox, and in **CloudKnox Onboarding - AWS Master Account Details**, select **Next**.
+
+### 4. Set up an AWS Central logging account. (Optional but recommended)
+
+1. If your organization has a central logging account where logs from some or all of your AWS account are stored, in the **CloudKnox Onboarding - AWS Central Logging Account Details** page, set up the logging account connection.
+
+ In the **CloudKnox Onboarding - AWS Central Logging Account Details** page, enter the **Logging Account ID** and **Logging Account Role**.
+
+1. In another browser window, sign in to the AWS console for the AWS account you use for central logging.
+
+1. Return to CloudKnox, and in the **CloudKnox Onboarding - AWS Central Logging Account Details** page, select **Launch Template**.
+
+ The **AWS CloudFormation create stack** page opens, displaying the template.
+
+1. Review the information in the template, make changes, if necessary, then scroll to the bottom of the page.
+
+1. In the **Capabilities** box, select **I acknowledge that AWS CloudFormation might create IAM resources with custom names**, and then select **Create stack**.
+
+ This AWS CloudFormation stack creates a role in the logging account with the necessary permissions (policies) to read S3 buckets used for central logging. A trust policy is set on this role to allow the OIDC role created in your AWS OIDC account to access it. These entities are listed in the **Resources** tab of your CloudFormation stack.
+
+1. Return to CloudKnox, and in the **CloudKnox Onboarding - AWS Central Logging Account Details** page, select **Next**.
+
+### 5. Set up an AWS member account.
+
+1. In the **CloudKnox Onboarding - AWS Member Account Details** page, enter the **Member Account Role** and the **Member Account IDs**.
+
+    You can enter up to 10 account IDs. Select the plus icon next to the text box to add more account IDs.
+
+ > [!NOTE]
+ > Perform the next 5 steps for each account ID you add.
+
+1. Open another browser window and sign in to the AWS console for the member account.
+
+1. Return to the **CloudKnox Onboarding - AWS Member Account Details** page and select **Launch Template**. (If you prefer the AWS CLI, an equivalent command is sketched at the end of this section.)
+
+ The **AWS CloudFormation create stack** page opens, displaying the template.
+
+1. In the **CloudTrailBucketName** box, enter a name.
+
+ You can copy and paste the **CloudTrailBucketName** name from the **Trails** page in AWS.
+
+ > [!NOTE]
+ > A *cloud bucket* collects all the activity in a single account that CloudKnox monitors. Enter the name of a cloud bucket here to provide CloudKnox with the access required to collect activity data.
+
+1. From the **Enable Controller** dropdown, select:
+
+ - **True**, if you want the controller to provide CloudKnox with read and write access so that any remediation you want to do from the CloudKnox platform can be done automatically.
+ - **False**, if you want the controller to provide CloudKnox with read-only access.
+
+1. Scroll to the bottom of the page, and in the **Capabilities** box, select **I acknowledge that AWS CloudFormation might create IAM resources with custom names**. Then select **Create stack**.
+
+ This AWS CloudFormation stack creates a collection role in the member account with necessary permissions (policies) for data collection.
+
+ A trust policy is set on this role to allow the OIDC role created in your AWS OIDC account to access it. These entities are listed in the **Resources** tab of your CloudFormation stack.
+
+1. Return to CloudKnox, and in the **CloudKnox Onboarding - AWS Member Account Details** page, select **Next**.
+
+ This step completes the sequence of required connections from Azure AD STS to the OIDC connection account and the AWS member account.
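+
+If you prefer the AWS CLI to the console steps above, the following is a hedged sketch of creating the same stack. The stack name, template URL, and parameter values are placeholders; take the template URL and exact parameter names from the **Launch Template** link. Acknowledging custom-named IAM resources in the console corresponds to the `CAPABILITY_NAMED_IAM` capability on the CLI.
+
+```bash
+# Sketch only: create the member-account stack from the AWS CLI (placeholder values).
+aws cloudformation create-stack \
+  --stack-name cloudknox-member-account \
+  --template-url "https://s3.amazonaws.com/<template-bucket>/<template>.json" \
+  --parameters ParameterKey=CloudTrailBucketName,ParameterValue=<your-trail-bucket> \
+               ParameterKey=EnableController,ParameterValue=true \
+  --capabilities CAPABILITY_NAMED_IAM
+```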
+
+### 6. Review and save.
+
+1. In the **CloudKnox Onboarding - Summary** page, review the information you've added, and then select **Verify Now & Save**.
+
+ The following message appears: **Successfully created configuration.**
+
+ On the **Data Collectors** dashboard, the **Recently Uploaded On** column displays **Collecting**. The **Recently Transformed On** column displays **Processing.**
+
+ You have now completed onboarding AWS, and CloudKnox has started collecting and processing your data.
+
+### 7. View the data.
+
+1. To view the data, select the **Authorization Systems** tab.
+
+ The **Status** column in the table displays **Collecting Data.**
+
+ The data collection process may take some time, depending on the size of the account and how much data is available for collection.
++
+## Next steps
+
+- For information on how to onboard a Microsoft Azure subscription, see [Onboard a Microsoft Azure subscription](cloudknox-onboard-azure.md).
+- For information on how to onboard a Google Cloud Platform (GCP) project, see [Onboard a Google Cloud Platform (GCP) project](cloudknox-onboard-gcp.md).
+- For information on how to enable or disable the controller after onboarding is complete, see [Enable or disable the controller](cloudknox-onboard-enable-controller-after-onboarding.md).
+- For information on how to add an account/subscription/project after onboarding is complete, see [Add an account/subscription/project after onboarding is complete](cloudknox-onboard-add-account-after-onboarding.md).
active-directory Cloudknox Onboard Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-azure.md
+
+ Title: Onboard a Microsoft Azure subscription in CloudKnox Permissions Management
+description: How to onboard a Microsoft Azure subscription on CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Onboard a Microsoft Azure subscription
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+> [!Note]
+> Sign up for the CloudKnox Permissions Management public preview by filling [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR9AT7gfYe2NPtdIbYxQQX45UNEpIVjY4WUJNSUhMVjcyNzdYOFY2NFhISi4u).
+
+This article describes how to onboard a Microsoft Azure subscription or subscriptions on CloudKnox Permissions Management (CloudKnox). Onboarding a subscription creates a new authorization system to represent the Azure subscription in CloudKnox.
+
+> [!NOTE]
+> A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable CloudKnox on your Azure Active Directory tenant](cloudknox-onboard-enable-tenant.md).
+
+## Prerequisites
+
+To add CloudKnox to your Azure AD tenant:
+- You must have an Azure AD user account and the Azure command-line interface (Azure CLI) installed on your system, or an Azure subscription. If you don't already have one, [create a free account](https://azure.microsoft.com/free/).
+- You must have **Microsoft.Authorization/roleAssignments/write** permission at the subscription or management group scope to perform these tasks. If you don't have this permission, you can ask someone who has this permission to perform these tasks for you. (A quick way to check your role assignments from the Azure CLI is sketched below.)
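+
+As a quick check from the Azure CLI (a sketch only; the user and subscription ID below are placeholders), list the roles assigned to you at the scope you plan to onboard. Built-in roles such as **Owner** and **User Access Administrator** include **Microsoft.Authorization/roleAssignments/write**:
+
+```azurecli
+# List the role names assigned to a user at a subscription scope.
+az role assignment list \
+  --assignee "user@contoso.com" \
+  --scope "/subscriptions/00000000-0000-0000-0000-000000000000" \
+  --include-inherited \
+  --query "[].roleDefinitionName" \
+  --output tsv
+```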
+
+## Onboard an Azure subscription
+
+1. If the **Data Collectors** dashboard isn't displayed when CloudKnox launches:
+
+ - In the CloudKnox home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
+
+1. On the **Data Collectors** dashboard, select **Azure**, and then select **Create Configuration**.
+
+### 1. Add Azure subscription details
+
+1. On the **CloudKnox Onboarding - Azure Subscription Details** page, enter the **Subscription IDs** that you want to onboard.
+
+ > [!NOTE]
+ > To locate the Azure subscription IDs, open the **Subscriptions** page in Azure.
+    > You can enter up to 10 subscription IDs. Select the plus sign **(+)** icon next to the text box to enter more subscriptions.
+
+1. From the **Scope** dropdown, select **Subscription** or **Management Group**. The script box displays the role assignment script.
+
+ > [!NOTE]
+ > Select **Subscription** if you want to assign permissions separately for each individual subscription. The generated script has to be executed once per subscription.
+ > Select **Management Group** if all of your subscriptions are under one management group. The generated script must be executed once for the management group.
+
+1. To give this role assignment to the service principal, copy the script to a file on your system where Azure CLI is installed and execute it. (The general form of such a script is sketched at the end of this section.)
+
+ You can execute the script once for each subscription, or once for all the subscriptions in the management group.
+
+1. From the **Enable Controller** dropdown, select:
+
+ - **True**, if you want the controller to provide CloudKnox with read and write access so that any remediation you want to do from the CloudKnox platform can be done automatically.
+ - **False**, if you want the controller to provide CloudKnox with read-only access.
+
+1. Return to the **CloudKnox Onboarding - Azure Subscription Details** page and select **Next**.
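+
+The role assignment script in step 3 is generated by CloudKnox and should be run as-is. For orientation only, the general shape of such a script is one `az role assignment create` call per subscription or management group; the assignee, role, and scope below are placeholders, not the values CloudKnox generates:
+
+```azurecli
+# Sketch only: assign a role to a service principal at a subscription scope.
+az role assignment create \
+  --assignee "<cloudknox-service-principal-app-id>" \
+  --role "Reader" \
+  --scope "/subscriptions/<subscription-id>"
+```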
+
+### 2. Review and save.
+
+- In the **CloudKnox Onboarding - Summary** page, review the information you've added, and then select **Verify Now & Save**.
+
+ The following message appears: **Successfully Created Configuration.**
+
+ On the **Data Collectors** tab, the **Recently Uploaded On** column displays **Collecting**. The **Recently Transformed On** column displays **Processing.**
+
+ You have now completed onboarding Azure, and CloudKnox has started collecting and processing your data.
+
+### 3. View the data.
+
+- To view the data, select the **Authorization Systems** tab.
+
+ The **Status** column in the table displays **Collecting Data.**
+
+ The data collection process will take some time, depending on the size of the account and how much data is available for collection.
++
+## Next steps
+
+- For information on how to onboard an Amazon Web Services (AWS) account, see [Onboard an Amazon Web Services (AWS) account](cloudknox-onboard-aws.md).
+- For information on how to onboard a Google Cloud Platform (GCP) project, see [Onboard a Google Cloud Platform (GCP) project](cloudknox-onboard-gcp.md).
+- For information on how to enable or disable the controller after onboarding is complete, see [Enable or disable the controller](cloudknox-onboard-enable-controller-after-onboarding.md).
+- For information on how to add an account/subscription/project after onboarding is complete, see [Add an account/subscription/project after onboarding is complete](cloudknox-onboard-add-account-after-onboarding.md).
+- For an overview on CloudKnox, see [What's CloudKnox Permissions Management?](cloudknox-overview.md).
+- For information on how to start viewing information about your authorization system in CloudKnox, see [View key statistics and data about your authorization system](cloudknox-ui-dashboard.md).
active-directory Cloudknox Onboard Enable Controller After Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-enable-controller-after-onboarding.md
+
+ Title: Enable or disable the controller in Microsoft CloudKnox Permissions Management after onboarding is complete
+description: How to enable or disable the controller in Microsoft CloudKnox Permissions Management after onboarding is complete.
+++++++ Last updated : 02/23/2022+++
+# Enable or disable the controller after onboarding is complete
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to enable or disable the controller in Microsoft Azure and Google Cloud Platform (GCP) after onboarding is complete.
+
+This article also describes how to enable the controller in Amazon Web Services (AWS) if you disabled it during onboarding. You can only enable the controller in AWS at this time; you can't disable it.
+
+## Enable the controller in AWS
+
+> [!NOTE]
+> You can only enable the controller in AWS; you can't disable it at this time.
+
+1. Sign in to the AWS console of the member account in a separate browser window.
+1. Go to the CloudKnox home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
+1. On the **Data Collectors** dashboard, select **AWS**, and then select **Create Configuration**.
+1. On the **CloudKnox Onboarding - AWS Member Account Details** page, select **Launch Template**.
+
+ The **AWS CloudFormation create stack** page opens, displaying the template.
+1. In the **CloudTrailBucketName** box, enter a name.
+
+ You can copy and paste the **CloudTrailBucketName** name from the **Trails** page in AWS.
+
+ > [!NOTE]
+ > A *cloud bucket* collects all the activity in a single account that CloudKnox monitors. Enter the name of a cloud bucket here to provide CloudKnox with the access required to collect activity data.
+
+1. In the **EnableController** box, from the drop-down list, select **True** to provide CloudKnox with read and write access so that any remediation you want to do from the CloudKnox platform can be done automatically.
+
+1. Scroll to the bottom of the page, and in the **Capabilities** box, select **I acknowledge that AWS CloudFormation might create IAM resources with custom names**. Then select **Create stack**.
+
+ This AWS CloudFormation stack creates a collection role in the member account with necessary permissions (policies) for data collection. A trust policy is set on this role to allow the OIDC role created in your AWS OIDC account to access it. These entities are listed in the **Resources** tab of your CloudFormation stack.
+
+1. Return to CloudKnox, and on the **CloudKnox Onboarding - AWS Member Account Details** page, select **Next**.
+1. On the **CloudKnox Onboarding - Summary** page, review the information you've added, and then select **Verify Now & Save**.
+
+ The following message appears: **Successfully created configuration.**
+
+## Enable or disable the controller in Azure
++
+1. In Azure, open the **Access control (IAM)** page.
+1. In the **Check access** section, in the **Find** box, enter **Cloud Infrastructure Entitlement Management**.
+
+ The **Cloud Infrastructure Entitlement Management assignments** page appears, displaying the roles assigned to you.
+
+ - If you have read-only permission, the **Role** column displays **Reader**.
+    - If you have administrative permission, the **Role** column displays **User Access Administrator**.
+
+1. To add the administrative role assignment, return to the **Access control (IAM)** page, and then select **Add role assignment**.
+1. Add or remove the role assignment for Cloud Infrastructure Entitlement Management. (A CLI sketch of adding this assignment appears at the end of this section.)
+
+1. Go to the CloudKnox home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
+1. On the **Data Collectors** dashboard, select **Azure**, and then select **Create Configuration**.
+1. On the **CloudKnox Onboarding - Azure Subscription Details** page, enter the **Subscription ID**, and then select **Next**.
+1. On the **CloudKnox Onboarding - Summary** page, review the controller permissions, and then select **Verify Now & Save**.
+
+ The following message appears: **Successfully Created Configuration.**
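+
+As a hedged sketch of the role assignment step above (the service principal object ID and subscription ID are placeholders), the same assignment can be made from the Azure CLI:
+
+```azurecli
+# Sketch only: grant the Cloud Infrastructure Entitlement Management service
+# principal the User Access Administrator role at a subscription scope.
+az role assignment create \
+  --assignee "<ciem-service-principal-object-id>" \
+  --role "User Access Administrator" \
+  --scope "/subscriptions/<subscription-id>"
+```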
++
+## Enable or disable the controller in GCP
+
+1. Execute the **gcloud auth login** command. (The full command sequence for this procedure is gathered in a sketch at the end of this section.)
+1. Follow the instructions displayed on the screen to authorize access to your Google account.
+1. Execute the **sh mciem-workload-identity-pool.sh** command to create the workload identity pool, provider, and service account.
+1. Execute the **sh mciem-member-projects.sh** command to give CloudKnox permissions to access each of the member projects.
+
+ - If you want to manage permissions through CloudKnox, select **Y** to **Enable controller**.
+ - If you want to onboard your projects in read-only mode, select **N** to **Disable controller**.
+
+1. Optionally, execute the **mciem-enable-gcp-api.sh** script to enable all recommended GCP APIs.
+
+1. Go to the CloudKnox home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
+1. On the **Data Collectors** dashboard, select **GCP**, and then select **Create Configuration**.
+1. On the **CloudKnox Onboarding - Azure AD OIDC App Creation** page, select **Next**.
+1. On the **CloudKnox Onboarding - GCP OIDC Account Details & IDP Access** page, enter the **OIDC Project Number** and **OIDC Project ID**, and then select **Next**.
+1. On the **CloudKnox Onboarding - GCP Project IDs** page, enter the **Project IDs**, and then select **Next**.
+1. On the **CloudKnox Onboarding - Summary** page, review the information you've added, and then select **Verify Now & Save**.
+
+ The following message appears: **Successfully Created Configuration.**
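+
+For reference, the CLI portion of the procedure above (steps 1 through 5) gathers into the following shell sequence. This is a sketch that assumes the CloudKnox-provided scripts are in your current directory; running the optional API script with `sh` is an assumption made for symmetry with the other scripts:
+
+```bash
+# Authenticate to Google Cloud.
+gcloud auth login
+
+# Create the workload identity pool, provider, and service account.
+sh mciem-workload-identity-pool.sh
+
+# Grant CloudKnox access to each member project; answer Y or N when asked
+# whether to enable the controller.
+sh mciem-member-projects.sh
+
+# Optional: enable all recommended GCP APIs.
+sh mciem-enable-gcp-api.sh
+```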
+
+## Next steps
+
+- For information on how to onboard an Amazon Web Services (AWS) account, see [Onboard an AWS account](cloudknox-onboard-aws.md).
+- For information on how to onboard a Microsoft Azure subscription, see [Onboard a Microsoft Azure subscription](cloudknox-onboard-azure.md).
+- For information on how to onboard a Google Cloud Platform (GCP) project, see [Onboard a GCP project](cloudknox-onboard-gcp.md).
+- For information on how to add an account/subscription/project after onboarding is complete, see [Add an account/subscription/project after onboarding is complete](cloudknox-onboard-add-account-after-onboarding.md).
+
active-directory Cloudknox Onboard Enable Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-enable-tenant.md
+
+ Title: Enable CloudKnox Permissions Management in your organization
+description: How to enable CloudKnox Permissions Management in your organization.
+++++++ Last updated : 02/23/2022+++
+# Enable CloudKnox in your organization
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+> [!Note]
+> Sign up for the CloudKnox Permissions Management public preview by filling [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR9AT7gfYe2NPtdIbYxQQX45UNEpIVjY4WUJNSUhMVjcyNzdYOFY2NFhISi4u).
+
+This article describes how to enable CloudKnox Permissions Management (CloudKnox) in your organization. Once you've enabled CloudKnox, you can connect it to your Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) platforms.
+
+> [!NOTE]
+> To complete this task, you must have *global administrator* permissions as a user in that tenant. You can't enable CloudKnox as a user from another tenant who has signed in via B2B or via Azure Lighthouse.
+
+## Prerequisites
+
+To enable CloudKnox in your organization, you must:
+
+- Have an Azure AD tenant. If you don't already have one, [create a free account](https://azure.microsoft.com/free/).
+- Be eligible for or have an active assignment to the global administrator role as a user in that tenant.
+
+> [!NOTE]
+> During public preview, CloudKnox doesn't perform a license check.
+
+## Enable CloudKnox on your Azure AD tenant
+
+1. In your browser:
+ 1. Go to [Azure services](https://portal.azure.com) and use your credentials to sign in to [Azure Active Directory](https://ms.portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview).
+ 1. If you aren't already authenticated, sign in as a global administrator user.
+ 1. If needed, activate the global administrator role in your Azure AD tenant.
+ 1. In the Azure AD portal, select **Features highlights**, and then select **CloudKnox Permissions Management**.
+
+    1. If you're prompted to select a sign-in account, sign in as a global administrator for a specified tenant.
+
+ The **Welcome to CloudKnox Permissions Management** screen appears, displaying information on how to enable CloudKnox on your tenant.
+
+1. To provide access to the CloudKnox application, create a service principal.
+
+ An Azure service principal is a security identity used by user-created apps, services, and automation tools to access specific Azure resources.
+
+ > [!NOTE]
+ > To complete this step, you must have Azure CLI or Azure PowerShell on your system, or an Azure subscription where you can run Cloud Shell.
+
+ - To create a service principal that points to the CloudKnox application via Cloud Shell:
+
+ 1. Copy the script on the **Welcome** screen:
+
+        `az ad sp create --id b46c3ac5-9da6-418f-a849-0a7a10b3c6c`
+
+ 1. If you have an Azure subscription, return to the Azure AD portal and select **Cloud Shell** on the navigation bar.
+ If you don't have an Azure subscription, open a command prompt on a Windows Server.
+ 1. If you have an Azure subscription, paste the script into Cloud Shell and press **Enter**.
+
+ - For information on how to create a service principal through the Azure portal, see [Create an Azure service principal with the Azure CLI](/cli/azure/create-an-azure-service-principal-azure-cli).
+
+ - For information on the **az** command and how to sign in with the no subscriptions flag, see [az login](/cli/azure/reference-index?view=azure-cli-latest#az-login&preserve-view=true).
+
+ - For information on how to create a service principal via Azure PowerShell, see [Create an Azure service principal with Azure PowerShell](/powershell/azure/create-azure-service-principal-azureps?view=azps-7.1.0&preserve-view=true).
+
+ 1. After the script runs successfully, the service principal attributes for CloudKnox display. Confirm the attributes. (A verification sketch using the Azure CLI appears after this procedure.)
+
+ The **Cloud Infrastructure Entitlement Management** application displays in the Azure AD portal under **Enterprise applications**.
+
+1. Return to the **Welcome to CloudKnox** screen and select **Enable CloudKnox Permissions Management**.
+
+ You have now completed enabling CloudKnox on your tenant. CloudKnox launches with the **Data Collectors** dashboard.
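+
+If you prefer to run the service principal step from your own terminal rather than Cloud Shell, the following is a minimal sketch using the Azure CLI. It assumes you replace `<cloudknox-app-id>` with the application ID shown in the script on the **Welcome** screen; the script provided there remains the authoritative version.
+
+```bash
+# Minimal sketch, assuming <cloudknox-app-id> is the application ID shown on the Welcome screen.
+# Sign in first; --allow-no-subscriptions lets you sign in even if your account has no Azure subscription.
+az login --allow-no-subscriptions
+
+# Create a service principal that points to the CloudKnox application.
+az ad sp create --id <cloudknox-app-id>
+
+# Confirm the service principal attributes (display name, object ID, and so on).
+az ad sp show --id <cloudknox-app-id>
+```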
+
+## Configure data collection settings
+
+Use the **Data Collectors** dashboard in CloudKnox to configure data collection settings for your authorization system.
+
+1. If the **Data Collectors** dashboard isn't displayed when CloudKnox launches:
+
+ - In the CloudKnox home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
+
+1. Select the authorization system you want: **AWS**, **Azure**, or **GCP**.
+
+1. For information on how to onboard an AWS account, Azure subscription, or GCP project into CloudKnox, select one of the following articles and follow the instructions:
+
+ - [Onboard an AWS account](cloudknox-onboard-aws.md)
+ - [Onboard an Azure subscription](cloudknox-onboard-azure.md)
+ - [Onboard a GCP project](cloudknox-onboard-gcp.md)
+
+## Next steps
+
+- For an overview of CloudKnox, see [What's CloudKnox Permissions Management?](cloudknox-overview.md)
+- For a list of frequently asked questions (FAQs) about CloudKnox, see [FAQs](cloudknox-faqs.md).
+- For information on how to start viewing information about your authorization system in CloudKnox, see [View key statistics and data about your authorization system](cloudknox-ui-dashboard.md).
active-directory Cloudknox Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-gcp.md
+
+ Title: Onboard a Google Cloud Platform (GCP) project in CloudKnox Permissions Management
+description: How to onboard a Google Cloud Platform (GCP) project on CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Onboard a Google Cloud Platform (GCP) project
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+> [!Note]
+> Sign up for the CloudKnox Permissions Management public preview by filling [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR9AT7gfYe2NPtdIbYxQQX45UNEpIVjY4WUJNSUhMVjcyNzdYOFY2NFhISi4u).
+
+This article describes how to onboard a Google Cloud Platform (GCP) project on CloudKnox Permissions Management (CloudKnox).
+
+> [!NOTE]
+> A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable CloudKnox on your Azure Active Directory tenant](cloudknox-onboard-enable-tenant.md).
+
+## Onboard a GCP project
+
+1. If the **Data Collectors** dashboard isn't displayed when CloudKnox launches:
+
+ - In the CloudKnox home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
+
+1. On the **Data Collectors** tab, select **GCP**, and then select **Create Configuration**.
+
+### 1. Create an Azure AD OIDC app.
+
+1. On the **CloudKnox Onboarding - Azure AD OIDC App Creation** page, enter the **OIDC Azure App Name**.
+
+ This app is used to set up an OpenID Connect (OIDC) connection to your GCP project. OIDC is an interoperable authentication protocol based on the OAuth 2.0 family of specifications. The generated scripts create an app with this name in your Azure AD tenant with the correct configuration.
+
+1. To create the app registration, copy the script and run it in your command-line app.
+
+ > [!NOTE]
+ > 1. To confirm that the app was created, open **App registrations** in Azure and, on the **All applications** tab, locate your app.
+ > 1. Select the app name to open the **Expose an API** page. The **Application ID URI** displayed on the **Overview** page is the *audience value* used while making an OIDC connection with your GCP project (see the sketch after this step).
+
+1. Return to CloudKnox, and on the **CloudKnox Onboarding - Azure AD OIDC App Creation** page, select **Next**.
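+
+If you'd rather confirm the app registration and read its **Application ID URI** from a command line instead of the portal, a sketch like the following works with the Azure CLI. `<oidc-azure-app-name>` is an illustrative placeholder for the **OIDC Azure App Name** you entered above; the URI returned is the audience value mentioned in the note.
+
+```bash
+# Minimal sketch: look up the app registration created by the generated script.
+# <oidc-azure-app-name> is an illustrative placeholder for the OIDC Azure App Name you entered.
+az ad app list --display-name "<oidc-azure-app-name>" \
+    --query "[].{appId:appId, identifierUris:identifierUris}" --output table
+```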
+
+### 2. Set up a GCP OIDC project.
+
+1. In the **CloudKnox Onboarding - GCP OIDC Account Details & IDP Access** page, enter the **OIDC Project ID** and **OIDC Project Number** of the GCP project in which the OIDC provider and pool will be created. You can change the role name to suit your requirements.
+
+ > [!NOTE]
+ > You can find the **Project number** and **Project ID** of your GCP project on the GCP **Dashboard** page of your project in the **Project info** panel.
+
+1. You can change the **OIDC Workload Identity Pool Id**, **OIDC Workload Identity Pool Provider Id** and **OIDC Service Account Name** to meet your requirements.
+
+ Optionally, specify **G-Suite IDP Secret Name** and **G-Suite IDP User Email** to enable G-Suite integration.
+
+ You can either download and run the script at this point, or run it later in Google Cloud Shell, as described [later in this article](cloudknox-onboard-gcp.md#4-run-scripts-in-cloud-shell-optional-if-not-already-executed). A rough sketch of the kind of gcloud commands the generated script runs appears after this step.
+1. Select **Next**.
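+
+The script you download here performs the GCP-side setup for you and remains the authoritative version. Purely for orientation, the following is a rough sketch of the kind of gcloud commands such a script runs; every name, ID, and URI below is an illustrative placeholder, not a value taken from the generated script.
+
+```bash
+# Rough, illustrative sketch only - use the script generated by CloudKnox for actual onboarding.
+# Placeholders: <pool-id>, <provider-id>, <service-account-name>, <tenant-id>, <audience>.
+
+# Create the workload identity pool in the OIDC project.
+gcloud iam workload-identity-pools create <pool-id> \
+    --location="global" \
+    --display-name="CloudKnox OIDC pool"
+
+# Create an OIDC provider in the pool that trusts the Azure AD app.
+gcloud iam workload-identity-pools providers create-oidc <provider-id> \
+    --location="global" \
+    --workload-identity-pool=<pool-id> \
+    --issuer-uri="https://sts.windows.net/<tenant-id>/" \
+    --allowed-audiences="<audience>" \
+    --attribute-mapping="google.subject=assertion.sub"
+
+# Create the service account used for data collection.
+gcloud iam service-accounts create <service-account-name> \
+    --display-name="CloudKnox data collector"
+```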
+
+### 3. Set up GCP member projects.
+
+1. In the **CloudKnox Onboarding - GCP Project Ids** page, enter the **Project IDs**.
+
+ You can enter up to 10 GCP project IDs. Select the plus icon next to the text box to insert more project IDs.
+
+1. You can choose to download and run the script at this point, or you can do it via Google Cloud Shell, as described in the [next step](cloudknox-onboard-gcp.md#4-run-scripts-in-cloud-shell-optional-if-not-already-executed).
+
+### 4. Run scripts in Cloud Shell. (Optional if not already executed.)
+
+1. In the **CloudKnox Onboarding - GCP Project Ids** page, select **Launch SSH**.
+1. To copy all your scripts into your current directory, in **Open in Cloud Shell**, select **Trust repo**, and then select **Confirm**.
+
+ The Cloud Shell provisions the Cloud Shell machine and makes a connection to your Cloud Shell instance.
+
+ > [!NOTE]
+ > Follow the instructions in the browser as they may be different from the ones given here.
+
+ The **Welcome to CloudKnox GCP onboarding** screen appears, displaying steps you must complete to onboard your GCP project.
+
+### 5. Paste the environment variables from the CloudKnox portal.
+
+1. Return to CloudKnox and select **Copy export variables**.
+1. In the GCP Onboarding shell editor, paste the variables you copied, and then press **Enter**.
+1. Execute **gcloud auth login**.
+1. Follow the instructions displayed on the screen to authorize access to your Google account.
+1. Execute **sh mciem-workload-identity-pool.sh** to create the workload identity pool, provider, and service account.
+1. Execute **sh mciem-member-projects.sh** to give CloudKnox permissions to access each of the member projects. (The full command sequence is sketched after this procedure.)
+
+ - If you want to manage permissions through CloudKnox, select **Y** to **Enable controller**.
+
+ - If you want to onboard your projects in read-only mode, select **N** to **Disable controller**.
+
+1. Optionally, execute **mciem-enable-gcp-api.sh** to enable all recommended GCP APIs.
+
+1. Return to **CloudKnox Onboarding - GCP Project Ids**, and then select **Next**.
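+
+Taken together, the Cloud Shell portion of the onboarding above amounts to a short command sequence like the following sketch. The script names come from the steps above; the export statements stand in for the variables you copy from the CloudKnox portal with **Copy export variables**.
+
+```bash
+# Sketch of the Cloud Shell sequence described above.
+# (Paste the export variables copied from the CloudKnox portal here before running the scripts.)
+
+# Authorize gcloud to access your Google account.
+gcloud auth login
+
+# Create the workload identity pool, provider, and service account.
+sh mciem-workload-identity-pool.sh
+
+# Grant CloudKnox access to each member project.
+# Answer Y to enable the controller, or N to onboard in read-only mode.
+sh mciem-member-projects.sh
+
+# Optional: enable all recommended GCP APIs.
+sh mciem-enable-gcp-api.sh
+```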
+
+### 6. Review and save.
+
+1. In the **CloudKnox Onboarding - Summary** page, review the information you've added, and then select **Verify Now & Save**.
+
+ The following message appears: **Successfully Created Configuration.**
+
+ On the **Data Collectors** tab, the **Recently Uploaded On** column displays **Collecting**. The **Recently Transformed On** column displays **Processing.**
+
+ You have now completed onboarding GCP, and CloudKnox has started collecting and processing your data.
+
+### 7. View the data.
+
+- To view the data, select the **Authorization Systems** tab.
+
+ The **Status** column in the table displays **Collecting Data.**
+
+ The data collection process may take some time, depending on the size of the account and how much data is available for collection.
+++
+## Next steps
+
+- For information on how to onboard an Amazon Web Services (AWS) account, see [Onboard an Amazon Web Services (AWS) account](cloudknox-onboard-aws.md).
+- For information on how to onboard a Microsoft Azure subscription, see [Onboard a Microsoft Azure subscription](cloudknox-onboard-azure.md).
+- For information on how to enable or disable the controller after onboarding is complete, see [Enable or disable the controller](cloudknox-onboard-enable-controller-after-onboarding.md).
+- For information on how to add an account/subscription/project after onboarding is complete, see [Add an account/subscription/project after onboarding is complete](cloudknox-onboard-add-account-after-onboarding.md).
active-directory Cloudknox Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-overview.md
+
+ Title: What's CloudKnox Permissions Management?
+description: An introduction to CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# What's CloudKnox Permissions Management?
++
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+> [!Note]
+> Sign up for the CloudKnox Permissions Management public preview by filling [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR9AT7gfYe2NPtdIbYxQQX45UNEpIVjY4WUJNSUhMVjcyNzdYOFY2NFhISi4u).
+
+## Overview
+
+CloudKnox Permissions Management (CloudKnox) is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities (for example, over-privileged workload and user identities), actions, and resources across multi-cloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP).
+
+CloudKnox detects, automatically right-sizes, and continuously monitors unused and excessive permissions.
+
+Organizations have to consider permissions management as a central piece of their Zero Trust security to implement least privilege access across their entire infrastructure:
+
+- Organizations are increasingly adopting a multi-cloud strategy and are struggling with the lack of visibility and the increasing complexity of managing access permissions.
+- With the proliferation of identities and cloud services, the number of high-risk cloud permissions is exploding, expanding the attack surface for organizations.
+- IT security teams are under increased pressure to ensure access to their expanding cloud estate is secure and compliant.
+- The inconsistency of cloud providers' native access management models makes it even more complex for Security and Identity teams to manage permissions and enforce least privilege access policies across their entire environment.
++
+## Key use cases
+
+CloudKnox allows customers to address three key use cases: *discover*, *remediate*, and *monitor*.
+
+### Discover
+
+Customers can assess permission risks by evaluating the gap between permissions granted and permissions used.
+
+- Cross-cloud permissions discovery: Granular and normalized metrics for key cloud platforms: AWS, Azure, and GCP.
+- Permission Creep Index (PCI): An aggregated metric that periodically evaluates the level of risk associated with the number of unused or excessive permissions across your identities and resources. It measures how much damage identities can cause based on the permissions they have.
+- Permission usage analytics: Multi-dimensional view of permissions risk for all identities, actions, and resources.
+
+### Remediate
+
+Customers can right-size permissions based on usage, grant new permissions on-demand, and automate just-in-time access for cloud resources.
+
+- Automated deletion of permissions unused for the past 90 days.
+- Permissions on-demand: Grant identities permissions on-demand for a time-limited period or an as-needed basis.
++
+### Monitor
+
+Customers can detect anomalous activities with machine learning-powered (ML-powered) alerts and generate detailed forensic reports.
+
+- ML-powered anomaly detections.
+- Context-rich forensic reports around identities, actions, and resources to support rapid investigation and remediation.
+
+CloudKnox deepens Zero Trust security strategies by augmenting the least privilege access principle, allowing customers to:
+
+- Get comprehensive visibility: Discover which identity is doing what, where, and when.
+- Automate least privilege access: Use access analytics to ensure identities have the right permissions, at the right time.
+- Unify access policies across infrastructure as a service (IaaS) platforms: Implement consistent security policies across your cloud infrastructure.
+++
+## Next steps
+
+- For information on how to onboard CloudKnox in your organization, see [Enable CloudKnox in your organization](cloudknox-onboard-enable-tenant.md).
+- For a list of frequently asked questions (FAQs) about CloudKnox, see [FAQs](cloudknox-faqs.md).
active-directory Cloudknox Product Account Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-account-explorer.md
+
+ Title: The CloudKnox Permissions Management - View roles and identities that can access account information from an external account
+description: How to view information about identities that can access accounts from an external account in CloudKnox Permissions Management.
+++++ Last updated : 02/23/2022+++
+# View roles and identities that can access account information from an external account
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+You can view information about users, groups, and resources that can access account information from an external account in CloudKnox Permissions Management (CloudKnox).
+
+## Display information about users, groups, or tasks
+
+1. In CloudKnox, select the **Usage analytics** tab, and then, from the dropdown, select one of the following:
+
+ - **Users**
+ - **Group**
+ - **Active resources**
+ - **Active tasks**
+ - **Serverless functions**
+
+1. To choose an account from your authorization system, select the lock icon in the left panel.
+1. In the **Authorization systems** pane, select an account, then select **Apply**.
+1. To choose a user, role, or group, select the person icon.
+1. Select a user or group, then select **Apply**.
+1. To choose an account from your authorization system, select it from the Authorization Systems menu.
+1. In the user type filter, select user, role, or group.
+1. In the **Task** filter, select **All** or **High-risk tasks**, then select **Apply**.
+1. To delete a task, select **Delete**, then select **Apply**.
+
+## Export information about users, groups, or tasks
+
+To export the data in comma-separated values (CSV) file format, select **Export** from the top-right corner of the table.
+
+## View users and roles
+1. To view users and roles, select the lock icon, and then select the person icon to open the **Users** pane.
+1. To view the **Role summary**, select the "eye" icon to the right of the role name.
+
+ The following details display:
+ - **Policies**: A list of all the policies attached to the role.
+ - **Trusted entities**: The identities from external accounts that can assume this role.
+
+1. To view all the identities from various accounts that can assume this role, select the down arrow to the left of the role name.
+1. To view a graph of all the identities that can access the specified account and through which role(s), select the role name.
+
+ If CloudKnox is monitoring the external account, it lists specific identities from the accounts that can assume this role. Otherwise, it lists the identities declared in the **Trusted entity** section.
+
+ **Connecting roles**: Lists the following roles for each account:
+ - *Direct roles* that are trusted by the account role.
+ - *Intermediary roles* that aren't directly trusted by the account role but are assumable by identities through role-chaining.
+
+1. To view all the roles from that account that are used to access the specified account, select the down arrow to the left of the account name.
+1. To view the trusted identities declared by the role, select the down arrow to the left of the role name.
+
+ The trusted identities for the role are listed only if the account is being monitored by CloudKnox.
+
+1. To view the role definition, select the "eye" icon to the right of the role name.
+
+ When you select the down arrow and expand details, a search box is displayed. Enter your criteria in this box to search for specific roles.
+
+ **Identities with access**: Lists the identities that come from external accounts:
+ - To view all the identities from that account that can access the specified account, select the down arrow to the left of the account name.
+ - To view the **Role summary** for EC2 instances and Lambda functions, select the "eye" icon to the right of the identity name.
+ - To view a graph of how the identity can access the specified account and through which role(s), select the identity name.
+
+1. The **Info** tab displays the **Privilege creep index** and **Service control policy (SCP)** information about the account.
+
+For more information about the **Privilege creep index** and SCP information, see [View key statistics and data about your authorization system](cloudknox-ui-dashboard.md).
active-directory Cloudknox Product Account Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-account-settings.md
+
+ Title: View personal and organization information in CloudKnox Permissions Management
+description: How to view personal and organization information in the Account settings dashboard in CloudKnox Permissions Management.
+++++ Last updated : 02/23/2022+++
+# View personal and organization information
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Account settings** dashboard in CloudKnox Permissions Management (CloudKnox) allows you to view personal information, passwords, and account preferences.
+This information can't be modified because the user information is pulled from Azure AD. Only the **User Session Time (min)** setting can be changed.
+
+## View personal information
+
+1. In the CloudKnox home page, select the down arrow to the right of the **User** (your initials) menu, and then select **Account settings**.
+
+ The **Personal information** box displays your **First name**, **Last name**, and the **Email address** that was used to register your account on CloudKnox.
+
+## View current organization information
+
+1. In the CloudKnox home page, select the down arrow to the right of the **User** (your initials) menu, and then select **Account settings**.
+
+ The **Current organization information** displays the **Name** of your organization, the **Tenant ID** box, and the **User session timeout (min)**.
+
+1. To change duration of the **User session timeout (min)**, select **Edit** (the pencil icon), and then enter the number of minutes before you want a user session to time out.
+1. Select the check mark to confirm your new setting.
++
+## Next steps
+
+- For information about how to manage user information, see [Manage users and groups with the User management dashboard](cloudknox-ui-user-management.md).
+- For information about how to view information about active and completed tasks, see [View information about active and completed tasks](cloudknox-ui-tasks.md).
+- For information about how to select group-based permissions settings, see [Select group-based permissions settings](cloudknox-howto-create-group-based-permissions.md).
active-directory Cloudknox Product Audit Trail https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-audit-trail.md
+
+ Title: Filter and query user activity in CloudKnox Permissions Management
+description: How to filter and query user activity in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Filter and query user activity
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Audit** dashboard in CloudKnox Permissions Management (CloudKnox) details all user activity performed in your authorization system. It captures all high-risk activity in a centralized location, and allows system administrators to query the logs. The **Audit** dashboard enables you to:
+
+- Create and save new queries so you can access key data points easily.
+- Query across multiple authorization systems in one query.
+
+## Filter information by authorization system
+
+If you haven't used filters before, the default filter is the first authorization system in the filter list.
+
+If you have used filters before, the default filter is the last filter you selected.
+
+1. To display the **Audit** dashboard, on the CloudKnox home page, select **Audit**.
+
+1. To select your authorization system type, in the **Authorization system type** box, select Amazon Web Services (**AWS**), Microsoft Azure (**Azure**), or Google Cloud Platform (**GCP**).
+
+1. To select your authorization system, in the **Authorization system** box:
+
+ - From the **List** subtab, select the accounts you want to use.
+ - From the **Folders** subtab, select the folders you want to use.
+
+1. To view your query results, select **Apply**.
+
+## Create, view, modify, or delete a query
+
+There are several different query parameters you can configure individually or in combination. The query parameters and corresponding instructions are listed in the following sections.
+
+- To create a new query, select **New query**.
+- To view an existing query, select **View** (the eye icon).
+- To edit an existing query, select **Edit** (the pencil icon).
+- To delete a function line in a query, select **Delete** (the minus sign **-** icon).
+- To create multiple queries at one time, select **Add new tab** to the right of the **Query** tabs that are displayed.
+
+ You can open a maximum of six query tabs at the same time. A message appears when you've reached the maximum.
+
+## Create a query with specific parameters
+
+### Create a query with a date
+
+1. In the **New query** section, the default parameter displayed is **Date In "Last day"**.
+
+ The first-line parameter always defaults to **Date** and can't be deleted.
+
+1. To edit date details, select **Edit** (the pencil icon).
+
+ To view query details, select **View** (the eye icon).
+
+1. Select **Operator**, and then select an option:
+ - **In**: Select this option to set a time range from the past day to the past year.
+ - **Is**: Select this option to choose a specific date from the calendar.
+ - **Custom**: Select this option to set a date range from the **From** and **To** calendars.
+
+1. To run the query on the current selection, select **Search**.
+
+1. To save your query, select **Save**.
+
+ To clear the recent selections, select **Reset**.
+
+### View operator options for identities
+
+The **Operator** menu displays the following options depending on the identity you select in the first dropdown:
+
+- **Is** / **Is Not**: View a list of all available usernames. You can either select or enter a username in the box.
+- **Contains** / **Not Contains**: Enter text that the **Username** should or shouldn't contain, for example, *CloudKnox*.
+- **In** / **Not In**: View a list of all available usernames and select multiple usernames.
+
+### Create a query with a username
+
+1. In the **New query** section, select **Add**.
+
+1. From the menu, select **Username**.
+
+1. From the **Operator** menu, select the required option.
+
+1. To add criteria to this section, select **Add**.
+
+ You can change the operation between **And** / **Or** statements, and select other criteria. For example, the first set of criteria selected can be **Is** with the username **Test**.
+
+1. Select the plus (**+**) sign, select **Or** with **Contains**, and then enter a username, for example, *CloudKnox*.
+
+1. To remove a row of criteria, select **Remove** (the minus sign **-** icon).
+
+1. To run the query on the current selection, select **Search**.
+
+1. To clear the recent selections, select **Reset**.
+
+### Create a query with a resource name
+
+1. In the **New query** section, select **Add**.
+
+1. From the menu, select **Resource name**.
+
+1. From the **Operator** menu, select the required option.
+
+1. To add criteria to this section, select **Add**.
+
+ You can change the operation between **And** / **Or** statements, and select other criteria. For example, the first set of criteria selected can be **Is** with resource name **Test**.
+
+1. Select the plus (**+**) sign, select **Or** with **Contains**, and then enter a resource name, for example, *CloudKnox*.
+
+1. To remove a row of criteria, select **Remove** (the minus sign **-** icon).
+
+1. To run the query on the current selection, select **Search**.
+
+1. To clear the recent selections, select **Reset**.
+
+### Create a query with a resource type
+
+1. In the **New query** section, select **Add**.
+
+1. From the menu, select **Resource type**.
+
+1. From the **Operator** menu, select the required option.
+
+1. To add criteria to this section, select **Add**.
+
+1. Change the operation between **And** / **Or** statements, and select other criteria. For example, the first set of criteria selected can be **Is** with resource type **s3::bucket**.
+
+1. Select the plus (**+**) sign, select **Or** with **Is**, and then enter or select `ec2::instance`.
+
+1. To remove a row of criteria, select **Remove** (the minus sign **-** icon).
+
+1. To run the query on the current selection, select **Search**.
+
+1. To clear the recent selections, select **Reset**.
++
+### Create a query with a task name
+
+1. In the **New query** section, select **Add**.
+
+1. From the menu, select **Task name**.
+
+1. From the **Operator** menu, select the required option.
+
+1. To add criteria to this section, select **Add**.
+
+1. Change the operation between **And** / **Or** statements, and select other criteria. For example, the first set of criteria selected can be **Is** with task name **s3:CreateBucket**.
+
+1. Select **Add**, select **Or** with **Is**, and then enter or select `ec2:TerminateInstance`.
+
+1. To remove a row of criteria, select **Remove** (the minus sign **-** icon).
+
+1. To run the query on the current selection, select **Search**.
+
+1. To clear the recent selections, select **Reset**.
+
+### Create a query with a state
+
+1. In the **New query** section, select **Add**.
+
+1. From the menu, select **State**.
+
+1. From the **Operator** menu, select the required option.
+
+ - **Is** / **Is not**: Allows a user to select in the value field and select **Authorization failure**, **Error**, or **Success**.
+
+1. To add criteria to this section, select **Add**.
+
+1. Change the operation between **And** / **Or** statements, and select other criteria. For example, the first set of criteria selected can be **Is** with State **Authorization failure**.
+
+1. Select the **Add** icon, select **Or** with **Is**, and then select **Success**.
+
+1. To remove a row of criteria, select **Remove** (the minus sign **-** icon).
+
+1. To run the query on the current selection, select **Search**.
+
+1. To clear the recent selections, select **Reset**.
+
+### Create a query with a role name
+
+1. In the **New query** section, select **Add**.
+
+2. From the menu, select **Role Name**.
+
+3. From the **Operator** menu, select the required option.
+
+4. To add criteria to this section, select **Add**.
+
+5. Change the operation between **And** / **Or** statements, and select other criteria. For example, the first set of criteria selected can be **Contains** with free text **Test**.
+
+6. Select the **Add** icon, select **Or** with **Contains**, and then enter your criteria, for example *CloudKnox*.
+
+7. To remove a row of criteria, select **Remove** (the minus sign **-** icon).
+
+8. To run the query on the current selection, select **Search**.
+
+9. To clear the recent selections, select **Reset**.
+
+### Create a query with a role session name
+
+1. In the **New query** section, select **Add**.
+
+2. From the menu, select **Role session name**.
+
+3. From the **Operator** menu, select the required option.
+
+4. To add criteria to this section, select **Add**.
+
+5. Change the operation between **And** / **Or** statements, and select other criteria. For example, the first set of criteria selected can be **Contains** with free text **Test**.
+
+6. Select the **Add** icon, select **Or** with **Contains**, and then enter your criteria, for example *CloudKnox*.
+
+7. To remove a row of criteria, select **Remove** (the minus sign **-** icon).
+
+8. To run the query on the current selection, select **Search**.
+
+9. To clear the recent selections, select **Reset**.
+
+### Create a query with an access key ID
+
+1. In the **New query** section, select **Add**.
+
+2. From the menu, select **Access Key ID**.
+
+3. From the **Operator** menu, select the required option.
+
+4. To add criteria to this section, select **Add**.
+
+5. Change the operation between **And** / **Or** statements, and select other criteria. For example, the first set of criteria selected can be **Contains** with free text `AKIAIFXNDW2Z2MPEH5OQ`.
+
+6. Select the **Add** icon, select **Or** with **Not** **Contains**, and then enter `AKIAVP2T3XG7JUZRM7WU`.
+
+7. To remove a row of criteria, select **Remove** (the minus sign **-** icon).
+
+8. To run the query on the current selection, select **Search**.
+
+9. To clear the recent selections, select **Reset**.
+
+### Create a query with a tag key
+
+1. In the **New query** section, select **Add**.
+
+2. From the menu, select **Tag Key**.
+
+3. From the **Operator** menu, select the required option.
+
+4. To add criteria to this section, select **Add**.
+
+5. Change the operation between **And** / **Or** statements, and select other criteria. For example, the first set of criteria selected can be **Is**; then type in or select **Test**.
+
+6. Select the **Add** icon, select **Or** with **Is**, and then enter your criteria, for example *CloudKnox*.
+
+7. To remove a row of criteria, select **Remove** (the minus sign **-** icon).
+
+8. To run the query on the current selection, select **Search**.
+
+9. To clear the recent selections, select **Reset**.
+
+### Create a query with a tag key value
+
+1. In the **New query** section, select **Add**.
+
+2. From the menu, select **Tag Key Value**.
+
+3. From the **Operator** menu, select the required option.
+
+4. To add criteria to this section, select **Add**.
+
+5. Change the operation between **And** / **Or** statements, and select other criteria. For example, the first set of criteria selected can be **Is**; then type in or select **Test**.
+
+6. Select the **Add** icon, select **Or** with **Is**, and then enter your criteria, for example *CloudKnox*.
+
+7. To remove a row of criteria, select **Remove** (the minus sign **-** icon).
+
+8. To run the query on the current selection, select **Search**.
+
+9. To clear the recent selections, select **Reset**.
+
+### View query results
+
+1. In the **Activity** table, your query results display in columns.
+
+ The results display all executed tasks that aren't read-only.
+
+1. To sort each column by ascending or descending value, select the up or down arrows next to the column name.
+
+ - **Identity details**: The name of the identity, for example the name of the role session performing the task.
+
+ - To view the **Raw events summary**, which displays the full details of the event, next to the **Name** column, select **View**.
+
+ - **Resource name**: The name of the resource on which the task is being performed.
+
+ If the column displays **Multiple**, it means multiple resources are listed in the column.
+
+1. To view a list of all resources, hover over **Multiple**.
+
+ - **Resource type**: Displays the type of resource, for example, *Key* (encryption key) or *Bucket* (storage).
+ - **Task name**: The name of the task that was performed by the identity.
+
+ An exclamation mark (**!**) next to the task name indicates that the task failed.
+
+ - **Date**: The date when the task was performed.
+
+ - **IP address**: The IP address from where the user performed the task.
+
+ - **Authorization system**: The authorization system name in which the task was performed.
+
+1. To download the results in comma-separated values (CSV) file format, select **Download**.
+
+## Save a query
+
+1. After you complete your query selections from the **New query** section, select **Save**.
+
+2. In the **Query name** box, enter a name for your query, and then select **Save**.
+
+3. To save a query with a different name, select the ellipses (**...**) next to **Save**, and then select **Save as**.
+
+4. Make your query selections from the **New query** section, select the ellipses (**...**), and then select **Save as**.
+
+5. To save a new query, in the **Save query** box, enter the name for the query, and then select **Save**.
+
+ The following message displays in green at the top of the screen to indicate the query was saved successfully: **Saved query as XXX**.
+
+6. To save an existing query you've modified, select the ellipses (**...**).
+
+ - To save a modified query under the same name, select **Save**.
+ - To save a modified query under a different name, select **Save as**.
+
+### View a saved query
+
+1. Select **Saved Queries**, and then select **Load queries**.
+
+ A message box opens with the following options: **Load with the saved authorization system** or **Load with the currently selected authorization system**.
+
+1. Select the appropriate option, and then select **Load query**.
+
+1. View the query information:
+
+ - **Query**: Displays the name of the saved query.
+ - **Query type**: Displays whether the query is a *System* query or a *Custom* query.
+ - **Schedule**: Displays how often a report will be generated. You can schedule a one-time report or a monthly report.
+ - **Next on**: Displays the date and time the next report will be generated.
+ - **Format**: Displays the output format for the report, for example, CSV.
+
+1. To view or set schedule details, select the gear icon, select **Create schedule**, and then set the details.
+
+ If a schedule has already been created, select the gear icon to open the **Edit schedule** box.
+
+ - **Repeats**: Sets how often the report should repeat.
+ - **Date**: Sets the date when you want to receive the report.
+ - **hh:mm**: Sets the specific time when you want to receive the report.
+ - **Report file format**: Select the output type for the file, for example, CSV.
+ - **Share report with people**: The email address of the user who is creating the schedule is displayed in this field. You can add other email addresses.
+
+1. After selecting your options, select **Schedule**.
++
+### Save a query under a different name
+
+- Select the ellipses (**...**).
+
+ System queries have only one option:
+
+ - **Duplicate**: Creates a duplicate of the query and names the file *Copy of XXX*.
+
+ Custom queries have the following options:
+
+ - **Rename**: Enter the new name of the query and select **Save**.
+ - **Delete**: Delete the saved query.
+
+ The **Delete query** box opens, asking you to confirm that you want to delete the query. Select **Yes** or **No**.
+
+ - **Duplicate**: Creates a duplicate of the query and names it *Copy of XXX*.
+ - **Delete schedule**: Deletes the schedule details for this query.
+
+ This option isn't available if you haven't yet saved a schedule.
+
+ The **Delete schedule** box opens, asking you to confirm that you want to delete the schedule. Select **Yes** or **No**.
++
+## Export the results of a query as a report
+
+- To export the results of the query, select **Export**.
+
+ CloudKnox exports the results in comma-separated values (**CSV**) format, portable document format (**PDF**), or Microsoft Excel Open XML Spreadsheet (**XLSX**) format.
++
+## Next steps
+
+- For information on how to view how users access information, see [Use queries to see how users access information](cloudknox-ui-audit-trail.md).
+- For information on how to create a query, see [Create a custom query](cloudknox-howto-create-custom-queries.md).
+- For information on how to generate an on-demand report from a query, see [Generate an on-demand report from a query](cloudknox-howto-audit-trail-results.md).
active-directory Cloudknox Product Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-dashboard.md
+
+ Title: View data about the activity in your authorization system in CloudKnox Permissions Management
+description: How to view data about the activity in your authorization system in the CloudKnox Dashboard in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++++
+# View data about the activity in your authorization system
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The CloudKnox Permissions Management (CloudKnox) **Dashboard** provides an overview of the authorization system and account activity being monitored. You can use this dashboard to view data collected from your Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) authorization systems.
+
+## View data about your authorization system
+
+1. In the CloudKnox home page, select **Dashboard**.
+1. From the **Authorization systems type** dropdown, select **AWS**, **Azure**, or **GCP**.
+1. Select the **Authorization system** box to display a **List** of accounts and **Folders** available to you.
+1. Select the accounts and folders you want, and then select **Apply**.
+
+ The **Permission creep index (PCI)** chart updates to display information about the accounts and folders you selected. The number of days since the information was last updated displays in the upper right corner.
+
+1. In the Permission creep index (PCI) graph, select a bubble.
+
+ The bubble displays the number of identities that are considered high-risk.
+
+ *High-risk* refers to the number of users who have permissions that exceed their normal or required usage.
+
+1. Select the box to display detailed information about the identities contributing to the **Low PCI**, **Medium PCI**, and **High PCI**.
+
+1. The **Highest PCI change** displays the authorization system name with the PCI number and the change number for the last seven days, if applicable.
+
+ - To view all the changes and PCI ratings in your authorization system, select **View all**.
+
+1. To return to the PCI graph, select the **Graph** icon in the upper right of the list box.
+
+For more information about the CloudKnox **Dashboard**, see [View key statistics and data about your authorization system](cloudknox-ui-dashboard.md).
+
+## View user data on the PCI heat map
+
+The **Permission creep index (PCI)** heat map shows the incurred risk of users with access to high-risk privileges. The distribution graph displays all the users who contribute to the privilege creep. It displays how many users contribute to a particular score. For example, if the score from the PCI chart is 14, the graph shows how many users have a score of 14.
+
+- To view detailed data about a user, select the number.
+
+ The PCI trend graph shows you the historical trend of the PCI score over the last 90 days.
+
+- To download the **PCI History** report, select **Download** (the down arrow icon).
++
+## View information about users, roles, resources, and PCI trends
+
+To view specific information about the following, select the number displayed on the heat map.
+
+- **Users**: Displays the total number of users and how many fall into the high, medium, and low categories.
+- **Roles**: Displays the total number of roles and how many fall into the high, medium, and low categories.
+- **Resources**: Displays the total number of resources and how many fall into the high, medium, and low categories.
+- **PCI trend**: Displays a line graph of the PCI trend over the last several weeks.
+
+## View identity findings
+
+The **Identity** section below the heat map on the left side of the page shows all the relevant findings about identities, including roles that can access secret information, roles that are inactive, over provisioned active roles, and so on.
+
+- To expand the full list of identity findings, select **All findings**.
+
+## View resource findings
+
+The **Resource** section below the heat map on the right side of the page shows all the relevant findings about your resources. It includes unencrypted S3 buckets, open security groups, managed keys, and so on.
+
+## Next steps
+
+- For more information about how to view key statistics and data in the Dashboard, see [View key statistics and data about your authorization system](cloudknox-ui-dashboard.md).
active-directory Cloudknox Product Data Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-data-inventory.md
+
+ Title: CloudKnox Permissions Management - Display an inventory of created resources and licenses for your authorization system
+description: How to display an inventory of created resources and licenses for your authorization system in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Display an inventory of created resources and licenses for your authorization system
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+You can use the **Inventory** dashboard in CloudKnox Permissions Management (CloudKnox) to display an inventory of created resources and licensing information for your authorization system and its associated accounts.
+
+## View resources created for your authorization system
+
+1. To access your inventory information, in the CloudKnox home page, select **Settings** (the gear icon).
+1. Select the **Inventory** tab, select the **Inventory** subtab, and then select your authorization system type:
+
+ - **AWS** for Amazon Web Services.
+ - **Azure** for Microsoft Azure.
+ - **GCP** for Google Cloud Platform.
+
+ The **Inventory** tab displays information pertinent to your authorization system type.
+
+1. To change the columns displayed in the table, select **Columns**, and then select the information you want to display.
+
+ - To discard your changes, select **Reset to default**.
+
+## View the number of licenses associated with your authorization system
+
+1. To access licensing information about your data sources, in the CloudKnox home page, select **Settings** (the gear icon).
+
+1. Select the **Inventory** tab, select the **Licensing** subtab, and then select your authorization system type.
+
+ The **Licensing** table displays the following information pertinent to your authorization system type:
+
+ - The names of your accounts in the **Authorization system** column.
+ - The number of **Compute** licenses.
+ - The number of **Serverless** licenses.
+ - The number of **Compute containers**.
+ - The number of **Databases**.
+ - The **Total number of licenses**.
++
+## Next steps
+
+- For information about viewing and configuring settings for collecting data from your authorization system and its associated accounts, see [View and configure settings for data collection](cloudknox-product-data-sources.md).
active-directory Cloudknox Product Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-data-sources.md
+
+ Title: View and configure settings for data collection from your authorization system in CloudKnox Permissions Management
+description: How to view and configure settings for collecting data from your authorization system in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View and configure settings for data collection
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
++
+You can use the **Data Collectors** dashboard in CloudKnox Permissions Management (CloudKnox) to view and configure settings for collecting data from your authorization systems. It also provides information about the status of the data collection.
+
+## Access and view data sources
+
+1. To access your data sources, in the CloudKnox home page, select **Settings** (the gear icon). Then select the **Data Collectors** tab.
+
+1. On the **Data Collectors** dashboard, select your authorization system type:
+
+ - **AWS** for Amazon Web Services.
+ - **Azure** for Microsoft Azure.
+ - **GCP** for Google Cloud Platform.
+
+1. To display specific information about an account:
+
+ 1. Enter the following information:
+
+ - **Uploaded on**: Select **All** accounts, **Online** accounts, or **Offline** accounts.
+ - **Transformed on**: Select **All** accounts, **Online** accounts, or **Offline** accounts.
+ - **Search**: Enter an ID or Internet Protocol (IP) address to find a specific account.
+
+ 1. Select **Apply** to display the results.
+
+ Select **Reset Filter** to discard your settings.
+
+1. The following information displays:
+
+ - **ID**: The unique identification number for the data collector.
+ - **Data types**: Displays the data types that are collected:
+ - **Entitlements**: The permissions of all identities and resources for all the configured authorization systems.
+ - **Recently uploaded on**: Displays whether the entitlement data is being collected.
+
+ The status displays *ONLINE* if the data collection has no errors and *OFFLINE* if there are errors.
+ - **Recently transformed on**: Displays whether the entitlement data is being processed.
+
+ The status displays *ONLINE* if the data processing has no errors and *OFFLINE* if there are errors.
+ - The **Tenant ID**.
+ - The **Tenant name**.
+
+## Modify a data collector
+
+1. Select the ellipses **(...)** at the end of the row in the table.
+1. Select **Edit Configuration**.
+
+ The **M-CIEM Onboarding - Summary** box displays.
+
+1. Select **Edit** (the pencil icon) for each field you want to change.
+1. Select **Verify now & save**.
+
+ To verify your changes later, select **Save & verify later**.
+
+ When your changes are saved, the following message displays: **Successfully updated configuration.**
+
+## Delete a data collector
+
+1. Select the ellipses **(...)** at the end of the row in the table.
+1. Select **Delete Configuration**.
+
+ The **M-CIEM Onboarding - Summary** box displays.
+1. Select **Delete**.
+1. Check your email for a one-time password (OTP) code, and enter it in **Enter OTP**.
+
+ If you don't receive an OTP, select **Resend OTP**.
+
+ The following message displays: **Successfully deleted configuration.**
+
+## Start collecting data from an authorization system
+
+1. Select the **Authorization Systems** tab, and then select your authorization system type.
+1. Select the ellipses **(...)** at the end of the row in the table.
+1. Select **Collect Data**.
+
+ A message displays to confirm data collection has started.
+
+## Stop collecting data from an authorization system
+
+1. Select the ellipses **(...)** at the end of the row in the table.
+1. To delete your authorization system, select **Delete**.
+
+ The **Validate OTP To Delete Authorization System** box displays.
+
+1. Enter the OTP code.
+1. Select **Verify**.
+
+## Next steps
+
+- For information about viewing an inventory of created resources and licensing information for your authorization system, see [Display an inventory of created resources and licenses for your authorization system](cloudknox-product-data-inventory.md)
active-directory Cloudknox Product Define Permission Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-define-permission-levels.md
+
+ Title: Define and manage users, roles, and access levels in CloudKnox Permissions Management
+description: How to define and manage users, roles, and access levels in CloudKnox Permissions Management User management dashboard.
+++++++ Last updated : 02/23/2022+++
+# Define and manage users, roles, and access levels
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+In CloudKnox Permissions Management (CloudKnox), a key component of the interface is the User management dashboard. This article describes how system administrators can define and manage users, their roles, and their access levels in the system.
+
+## The User management dashboard
+
+The CloudKnox User management dashboard provides a high-level overview of:
+
+- Registered and invited users.
+- Permissions allowed for each user within a given system.
+- Recent user activity.
+
+It also provides the functionality to invite or delete a user, and to edit, view, and customize permissions settings.
++
+## Manage users for customers without SAML integration
+
+Follow this process to invite users if the customer hasn't enabled SAML integration with the CloudKnox application.
+
+### Invite a user to CloudKnox
+
+Inviting a user to CloudKnox adds the user to the system and allows system administrators to assign permissions to those users. Follow the steps below to invite a user to CloudKnox.
+
+1. To invite a user to CloudKnox, select the down caret icon next to the **User** icon on the right of the screen, and then select **User Management**.
+2. From the **Users** tab, select **Invite User**.
+3. From the **Set User Permission** window, in the **User** text box, enter the user's email address.
+4. Under **Permission**, select the applicable option.
+
+ - **Admin for All Authorization System Types**: **View**, **Control**, and **Approve** permissions for all Authorization System Types.
+
+ 1. Select **Next**.
+ 2. Select **Requestor for User** for each authorization system, if applicable.
+
+ A user must have an account with a valid email address in the authorization system to select **Requestor for User**. If a user doesn't exist in the authorization system, **Requestor for User** is grayed out.
+ 3. Optional: To request access for multiple other identities, under **Requestor for Other Users**, select **Add**, and then select **Users**.
+
+ For example, a user may have various roles in different authorization systems, so they can select the **Add** icon and the **Users** icon to request access for all their accounts.
+ 4. On the **Add Users** screen, enter the user's name or ID in the **User Search** box and select all applicable users. Then select **Add**.
+
+ - **Admin for Selected Authorization System Types**: **View**, **Control**, and **Approve** permissions for selected Authorization System Types.
+
+ 1. Select **Viewer**, **Controller**, or **Approver** for the appropriate authorization system(s).
+ 2. Select **Next**.
+ 3. Select **Requestor for User** for each authorization system, if applicable.
+
+ A user must have an account with a valid email address in the authorization system to select **Requestor for User**. If a user doesn't exist in the authorization system, **Requestor for User** is grayed out.
+ 4. Optional: To request access for multiple other identities, under **Requestor for Other Users**, select **Add**, and then select **Users**.
+
+ For example, a user may have various roles in different authorization systems, so they can select **Add**, and then select **Users** to request access for all their accounts.
+ 5. On the **Add Users** screen, enter the user's name or ID in the **User Search** box and select all applicable users. Then select **Add**.
+
+ - **Custom**: **View**, **Control**, and **Approve** permissions for specific accounts in **Auth System Types**.
+
+ 1. Select **Next**.
+
+ The default view displays the **List** section.
+ 2. Select the appropriate boxes for **Viewer**, **Controller**, or **Approver**.
+
+ For access to all authorization system types, select **All (Current and Future)**.
+ 1. Select **Next**.
+ 1. Select **Requestor for User** for each authorization system, if applicable.
+
+ A user must have an account with a valid email address in the authorization system to select **Requestor for User**. If a user doesn't exist in the authorization system, **Requestor for User** is grayed out.
+ 5. Optional: To request access for multiple other identities, under **Requestor for Other Users**, select **Add**, and then select **Users**.
+
+ For example, a user may have various roles in different authorization systems, so they can select **Add**, and then select **Users** to request access for all their accounts.
+ 6. On the **Add Users** screen, enter the user's name or ID in the **User Search** box and select all applicable users. Then select **Add**.
+
+5. Select **Save**.
+
+ The following message displays in green at the top of the screen: **New User Has Been Invited Successfully**.
+++
+## Manage users for customers with SAML integration
+
+Follow this process to invite users if the customer has enabled SAML integration with the CloudKnox application.
+
+### Create a permission in CloudKnox
+
+Creating a permission directly in CloudKnox allows system administrators to assign permissions to specific users. The following steps help you to create a permission.
+
+- On the right side of the screen, select the down caret icon next to **User**, and then select **User management**.
+
+- For **Users**:
+ 1. To create permissions for a specific user, select the **Users** tab, and then select **Permission.**
+ 2. From the **Set User Permission** window, enter the user's email address in the **User** text box.
+ 3. Under **Permission**, select the applicable button. Then expand the menu to view instructions for each option.
+ - **Admin for All Authorization System Types**: **View**, **Control**, and **Approve** permissions for all Authorization System Types.
+ 1. Select **Next**.
+ 2. Check **Requestor for User** for each authorization system, if applicable.
+
+ A user must have an account with a valid email address in the authorization system to select **Requestor for User**. If a user doesn't exist in the authorization system, **Requestor for User** is grayed out.
+
+ 3. Optional: To request access for multiple other identities, under **Requestor for Other Users**, select **Add**, and then select **Users**.
+
+ For example, a user may have various roles in different authorization systems, so they can select **Add**, and then select **Users** to request access for all their accounts.
+
+ 4. On the **Add Users** screen, enter the user's name or ID in the **User Search** box and select all applicable users. Then select **Add**.
+
+ - **Admin for Selected Authorization System Types**: **View**, **Control**, and **Approve** permissions for selected Authorization System Types.
+ 1. Check **Viewer**, **Controller**, or **Approver** for the appropriate authorization system(s).
+ 2. Select **Next**.
+ 3. Check **Requestor for User** for each authorization system, if applicable.
+
+ A user must have an account with a valid email address in the authorization system to select **Requestor for User**. If a user doesn't exist in the authorization system, **Requestor for User** is grayed out.
+
+ 4. Optional: To request access for multiple other identities, under **Requestor for Other Users**, select **Add**, and then select **Users**.
+
+ For example, a user may have various roles in different authorization systems, so they can select **Add**, and then select **Users** to request access for all their accounts.
+ 5. On the **Add Users** screen, enter the user's name or ID in the **User Search** box and select all applicable users. Then select **Add**.
+ - **Custom**: **View**, **Control**, and **Approve** permissions for specific accounts in **Auth System Types**.
+
+ 1. Select **Next**.
+
+ The default view displays the **List** tab, which displays individual authorization systems.
+ - To view groups of authorization systems organized into folders, select the **Folder** tab.
+ 2. Check the appropriate boxes for **Viewer**, **Controller**, or **Approver**.
+
+ For access to all authorization system types, select **All (Current and Future)**.
+ 3. Select **Next**.
+ 4. Check **Requestor for User** for each authorization system, if applicable.
+
+ A user must have an account with a valid email address in the authorization system to select **Requestor for User**. If a user doesn't exist in the authorization system, **Requestor for User** is grayed out.
+ 5. Optional: To request access for multiple other identities, under **Requestor for Other Users**, select **Add**, and then select **Users**.
+
+ For example, a user can have various roles in different authorization systems, so they can select **Add**, and then select **Users** to request access for all their accounts.
+
+ 6. On the **Add Users** screen, enter the user's name or ID in the **User Search** box and select all applicable users. Then select **Add**.
+
+ 4. Select **Save**.
+
+ The following message displays in green at the top of the screen:
+ **New User Has Been Created Successfully**.
+ 5. The new user receives an email invitation to log in to CloudKnox.
+
+### The Pending tab
+
+1. To view the created permission, select the **Pending** tab. The system administrator can view the following details:
+ - **Email Address**: Displays the email address of the invited user.
+ - **Permissions**: Displays each service account and if the user has permissions as a **Viewer**, **Controller**, **Approver**, or **Requestor**.
+ - **Invited By**: Displays the email address of the person who sent the invitation.
+ - **Sent**: Displays the date the invitation was sent to the user.
+2. To make changes to the following, select the ellipses **(...)** in the far right column.
+ - **View Permissions**: Displays a list of accounts for which the user has permissions.
+ - **Edit Permissions**: System administrators can edit a user's permissions.
+ - **Delete**: System administrators can delete a permission.
+ - **Reinvite**: System administrators can resend the invitation if the user didn't receive the email invite.
+
+ When a user registers with CloudKnox, they move from the **Pending** tab to the **Registered** tab.
+
+### The Registered tab
+
+- For **Users**:
+
+ 1. The **Registered** tab provides a high-level overview of user details to system administrators:
+ - The **Name/Email Address** column lists the name and email address of the user.
+ - The **Permissions** column lists each authorization system, and each type of permission.
+
+ If a user has all permissions for all authorization systems, **Admin for All Authorization Types** displays across all columns. If a user only has some permissions, numbers display in each column they have permissions for. For example, if the number "3" is listed in the **Viewer** column, the user has viewer permission for three accounts within that authorization system.
+ - The **Joined On** column records when the user registered for CloudKnox.
+ - The **Recent Activity** column displays the date when a user last performed an activity.
+ - The **Search** button allows a system administrator to search for a user by name; all users who match the criteria are displayed.
+ - The **Filters** option allows a system administrator to filter by specific details. When the filter option is selected, the **Authorization System** box displays.
+
+ To display all authorization system accounts, select **All**. Then select the appropriate boxes for the accounts that need to be viewed.
+ 2. To make changes to the following, select the ellipses **(...)** in the far right column:
+ - **View Permissions**: Displays a list of accounts for which the user has permissions.
+ - **Edit Permissions**: System administrators can edit the accounts for which a user has permissions.
+ - **Remove Permissions**: System administrators can remove permissions from a user.
+
+- For **Groups**:
+ 1. To create permissions for a specific user, select the **Groups** tab, and then select **Permission**.
+ 2. From the **Set Group Permission** window, enter the name of the group in the **Group Name** box.
+
+ The identity provider creates groups.
+
+ Some users may be part of multiple groups. In this case, the user's overall permissions are the union of the permissions assigned to the groups the user is a member of (see the short sketch at the end of this section).
+ 3. Under **Permission**, select the applicable button and expand the menu to view instructions for each option.
+
+ - **Admin for All Authorization System Types**: **View**, **Control**, and **Approve** permissions for all Authorization System Types.
+ 1. Select **Next**.
+ 2. Check **Requestor for User** for each authorization system, if applicable.
+
+ A user must have an account with a valid email address in the authorization system to select **Requestor for User**. If a user doesn't exist in the authorization system, **Requestor for User** is grayed out.
+ 3. Optional: To request access for multiple other identities, under **Requestor for Other Users**, select **Add**, and then select **Users**.
+
+ For example, a user may have various roles in different authorization systems, so they can select **Add**, and then select **Users** to request access for all their accounts.
+
+ 4. On the **Add Users** screen, enter the user's name or ID in the **User Search** box and select all applicable users. Then select **Add**.
+
+ - **Admin for Selected Authorization System Types**: **View**, **Control**, and **Approve** permissions for selected Authorization System Types.
+ 1. Check **Viewer**, **Controller**, or **Approver** for the appropriate authorization system(s).
+ 2. Select **Next**.
+ 3. Check **Requestor for User** for each authorization system, if applicable.
+
+ A user must have an account with a valid email address in the authorization system to select **Requestor for User**. If a user doesn't exist in the authorization system, **Requestor for User** is grayed out.
+ 4. Optional: To request access for multiple other identities, under **Requestor for Other Users**, select **Add**, and then select **Users**.
+
+ For example, a user may have various roles in different authorization systems, so they can select **Add**, and then select **Users** to request access for all their accounts.
+
+ 5. On the **Add Users** screen, enter the user's name or ID in the **User Search** box and select all applicable users. Then select **Add**.
+
+ - **Custom**: **View**, **Control**, and **Approve** permissions for specific accounts in Auth System Types.
+ 1. Select **Next**.
+
+ The default view displays the **List** section.
+
+ 2. Check the appropriate boxes for **Viewer**, **Controller**, or **Approver**.
+
+ For access to all authorization system types, select **All (Current and Future)**.
+
+ 3. Select **Next**.
+
+ 4. Check **Requestor for User** for each authorization system, if applicable.
+
+ A user must have an account with a valid email address in the authorization system to select **Requestor for User**. If a user doesn't exist in the authorization system, **Requestor for User** is grayed out.
+
+ 5. Optional: To request access for multiple other identities, under **Requestor for Other Users**, select **Add**, and then select **Users**.
+
+ For example, a user may have various roles in different authorization systems, so they can select **Add**, and then select **Users** to request access for all their accounts.
+
+ 6. On the **Add Users** screen, enter the user's name or ID in the **User Search** box and select all applicable users. Then select **Add**.
+
+ 4. Select **Save**.
+
+ The following message displays in green at the top of the screen: **New Group Has Been Created Successfully**.
+
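+When a user belongs to multiple groups, as noted above, their effective permissions are the union of the permissions assigned to those groups. The following is a minimal sketch of that union behavior; the group names and permission labels are hypothetical, for illustration only, and are not CloudKnox data structures.
+
+```python
+# Minimal sketch: a user's effective permissions are the union of the
+# permissions of every group they belong to. Group names and permission
+# labels are hypothetical, for illustration only.
+group_permissions = {
+    "cloud-ops": {"Viewer", "Controller"},
+    "security-reviewers": {"Viewer", "Approver"},
+}
+
+user_groups = ["cloud-ops", "security-reviewers"]
+
+effective_permissions = set()
+for group in user_groups:
+    effective_permissions |= group_permissions.get(group, set())
+
+print(sorted(effective_permissions))  # ['Approver', 'Controller', 'Viewer']
+```
+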
+### The Groups tab
+
+1. The **Groups** tab provides a high-level overview of user details to system administrators:
+
+ - The **Name** column lists the name of the group.
+ - The **Permissions** column lists each authorization system, and each type of permission.
+
+ If a group has all permissions for all authorization systems, **Admin for All Authorization Types** displays across all columns.
+
+ If a group only has some permissions, the corresponding columns display numbers for the groups.
+
+ For example, if the number "3" is listed in the **Viewer** column, then the group has viewer permission for three accounts within that authorization system.
+ - The **Modified By** column records the email address of the person who created the group.
+ - The **Modified On** column records the date the group was last modified.
+ - The **Search** button allows a system administrator to search for a group by name; all groups that match the criteria are displayed.
+ - The **Filters** option allows a system administrator to filter by specific details. When the filter option is selected, the **Authorization System** box displays.
+
+ To display all authorization system accounts, select **All**. Then select the appropriate boxes for the accounts that need to be viewed.
+
+2. To make changes to the following, select the ellipses **(...)** in the far right column:
+ - **View Permissions**: Displays a list of the accounts for which the group has permissions.
+ - **Edit Permissions**: System administrators can edit a group's permissions.
+ - **Duplicate**: System administrators can duplicate permissions from one group to another.
+ - **Delete**: System administrators can delete permissions from a group.
++
+## Next steps
+
+- For information about how to view user management information, see [Manage users with the User management dashboard](cloudknox-ui-user-management.md).
+- For information about how to create group-based permissions, see [Create group-based permissions](cloudknox-howto-create-group-based-permissions.md).
+
active-directory Cloudknox Product Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-integrations.md
+
+ Title: View integration information about an authorization system in CloudKnox Permissions Management
+description: View integration information about an authorization system in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View integration information about an authorization system
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Integrations** dashboard in CloudKnox Permissions Management (CloudKnox) allows you to view all your authorization systems in one place, and to ensure all applications are functioning as one. This information helps improve quality and performance as a whole.
+
+## Display integration information about an authorization system
+
+Refer to the **Integration** subpages in CloudKnox for information about available authorization systems for integration.
+
+1. To display the **Integrations** dashboard, select **User** (your initials) in the upper right of the screen, and then select **Integrations.**
+
+ The **Integrations** dashboard displays a tile for each available authorization system.
+
+1. Select an authorization system tile to view its integration information.
+
+## Available integrated authorization systems
+
+The following authorization systems may be listed in the **Integrations** dashboard, depending on which systems are integrated into the CloudKnox application.
+
+- **ServiceNow**: Manages digital workflows for enterprise operations, and the CloudKnox integration allows you to request and approve permissions through the ServiceNow ticketing workflow.
+- **Splunk**: Searches, monitors, and analyzes machine-generated data, and the CloudKnox integration enables exporting usage analytics data, alerts, and logs.
+- **HashiCorp Terraform**: CloudKnox enables the generation of least-privilege policies through the HashiCorp Terraform provider.
+- **CloudKnox API**: The CloudKnox application programming interface (API) provides access to CloudKnox features.
+- **Saviynt**: Enables you to view Identity entitlements and usage inside the Saviynt console.
+- **Securonix**: Enables exporting usage analytics data, alerts, and logs.
++++
+<!## Next steps>
+
+<![Installation overview](cloudknox-installation.md)>
+<![Configure integration with the CloudKnox API](cloudknox-integration-api.md)>
+<![Sign up and deploy FortSentry in your organization](cloudknox-fortsentry-registration.md)>
active-directory Cloudknox Product Permission Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-permission-analytics.md
+
+ Title: Create and view permission analytics triggers in CloudKnox Permissions Management
+description: How to create and view permission analytics triggers in the Permission analytics tab in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Create and view permission analytics triggers
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how you can create and view permission analytics triggers in CloudKnox Permissions Management (CloudKnox).
+
+## View permission analytics triggers
+
+1. In the CloudKnox home page, select **Activity triggers** (the bell icon).
+1. Select **Permission analytics**, and then select the **Alerts** subtab.
+
+ The **Alerts** subtab displays the following information:
+
+ - **Alert name**: Lists the name of the alert.
+ - To view the name, ID, role, domain, authorization system, statistical condition, anomaly date, and observance period, select **Alert name**.
+ - To expand the top findings, along with a graph of when the anomaly occurred, select **Details**.
+ - **Anomaly alert rule**: Displays the name of the rule selected when creating the alert.
+ - **# of Occurrences**: Displays how many times the alert trigger has occurred.
+ - **Task**: Displays how many tasks are affected by the alert.
+ - **Resources**: Displays how many resources are affected by the alert.
+ - **Identity**: Displays how many identities are affected by the alert.
+ - **Authorization System**: Displays which authorization systems the alert applies to.
+ - **Date/Time**: Displays the date and time of the alert.
+ - **Date/Time (UTC)**: Lists the date and time of the alert in Coordinated Universal Time (UTC).
+
+1. To filter the alerts, from the **Alert Name** menu, select the appropriate alert name or select **All**.
+
+ - From the **Date** dropdown menu, select **Last 24 Hours**, **Last 2 Days**, **Last Week**, or **Custom Range**, and then select **Apply**.
+
+ If you select **Custom Range**, select date and time settings, and then select **Apply**.
+
+1. To view the following details, select the ellipses (**...**):
+
+ - **View Trigger**: Displays the current trigger settings and applicable authorization system details.
+ - **Details**: Displays **Authorization System Type**, **Authorization Systems**, **Resources**, **Tasks**, and **Identities** that matched the alert criteria.
+1. To view specific matches, select **Resources**, **Tasks**, or **Identities**.
+
+ The **Activity** section displays details about the **Identity Name**, **Resource Name**, **Task Name**, **Date**, and **IP Address**.
+
+## Create a permission analytics trigger
+
+1. In the CloudKnox home page, select **Activity triggers** (the bell icon).
+1. Select **Permission analytics**, select the **Alerts** subtab, and then select **Create Permission Analytics Trigger**.
+1. In the **Alert name** box, enter a name for the alert.
+1. Select the **Authorization system**.
+1. Select **Identity performed high number of tasks** (illustrated in the sketch at the end of this procedure), and then select **Next**.
+1. On the **Authorization systems** tab, select the appropriate accounts and folders, or select **All**.
+
+ This screen defaults to the **List** view, but you can change it to the **Folder** view and select the applicable folder instead of selecting systems individually.
+
+ - The **Status** column displays if the authorization system is online or offline.
+ - The **Controller** column displays if the controller is enabled or disabled.
+
+1. On the **Configuration** tab, to update the **Time Interval**, select **90 Days**, **60 Days**, or **30 Days** from the **Time range** dropdown.
+1. Select **Save**.
+
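+The **Identity performed high number of tasks** condition flags an identity whose recent task volume is well above its usual level. The following is a minimal, hypothetical illustration of that idea; the counts and the threshold are made up and don't reflect CloudKnox's actual detection model.
+
+```python
+# Illustration only: flag a day whose task count is far above the identity's
+# usual daily volume. The counts and the 3x threshold are arbitrary examples,
+# not CloudKnox's detection model.
+daily_task_counts = [22, 25, 27, 24, 26, 23, 100]  # hypothetical counts per day
+
+baseline = sum(daily_task_counts[:-1]) / len(daily_task_counts[:-1])
+latest = daily_task_counts[-1]
+
+if latest > 3 * baseline:
+    print(f"Flag: {latest} tasks today vs. a baseline of about {baseline:.0f} per day")
+```
+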
+## View permission analytics alert triggers
+
+1. In the CloudKnox home page, select **Activity triggers** (the bell icon).
+1. Select **Permission analytics**, and then select the **Alert triggers** subtab.
+
+ The **Alert triggers** subtab displays the following information:
+
+ - **Alert**: Lists the name of the alert.
+ - **Anomaly alert rule**: Displays the name of the rule selected when creating the alert.
+ - **# of users subscribed**: Displays the number of users subscribed to the alert.
+ - **Created by**: Displays the email address of the user who created the alert.
+ - **Last modified by**: Displays the email address of the user who last modified the alert.
+ - **Last Modified On**: Displays the date and time the trigger was last modified.
+ - **Subscription**: Toggle the button to **On** or **Off**.
+ - **View trigger**: Displays the current trigger settings and applicable authorization system details.
+
+1. To view other options available to you, select the ellipses (**...**), and then make a selection from the available options:
+
+ - **Details** displays **Authorization System Type**, **Authorization Systems**, **Resources**, **Tasks**, and **Identities** that matched the alert criteria.
+ - To view the specific matches, select **Resources**, **Tasks**, or **Identities**.
+ - The **Activity** section displays details about the **Identity Name**, **Resource Name**, **Task Name**, **Date**, and **IP Address**.
+
+1. To filter by **Activated** or **Deactivated**, in the **Status** section, select **All**, **Activated**, or **Deactivated**, and then select **Apply**.
++
+## Next steps
+
+- For an overview on activity triggers, see [View information about activity triggers](cloudknox-ui-triggers.md).
+- For information on activity alerts and alert triggers, see [Create and view activity alerts and alert triggers](cloudknox-howto-create-alert-trigger.md).
+- For information on rule-based anomalies and anomaly triggers, see [Create and view rule-based anomalies and anomaly triggers](cloudknox-product-rule-based-anomalies.md).
+- For information on finding outliers in identity's behavior, see [Create and view statistical anomalies and anomaly triggers](cloudknox-product-statistical-anomalies.md).
active-directory Cloudknox Product Permissions Analytics Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-permissions-analytics-reports.md
+
+ Title: Generate and download the Permissions analytics report in CloudKnox Permissions Management
+description: How to generate and download the Permissions analytics report in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Generate and download the Permissions analytics report
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to generate and download the **Permissions analytics report** in CloudKnox Permissions Management (CloudKnox).
+
+> [!NOTE]
+> This topic applies only to Amazon Web Services (AWS) users.
+
+## Generate the Permissions analytics report
+
+1. In the CloudKnox home page, select the **Reports** tab, and then select the **Systems Reports** subtab.
+
+ The **Systems Reports** subtab displays a list of reports in the **Reports** table.
+1. Find **Permissions analytics report** in the list, and to download the report, select the down arrow to the right of the report name, or from the ellipses **(...)** menu, select **Download**.
+
+ The following message displays: **Successfully started to generate on-demand report.**
+
+1. For detailed information in the report, select the right arrow next to one of the following categories. Or, select the required category under the **Findings** column.
+
+ - **AWS**
+ - Inactive Identities
+ - Users
+ - Roles
+ - Resources
+ - Serverless Functions
+ - Inactive Groups
+ - Super Identities
+ - Users
+ - Roles
+ - Resources
+ - Serverless Functions
+ - Over-Provisioned Active Identities
+ - Users
+ - Roles
+ - Resources
+ - Serverless Functions
+ - PCI Distribution
+ - Privilege Escalation
+ - Users
+ - Roles
+ - Resources
+ - S3 Bucket Encryption
+ - Unencrypted Buckets
+ - SSE-S3 Buckets
+ - S3 Buckets Accessible Externally
+ - EC2 S3 Buckets Accessibility
+ - Open Security Groups
+ - Identities That Can Administer Security Tools
+ - Users
+ - Roles
+ - Resources
+ - Serverless Functions
+ - Identities That Can Access Secret Information
+ - Users
+ - Roles
+ - Resources
+ - Serverless Functions
+ - Cross-Account Access
+ - External Accounts
+ - Roles That Allow All Identities
+ - Hygiene: MFA Enforcement
+ - Hygiene: IAM Access Key Age
+ - Hygiene: Unused IAM Access Keys
+ - Exclude From Reports
+ - Users
+ - Roles
+ - Resources
+ - Serverless Functions
+ - Groups
+ - Security Groups
+ - S3 Buckets
++
+1. Select a category and view the following columns of information:
+
+ - **User**, **Role**, **Resource**, **Serverless function name**: Displays the name of the identity.
+ - **Authorization system**: Displays the authorization system to which the identity belongs.
+ - **Domain**: Displays the domain name to which the identity belongs.
+ - **Permissions**: Displays the maximum number of permissions that the identity can be granted.
+ - **Used**: Displays how many permissions the identity has used.
+ - **Granted**: Displays how many permissions the identity has been granted.
+ - **PCI**: Displays the permission creep index (PCI) score of the identity.
+ - **Date last active on**: Displays the date that the identity was last active.
+ - **Date created on**: Displays the date when the identity was created.
+++
+<!## Add and remove tags in the Permissions analytics report
+
+1. Select **Tags**.
+1. Select one of the categories from the **Permissions analytics report**.
+1. Select the identity name to which you want to add a tag. Then, select the checkbox at the top to select all identities.
+1. Select **Add tag**.
+1. In the **tag** column:
+ - To select from the available options from the list, select **Select a tag**.
+ - To search for a tag, enter the tag name.
+ - To create a new custom tag, select **New custom tag**.
+ - To create a new tag, enter a name for the tag and select **Create**.
+ - To remove a tag, select **Delete**.
+
+1. In the **Value (optional)** box, enter a value, if necessary.
+1. Select **Save**.>
+
+## Next steps
+
+- For information on how to view system reports in the **Reports** dashboard, see [View system reports in the Reports dashboard](cloudknox-product-reports.md).
+- For a detailed overview of available system reports, see [View a list and description of system reports](cloudknox-all-reports.md).
+- For information about how to generate and view a system report, see [Generate and view a system report](cloudknox-report-view-system-report.md).
+- For information about how to create, view, and share a system report, see [Create, view, and share a custom report](cloudknox-report-view-system-report.md).
active-directory Cloudknox Product Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-reports.md
+
+ Title: View system reports in the Reports dashboard in CloudKnox Permissions Management
+description: How to view system reports in the Reports dashboard in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View system reports in the Reports dashboard
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+CloudKnox Permissions Management (CloudKnox) has various system report types available that capture specific sets of data. These reports allow management to:
+
+- Make timely decisions.
+- Analyze trends and system/user performance.
+- Identify trends in data and high risk areas so that management can address issues more quickly and improve their efficiency.
+
+## Explore the Reports dashboard
+
+The **Reports** dashboard provides a table of information with both system reports and custom reports. The **Reports** dashboard defaults to the **System reports** tab, which has the following details:
+
+- **Report Name**: The name of the report.
+- **Category**: The type of report. For example, **Permission**.
+- **Authorization System**: Displays which authorization systems the report applies to.
+- **Format**: Displays the output format the report can be generated in. For example, comma-separated values (CSV) format, portable document format (PDF), or Microsoft Excel Open XML Spreadsheet (XLSX) format.
+
+ - To download a report, select the down arrow to the right of the report name, or from the ellipses **(...)** menu, select **Download**.
+
+ The following message displays across the top of the screen in green if the download is successful: **Successfully started to generate on-demand report**.
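+
+Downloaded CSV reports can be processed with standard tooling. The following is a minimal sketch; the file name and the "Authorization System" column are assumptions for illustration, and the actual columns depend on the report you generate.
+
+```python
+# Minimal sketch: load a downloaded CSV report and count rows per
+# authorization system. The file name and the "Authorization System"
+# column are assumptions; actual columns vary by report.
+import csv
+from collections import Counter
+
+counts = Counter()
+with open("user_entitlements_and_usage.csv", newline="", encoding="utf-8") as f:
+    for row in csv.DictReader(f):
+        counts[row.get("Authorization System", "unknown")] += 1
+
+for system, total in counts.most_common():
+    print(f"{system}: {total} rows")
+```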
+
+## Available system reports
+
+CloudKnox offers the following reports for management, associated with the authorization systems noted for each report:
+
+- **Access key entitlements and usage**:
+ - **Summary of report**: Provides information about access keys, for example, permissions, usage, and rotation date.
+ - **Applies to**: Amazon Web Services (AWS) and Microsoft Azure
+ - **Report output type**: CSV
+ - **Ability to collate report**: Yes
+ - **Type of report**: **Summary** or **Detailed**
+ - **Use cases**:
+ - The access key age, last rotation date, and last usage date are available in the summary report to help with key rotation.
+ - The granted tasks and permission creep index (PCI) score are provided to help you take action on the keys.
+
+- **User entitlements and usage**:
+ - **Summary of report**: Provides information about the identities' permissions, for example, entitlement, usage, and PCI.
+ - **Applies to**: AWS, Azure, and Google Cloud Platform (GCP)
+ - **Report output type**: CSV
+ - **Ability to collate report**: Yes
+ - **Type of report**: **Summary** or **Detailed**
+ - **Use cases**:
+ - The data displayed on the **Usage Analytics** screen is downloaded as part of the **Summary** report. The user's detailed permissions usage is listed in the **Detailed** report.
+
+- **Group entitlements and usage**:
+ - **Summary of report**: Provides information about the group's permissions, for example, entitlement, usage, and PCI.
+ - **Applies to**: AWS, Azure, and GCP
+ - **Report output type**: CSV
+ - **Ability to collate report**: Yes
+ - **Type of report**: **Summary**
+ - **Use cases**:
+ - All group level entitlements and permission assignments, PCIs, and the number of members are listed as part of this report.
+
+- **Identity permissions**:
+ - **Summary of report**: Report on identities that have specific permissions, for example, identities that have permission to delete any S3 buckets.
+ - **Applies to**: AWS, Azure, and GCP
+ - **Report output type**: CSV
+ - **Ability to collate report**: No
+ - **Type of report**: **Summary**
+ - **Use cases**:
+ - Any task usage or specific task usage via User/Group/Role/App can be tracked with this report.
+
+- **Identity privilege activity report**
+ - **Summary of report**: Provides information about permission changes that have occurred in the selected duration.
+ - **Applies to**: AWS, Azure, and GCP
+ - **Report output type**: PDF
+ - **Ability to collate report**: No
+ - **Type of report**: **Summary**
+ - **Use cases**:
+ - Any identity permission change can be captured using this report.
+ - The **Identity Privilege Activity** report has the following main sections: **User Summary**, **Group Summary**, **Role Summary**, and **Delete Task Summary**.
+ - The **User** summary lists the current granted permissions and high-risk permissions and resources accessed in 1 day, 7 days, or 30 days. There are subsections for newly added or deleted users, users with PCI change, and high-risk active/inactive users.
+ - The **Group** summary lists the administrator level groups with the current granted permissions and high-risk permissions and resources accessed in 1 day, 7 days, or 30 days. There are subsections for newly added or deleted groups, groups with PCI change, and high-risk active/inactive groups.
+ - The **Role summary** lists similar details as **Group Summary**.
+ - The **Delete Task summary** section lists the number of times the **Delete task** has been executed in the given time period.
+
+- **Permissions analytics report**
+ - **Summary of report**: Provides information about the violation of key security best practices.
+ - **Applies to**: AWS, Azure, and GCP
+ - **Report output type**: CSV
+ - **Ability to collate report**: Yes
+ - **Type of report**: **Detailed**
+ - **Use cases**:
+ - This report lists the different key findings in the selected auth systems. The key findings include super identities, inactive identities, over provisioned active identities, storage bucket hygiene, and access key age (for AWS only). The report helps administrators to visualize the findings across the organization.
+
+ For more information about this report, see [Permissions analytics report](cloudknox-product-permissions-analytics-reports.md).
+
+- **Role/Policy Details**
+ - **Summary of report**: Provides information about roles and policies.
+ - **Applies to**: AWS, Azure, GCP
+ - **Report output type**: CSV
+ - **Ability to collate report**: No
+ - **Type of report**: **Summary**
+ - **Use cases**:
+ - Assigned/Unassigned, custom/system policy, and the used/unused condition are captured in this report for any specific, or all, AWS accounts. Similar data can be captured for Azure/GCP for the assigned/unassigned roles.
+
+- **PCI History**
+ - **Summary of report**: Provides a report of permission creep index (PCI) history.
+ - **Applies to**: AWS, Azure, GCP
+ - **Report output type**: CSV
+ - **Ability to collate report**: Yes
+ - **Type of report**: **Summary**
+ - **Use cases**:
+ - This report plots the trend of the PCI by displaying the monthly PCI history for each authorization system.
+
+- **All Permissions for Identity**
+ - **Summary of report**: Provides results of all permissions for identities.
+ - **Applies to**: AWS, Azure, GCP
+ - **Report output type**: CSV
+ - **Ability to collate report**: Yes
+ - **Type of report**: **Detailed**
+ - **Use cases**:
+ - This report lists all the assigned permissions for the selected identities.
++++
+## Next steps
+
+- For a detailed overview of available system reports, see [View a list and description of system reports](cloudknox-all-reports.md).
+- For information about how to create, view, and share a system report, see [Create, view, and share a custom report](cloudknox-report-view-system-report.md).
+- For information about how to create and view a custom report, see [Generate and view a custom report](cloudknox-report-create-custom-report.md).
+- For information about how to create and view the Permissions analytics report, see [Generate and download the Permissions analytics report](cloudknox-product-permissions-analytics-reports.md).
active-directory Cloudknox Product Rule Based Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-rule-based-anomalies.md
+
+ Title: Create and view rule-based anomalies and anomaly triggers in CloudKnox Permissions Management
+description: How to create and view rule-based anomalies and anomaly triggers in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Create and view rule-based anomaly alerts and anomaly triggers
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+Rule-based anomalies identify recent activity in CloudKnox Permissions Management (CloudKnox) that is determined to be unusual based on explicit rules defined in the activity trigger. The goal of rule-based anomalies is high-precision detection.
+
+## View rule-based anomaly alerts
+
+1. In the CloudKnox home page, select **Activity triggers** (the bell icon).
+1. Select **Rule-based anomaly**, and then select the **Alerts** subtab.
+
+ The **Alerts** subtab displays the following information:
+
+ - **Alert name**: Lists the name of the alert.
+
+ - To view the specific identity, resource, and task names that occurred during the alert collection period, select the **Alert Name**.
+
+ - **Anomaly alert rule**: Displays the name of the rule selected when creating the alert.
+ - **# of occurrences**: How many times the alert trigger has occurred.
+ - **Task**: How many performed tasks triggered the alert.
+ - **Resources**: How many accessed resources triggered the alert.
+ - **Identity**: How many identities performing unusual behavior triggered the alert.
+ - **Authorization system**: Displays which authorization systems the alert applies to, Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
+ - **Date/Time**: Lists the date and time of the alert.
+ - **Date/Time (UTC)**: Lists the date and time of the alert in Coordinated Universal Time (UTC).
+
+
+1. To filter alerts:
+
+ - From the **Alert Name** dropdown, select **All** or the appropriate alert name.
+ - From the **Date** dropdown menu, select **Last 24 Hours**, **Last 2 Days**, **Last Week**, or **Custom Range**, and select **Apply**.
+
+ - If you select **Custom Range**, also enter **From** and **To** duration settings.
+1. To view details that match the alert criteria, select the ellipses (**...**).
+
+ - **View Trigger**: Displays the current trigger settings and applicable authorization system details.
+ - **Details**: Displays details about **Authorization System Type**, **Authorization Systems**, **Resources**, **Tasks**, **Identities**, and **Activity**.
+ - **Activity**: Displays details about the **Identity Name**, **Resource Name**, **Task Name**, **Date/Time**, **Inactive For**, and **IP Address**. Selecting the "eye" icon displays the **Raw Events Summary**.
+
+## Create a rule-based anomaly trigger
+
+1. In the CloudKnox home page, select **Activity triggers** (the bell icon).
+1. Select **Rule-based anomaly**, and then select the **Alerts** subtab.
+1. Select **Create Anomaly Trigger**.
+
+1. In the **Alert Name** box, enter a name for the alert.
+1. Select the **Authorization system**, **AWS**, **Azure**, or **GCP**.
+1. Select one of the following conditions:
+ - **Any Resource Accessed for the First Time**: The identity accesses a resource for the first time during the specified time interval.
+ - **Identity Performs a Particular Task for the First Time**: The identity does a specific task for the first time during the specified time interval.
+ - **Identity Performs a Task for the First Time**: The identity performs any task for the first time during the specified time interval.
+1. Select **Next**.
+1. On the **Authorization Systems** tab, select the available authorization systems and folders, or select **All**.
+
+ This screen defaults to **List** view, but you can change it to **Folders** view. You can select the applicable folder instead of individually selecting by authorization system.
+
+ - The **Status** column displays if the authorization system is online or offline.
+ - The **Controller** column displays if the controller is enabled or disabled.
+
+1. On the **Configuration** tab, to update the **Time Interval**, select **90 Days**, **60 Days**, or **30 Days** from the **Time range** dropdown.
+1. Select **Save**.
+
+## View a rule-based anomaly trigger
+
+1. In the CloudKnox home page, select **Activity triggers** (the bell icon).
+1. Select **Rule-based anomaly**, and then select the **Alert triggers** subtab.
+
+ The **Alert triggers** subtab displays the following information:
+
+ - **Alerts**: Displays the name of the alert.
+ - **Anomaly Alert Rule**: Displays the name of the selected rule when creating the alert.
+ - **# of users subscribed**: Displays the number of users subscribed to the alert.
+ - **Created by**: Displays the email address of the user who created the alert.
+ - **Last Modified By**: Displays the email address of the user who last modified the alert.
+ - **Last Modified On**: Displays the date and time the trigger was last modified.
+ - **Subscription**: Subscribes you to receive alert emails. Switches between **On** and **Off**.
+
+1. To view other options available to you, select the ellipses (**...**), and then select from the available options:
+
+ If the **Subscription** is **On**, the following options are available:
+
+ - **Edit**: Enables you to modify alert parameters.
+
+ Only the user who created the alert can edit the trigger screen, rename an alert, deactivate an alert, and delete an alert. Changes made by other users aren't saved.
+
+ - **Duplicate**: Create a duplicate copy of the selected alert trigger.
+ - **Rename**: Enter the new name of the query, and then select **Save**.
+ - **Deactivate**: The alert will still be listed, but will no longer send emails to subscribed users.
+ - **Activate**: Activate the alert trigger and start sending emails to subscribed users.
+ - **Notification settings**: View the **Email** of users who are subscribed to the alert trigger.
+ - **Delete**: Delete the alert.
+
+ If the **Subscription** is **Off**, the following options are available:
+ - **View**: View details of the alert trigger.
+ - **Notification settings**: View the **Email** of users who are subscribed to the alert trigger.
+ - **Duplicate**: Create a duplicate copy of the selected alert trigger.
+
+1. To filter by **Activated** or **Deactivated**, in the **Status** section, select **All**, **Activated**, or **Deactivated**, and then select **Apply**.
+++
+## Next steps
+
+- For an overview on activity triggers, see [View information about activity triggers](cloudknox-ui-triggers.md).
+- For information on activity alerts and alert triggers, see [Create and view activity alerts and alert triggers](cloudknox-howto-create-alert-trigger.md).
+- For information on finding outliers in identity's behavior, see [Create and view statistical anomalies and anomaly triggers](cloudknox-product-statistical-anomalies.md).
+- For information on permission analytics triggers, see [Create and view permission analytics triggers](cloudknox-product-permission-analytics.md).
active-directory Cloudknox Product Statistical Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-statistical-anomalies.md
+
+ Title: Create and view statistical anomalies and anomaly triggers in CloudKnox Permissions Management
+description: How to create and view statistical anomalies and anomaly triggers in the Statistical Anomaly tab in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Create and view statistical anomalies and anomaly triggers
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+Statistical anomalies can detect outliers in an identity's behavior if recent activity is determined to be unusual based on models defined in an activity trigger. The goal of this anomaly trigger is a high recall rate.
+
+## View statistical anomalies in an identity's behavior
+
+1. In the CloudKnox home page, select **Activity triggers** (the bell icon).
+1. Select **Statistical anomaly**, and then select the **Alerts** subtab.
+
+ The **Alerts** subtab displays the following information:
+
+ - **Alert Name**: Lists the name of the alert.
+ - **Anomaly Alert Rule**: Displays the name of the rule selected when creating the alert.
+ - **# of Occurrences**: Displays how many times the alert trigger has occurred.
+ - **Authorization System**: Displays which authorization systems the alert applies to.
+ - **Date/Time**: Lists the day on which the outlier occurred.
+ - **Date/Time (UTC)**: Lists the day on which the outlier occurred, in Coordinated Universal Time (UTC).
+
+
+1. To filter the alerts based on name, select the appropriate alert name or choose **All** from the **Alert Name** dropdown menu, and select **Apply**.
+1. To filter the alerts based on alert time, select **Last 24 Hours**, **Last 2 Days**, **Last Week**, or **Custom Range** from the **Date** dropdown menu, and select **Apply**.
+1. Select the ellipses (**...**), and then select one of the following:
+ - **Details**: Opens an Alert Summary view that displays the **Authorization System**, **Statistical Model**, and **Observance Period**, along with a table that has a row for each identity that triggered the alert. From there, you can select:
+   - **Details**: Displays graphs that highlight the anomaly in context, and up to the top three actions performed on the day of the anomaly.
+   - **View Trigger**: Displays the current trigger settings and applicable authorization system details.
+ - **View Trigger**: Displays the current trigger settings and applicable authorization system details.
+
+## Create a statistical anomaly trigger
+
+1. In the CloudKnox home page, select **Activity triggers** (the bell icon).
+1. Select **Statistical anomaly**, select the **Alerts** subtab, and then select **Create alert trigger**.
+1. Enter a name for the alert in the **Alert Name** box.
+1. Select the **Authorization system**, Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
+1. Select one of the following conditions:
+
+ - **Identity Performed High Number of Tasks**: The identity performs higher than their usual volume of tasks. For example, an identity typically performs 25 tasks per day, and now it is performing 100 tasks per day.
+ - **Identity Performed Low Number of Tasks**: The identity performs lower than their usual volume of tasks. For example, an identity typically performs 100 tasks per day, and now it is performing 25 tasks per day.
+ - **Identity Performed Tasks with Unusual Results**: The identity performing an action gets a different result than usual, such as most tasks end in a successful result and are now ending in a failed result or vice versa.
+ - **Identity Performed Tasks with Unusual Timing**: The identity does tasks at unusual times as established by their baseline in the observance period. Times are grouped into the following four-hour UTC windows (a small bucketing sketch appears at the end of this procedure):
+ - 12AM-4AM UTC
+ - 4AM-8AM UTC
+ - 8AM-12PM UTC
+ - 12PM-4PM UTC
+ - 4PM-8PM UTC
+ - 8PM-12AM UTC
+ - **Identity Performed Tasks with Unusual Types**: The identity performs unusual types of tasks as established by their baseline in the observance period. For example, an identity performs read, write, or delete tasks they wouldn't ordinarily perform.
+ - **Identity Performed Tasks with Multiple Unusual Patterns**: The identity has several unusual patterns in the tasks performed by the identity as established by their baseline in the observance period.
+1. Select **Next**.
+
+1. On the **Authorization systems** tab, select the appropriate systems, or, to select all systems, select **All**.
+
+ The screen defaults to the **List** view, but you can switch to the **Folder** view using the menu and select the applicable folder instead of selecting systems individually.
+
+ - The **Status** column displays if the authorization system is online or offline.
+
+ - The **Controller** column displays if the controller is enabled or disabled.
++
+1. On the **Configuration** tab, to update the **Time Interval**, from the **Time range** dropdown, select **90 Days**, **60 Days**, or **30 Days**, and then select **Save**.
+
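+To make the four-hour UTC windows listed above concrete, here is a minimal sketch of how a UTC timestamp could be assigned to one of those windows. It illustrates the grouping only and is not CloudKnox's implementation.
+
+```python
+# Minimal sketch: map a UTC timestamp to one of the six four-hour windows
+# listed above (12AM-4AM, 4AM-8AM, ..., 8PM-12AM UTC). Illustration only.
+from datetime import datetime, timezone
+
+def hour_label(hour: int) -> str:
+    return f"{(hour % 12) or 12}{'AM' if hour < 12 else 'PM'}"
+
+def utc_window(ts: datetime) -> str:
+    start = (ts.astimezone(timezone.utc).hour // 4) * 4
+    end = (start + 4) % 24
+    return f"{hour_label(start)}-{hour_label(end)} UTC"
+
+print(utc_window(datetime(2022, 2, 23, 17, 30, tzinfo=timezone.utc)))  # 4PM-8PM UTC
+```
+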
+## View statistical anomaly triggers
+
+1. In the CloudKnox home page, select **Activity triggers** (the bell icon).
+1. Select **Statistical anomaly**, and then select the **Alert triggers** subtab.
+
+ The **Alert triggers** subtab displays the following information:
+
+ - **Alert**: Displays the name of the alert.
+ - **Anomaly alert rule**: Displays the name of the rule select when creating the alert.
+ - **# of users subscribed**: Displays the number of users subscribed to the alert.
+ - **Created by**: Displays the email address of the user who created the alert.
+ - **Last modified by**: Displays the email address of the user who last modified the alert.
+ - **Last modified on**: Displays the date and time the trigger was last modified.
+ - **Subscription**: Subscribes you to receive alert emails. Toggle the button to **On** or **Off**.
+
+1. To filter by **Activated** or **Deactivated**, in the **Status** section, select **All**, **Activated**, or **Deactivated**, and then select **Apply**.
+
+1. To view other options available to you, select the ellipses (**...**), and then select from the available options:
+
+ If the **Subscription** is **On**, the following options are available:
+ - **Edit**: Enables you to modify alert parameters
+
+ > [!NOTE]
+ > Only the user who created the alert can perform the following actions: edit the trigger screen, rename an alert, deactivate an alert, and delete an alert. Changes made by other users aren't saved.
+ - **Duplicate**: Create a duplicate copy of the selected alert trigger.
+ - **Rename**: Enter the new name of the query, and then select **Save**.
+ - **Deactivate**: The alert will still be listed, but will no longer send emails to subscribed users.
+ - **Activate**: Activate the alert trigger and start sending emails to subscribed users.
+ - **Notification settings**: View the **Email** of users who are subscribed to the alert trigger.
+ - **Delete**: Delete the alert.
+
+ If the **Subscription** is **Off**, the following options are available:
+ - **View**: View details of the alert trigger.
+ - **Notification settings**: View the **Email** of users who are subscribed to the alert trigger.
+ - **Duplicate**: Create a duplicate copy of the selected alert trigger.
+
+
+1. Select **Apply**.
+++
+## Next steps
+
+- For an overview on activity triggers, see [View information about activity triggers](cloudknox-ui-triggers.md).
+- For information on activity alerts and alert triggers, see [Create and view activity alerts and alert triggers](cloudknox-howto-create-alert-trigger.md).
+- For information on rule-based anomalies and anomaly triggers, see [Create and view rule-based anomalies and anomaly triggers](cloudknox-product-rule-based-anomalies.md).
+- For information on permission analytics triggers, see [Create and view permission analytics triggers](cloudknox-product-permission-analytics.md).
active-directory Cloudknox Report Create Custom Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-report-create-custom-report.md
+
+ Title: Create, view, and share a custom report in CloudKnox Permissions Management
+description: How to create, view, and share a custom report in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Create, view, and share a custom report
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to create, view, and share a custom report in CloudKnox Permissions Management (CloudKnox).
+
+## Create a custom report
+
+1. In the CloudKnox home page, select the **Reports** tab, and then select the **Custom reports** subtab.
+1. Select **New Custom Report**.
+1. In the **Report Name** box, enter a name for your report.
+1. From the **Report Based on** list:
+ 1. To view which authorization systems the report applies to, hover over each report name.
+ 1. To view a description of a report, select the report.
+1. Select a report you want to use as the base for your custom report, and then select **Next**.
+1. In the **MyReport** box, select the **Authorization system** you want: Amazon Web Services (**AWS**), Microsoft Azure (**Azure**), or Google Cloud Platform (**GCP**).
+
+1. To add specific accounts, select the **List** subtab, and then select **All** or the account names.
+1. To add specific folders, select the **Folders** subtab, and then select **All** or the folder names.
+
+1. Select the **Report Format** subtab, and then select the format for your report: comma-separated values (**CSV**) file, portable document format (**PDF**), or Microsoft Excel Open XML Spreadsheet (**XLSX**) file.
+1. Select the **Schedule** tab, and then select the frequency for your report, from **None** up to **Monthly**.
+
+ - For the **Hourly** and **Daily** options, set the start date by choosing from the **Calendar** dropdown, and enter the specific time of day you want to receive the report.
+
+ In addition to the date and time, the **Weekly** and **Biweekly** options let you select the day(s) of the week on which the report should repeat.
+
+1. Select **Save**.
+
+ The following message displays across the top of the screen in green if the report is created successfully: **Report has been created**.
+The report name appears in the **Reports** table.
+
+## View a custom report
+
+1. In the CloudKnox home page, select the **Reports** tab, and then select the **Custom reports** subtab.
+
+ The **Custom Reports** tab displays the following information in the **Reports** table:
+
+ - **Report Name**: The name of the report.
+ - **Category**: The type of report: **Permission**.
+ - **Authorization System**: The authorization system in which you can view the report: AWS, Azure, and GCP.
+ - **Format**: The format of the report, **CSV**, **PDF**, or **XLSX** format.
+
+1. To view a report, from the **Report Name** column, select the report you want.
+1. To download a report, select the down arrow to the right of the report name, or from the ellipses **(...)** menu, select **Download**.
+1. To refresh the list of reports, select **Reload**.
+
+## Share a custom report
+
+1. In the CloudKnox home page, select the **Reports** tab, and then select the **Custom reports** subtab.
+1. In the **Reports** table, select a report and then select the ellipses (**...**) icon.
+1. In the **Report settings** box, select **Share with**.
+1. In the **Search Email to add** box, enter the name of other CloudKnox user(s).
+
+ You can only share reports with other CloudKnox users.
+1. Select **Save**.
+
+## Search for a custom report
+
+1. In the CloudKnox home page, select the **Reports** tab, and then select the **Custom reports** subtab.
+1. On the **Custom Reports** tab, select **Search**.
+1. In the **Search** box, enter the name of the report you want.
+
+ The **Custom Reports** tab displays a list of reports that match your search criteria.
+1. Select the report you want.
+1. To download a report, select the down arrow to the right of the report name, or from the ellipses **(...)** menu, select **Download**.
+1. To refresh the list of reports, select **Reload**.
++
+## Modify a saved or scheduled custom report
+
+1. In the CloudKnox home page, select the **Reports** tab, and then select the **Custom reports** subtab.
+1. Hover over the report name on the **Custom Reports** tab.
+
+ - To rename the report, select **Edit** (the pencil icon), and enter a new name.
+ - To change the settings for your report, select **Settings** (the gear icon). Make your changes, and then select **Save**.
+
+ - To download a copy of the report, select the **Down arrow** icon.
+
+1. To perform other actions on the report, select the ellipses (**...**) icon:
+
+ - **Download**: Downloads a copy of the report.
+
+ - **Report Settings**: Displays the settings for the report, including scheduling, sharing the report, and so on.
+
+ - **Duplicate**: Creates a duplicate of the report called **"Copy of XXX"**. Any reports not created by the current user are listed as **Duplicate**.
+
+ When you select **Duplicate**, a box appears asking if you're sure you want to create a duplicate. Select **Confirm**.
+
+ When the report is successfully duplicated, the following message displays: **Report generated successfully**.
+
+ - **API Settings**: Download the report using your Application Programming Interface (API) settings.
+
+ When this option is selected, the **API Settings** window opens and displays the **Report ID** and **Secret Key**. Select **Generate New Key**. A hypothetical usage sketch appears at the end of this list.
+
+ - **Delete**: Select this option to delete the report.
+
+ After selecting **Delete**, a pop-up box appears asking if the user is sure they want to delete the report. Select **Confirm**.
+
+ **Report is deleted successfully** appears across the top of the screen in green if successfully deleted.
+
+ - **Unsubscribe**: Unsubscribe the user from receiving scheduled reports and notifications.
+
+ This option is only available after a report has been scheduled.
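+
+As a rough illustration of the **API Settings** option, the sketch below shows how a **Report ID** and **Secret Key** might be used to request a report over HTTPS. The host, path, and header name are placeholders, not documented CloudKnox API values; substitute the details shown in the **API Settings** window.
+
+```python
+# Hypothetical sketch only: the host, path, and header name below are
+# placeholders, not the documented CloudKnox API. Substitute the values
+# shown in the API Settings window for your report.
+import requests
+
+API_BASE = "https://example-cloudknox-host/api"   # placeholder
+REPORT_ID = "<report-id>"                         # from the API Settings window
+SECRET_KEY = "<secret-key>"                       # from the API Settings window
+
+response = requests.get(
+    f"{API_BASE}/reports/{REPORT_ID}/download",   # placeholder path
+    headers={"X-Api-Key": SECRET_KEY},            # placeholder header name
+    timeout=60,
+)
+response.raise_for_status()
+with open("custom_report.csv", "wb") as f:
+    f.write(response.content)
+```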
++
+## Next steps
+
+- For information on how to view system reports in the **Reports** dashboard, see [View system reports in the Reports dashboard](cloudknox-product-reports.md).
+- For a detailed overview of available system reports, see [View a list and description of system reports](cloudknox-all-reports.md).
+- For information about how to generate and view a system report, see [Generate and view a system report](cloudknox-report-view-system-report.md).
+- For information about how to create and view the Permissions analytics report, see [Generate and download the Permissions analytics report](cloudknox-product-permissions-analytics-reports.md).
active-directory Cloudknox Report View System Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-report-view-system-report.md
+
+ Title: Generate and view a system report in CloudKnox Permissions Management
+description: How to generate and view a system report in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Generate and view a system report
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to generate and view a system report in CloudKnox Permissions Management (CloudKnox).
+
+## Generate a system report
+
+1. In the CloudKnox home page, select the **Reports** tab, and then select the **Systems reports** subtab.
+ The **Systems Reports** subtab displays the following options in the **Reports** table:
+
+ - **Report Name**: The name of the report.
+ - **Category**: The type of report: **Permission**.
+ - **Authorization System**: The authorization system activity in the report: Amazon Web Services (AWS), Microsoft Azure (Azure), and Google Cloud Platform (GCP).
+ - **Format**: The format in which the report is available: comma-separated values (**CSV**) format, portable document format (**PDF**), or Microsoft Excel Open XML Spreadsheet (**XLSX**) format.
+
+1. In the **Report Name** column, find the report you want, and then select the down arrow to the right of the report name to download the report.
+
+ Or, from the ellipses **(...)** menu, select **Download**.
+
+ The following message displays: **Successfully started to generate on demand report.**
+
+ > [!NOTE]
+ > If you select one authorization system, the report includes a summary. If you select more than one authorization system, the report does not include a summary.
+
+1. To refresh the list of reports, select **Reload**.
+
+## Search for a system report
+
+1. On the **Systems Reports** subtab, select **Search**.
+1. In the **Search** box, enter the name of the report you want.
+
+ The **Systems Reports** subtab displays a list of reports that match your search criteria.
+1. Select a report from the **Report Name** column.
+1. To download a report, select the down arrow to the right of the report name, or from the ellipses **(...)** menu, select **Download**.
+1. To refresh the list of reports, select **Reload**.
++
+## Next steps
+
+- For information on how to view system reports in the **Reports** dashboard, see [View system reports in the Reports dashboard](cloudknox-product-reports.md).
+- For a detailed overview of available system reports, see [View a list and description of system reports](cloudknox-all-reports.md).
+- For information about how to create, view, and share a custom report, see [Create, view, and share a custom report](cloudknox-report-view-system-report.md).
+- For information about how to create and view the Permissions analytics report, see [Generate and download the Permissions analytics report](cloudknox-product-permissions-analytics-reports.md).
active-directory Cloudknox Training Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-training-videos.md
+
+ Title: Microsoft CloudKnox Permissions Management training videos
+description: Microsoft CloudKnox Permissions Management training videos.
+++++++ Last updated : 12/27/2021+++
+# Microsoft CloudKnox Permissions Management training videos
+
+To view step-by-step training videos on how to use CloudKnox features, select a link below.
+
+## Privilege on demand (POD) work flows
+
+- View a step-by-step video on the [privilege on demand (POD) work flow from the Just Enough Permissions (JEP) Controller](https://vimeo.com/461508166/3d88107f41).
+
+## Usage analytics
+
+- View a step-by-step video on [usage analytics](https://vimeo.com/461509556/b7bb392b83).
+
+## Just Enough Permissions (JEP) roles and policies
+
+- View a step-by-step video on [how to use and interpret data on the Role/Policy tab under the JEP Controller](https://vimeo.com/461510754/3dd31d85b7).
+
+## Attach or detach permissions for users, roles, and resources
+
+- View a step-by-step video on [how to attach and detach permissions for users, roles, and resources](https://vimeo.com/461512552/6f6a06e6c1).
+
+## Audit trails
+
+- View a step-by-step video on [how to use the audit trail](https://vimeo.com/461513290/b431a38b6c).
+
+## Alert triggers
+
+- View a step-by-step video on [how to create an alert trigger](https://vimeo.com/461881849/019c843cc6).
+
+## Group permissions
+
+- View a step-by-step video on [how to create group-based permissions](https://vimeo.com/462797947/d041de9157).
++
+<!-- ## Next steps -->
active-directory Cloudknox Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-troubleshoot.md
+
+ Title: Troubleshoot issues with CloudKnox Permissions Management
+description: Troubleshoot issues with CloudKnox Permissions Management
+++++++ Last updated : 02/23/2022+++
+# Troubleshoot issues with CloudKnox Permissions Management
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to troubleshoot issues with CloudKnox Permissions Management (CloudKnox).
+
+## One time passcode (OTP) email
+
+### The user didn't receive the OTP email.
+
+- Check your junk or spam mail folder for the email.
+
+## Reports
+
+### The individual files are generated according to the authorization system (subscription/account/project).
+
+- Select the **Collate** option in the **Custom report** screen in the CloudKnox **Reports** tab.
+
+## Data collection in AWS
+
+### Data collection > AWS Authorization system data collection status is offline. Upload and transform is also offline.
+
+- Check that the CloudKnox-related role exists in these accounts.
+- Validate the trust relationship with the OpenID Connect (OIDC) role. A hedged sketch of this check appears below.
+
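+These two checks can also be scripted. The following sketch uses the AWS SDK for Python (boto3) to read the trust policy of a CloudKnox-related role and look for an OpenID Connect (OIDC) federation statement. The role name is a placeholder, and this is an illustrative check under assumed naming, not an official CloudKnox diagnostic.
+
+```python
+import json
+from urllib.parse import unquote
+
+import boto3  # AWS SDK for Python; requires configured AWS credentials
+
+# Placeholder: substitute the name of the CloudKnox-related role in your account.
+ROLE_NAME = "<cloudknox-related-role>"
+
+iam = boto3.client("iam")
+role = iam.get_role(RoleName=ROLE_NAME)  # raises NoSuchEntityException if the role is missing
+
+doc = role["Role"]["AssumeRolePolicyDocument"]
+# Depending on the SDK version this may already be a dict; decode if it's a string.
+if isinstance(doc, str):
+    doc = json.loads(unquote(doc))
+
+for statement in doc.get("Statement", []):
+    federated = statement.get("Principal", {}).get("Federated", "")
+    action = statement.get("Action", "")
+    # A healthy OIDC trust relationship federates an oidc-provider principal
+    # and allows the sts:AssumeRoleWithWebIdentity action.
+    if "oidc-provider" in str(federated) and "AssumeRoleWithWebIdentity" in str(action):
+        print("Found an OIDC trust statement for:", federated)
+```
+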
+<!-- Next steps -->
active-directory Cloudknox Ui Audit Trail https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-audit-trail.md
+
+ Title: Use queries to see how users access information in an authorization system in CloudKnox Permissions Management
+description: How to use queries to see how users access information in an authorization system in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Use queries to see how users access information
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Audit** dashboard in CloudKnox Permissions Management (CloudKnox) provides an overview of queries a CloudKnox user has created to review how users access their authorization systems and accounts.
+
+This article provides an overview of the components of the **Audit** dashboard.
+
+## View information in the Audit dashboard
++
+1. In CloudKnox, select the **Audit** tab.
+
+ CloudKnox displays the query options available to you.
+
+1. The following options display at the top of the **Audit** dashboard:
+
+ - A tab for each existing query. Select the tab to see details about the query.
+ - **New query**: Select the tab to create a new query.
+ - **New tab (+)**: Select the tab to add a **New query** tab.
+ - **Saved queries**: Select to view a list of saved queries.
+
+1. To return to the main page, select **Back to Audit**.
++
+## Use a query to view information
+
+1. In CloudKnox, select the **Audit** tab.
+1. The **New query** tab displays the following options:
+
+ - **Authorization systems type**: A list of your authorization systems: Amazon Web Services (**AWS**), Microsoft Azure (**Azure**), or Google Cloud Platform (**GCP**).
+
+ - **Authorization system**: A **List** of accounts and **Folders** in the authorization system.
+
+ - To display a **List** of accounts and **Folders** in the authorization system, select the down arrow, and then select **Apply**.
+
+1. To add an **Audit condition**, select **Conditions** (the eye icon), select the conditions you want to add, and then select **Close**.
+
+1. To edit existing parameters, select **Edit** (the pencil icon).
+
+1. To add the parameter that you created to the query, select **Add**.
+
+1. To search for activity data that you can add to the query, select **Search**.
+
+1. To save your query, select **Save**.
+
+1. To save your query under a different name, select **Save As** (the ellipses **(...)** icon).
+
+1. To discard your work and start creating a query again, select **Reset query**.
+
+1. To delete a query, select the **X** to the right of the query tab.
+++
+## Next steps
+
+- For information on how to filter and view user activity, see [Filter and query user activity](cloudknox-product-audit-trail.md).
+- For information on how to create a query, see [Create a custom query](cloudknox-howto-create-custom-queries.md).
+- For information on how to generate an on-demand report from a query, see [Generate an on-demand report from a query](cloudknox-howto-audit-trail-results.md).
active-directory Cloudknox Ui Autopilot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-autopilot.md
+
+ Title: View rules in the Autopilot dashboard in CloudKnox Permissions Management
+description: How to view rules in the Autopilot dashboard in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View rules in the Autopilot dashboard
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Autopilot** dashboard in CloudKnox Permissions Management (CloudKnox) provides a table of information about **Autopilot rules** for administrators.
++
+> [!NOTE]
+> Only users with the **Administrator** role can view and make changes on this tab.
+
+## View a list of rules
+
+1. In the CloudKnox home page, select the **Autopilot** tab.
+1. In the **Autopilot** dashboard, from the **Authorization system types** dropdown, select the authorization system types you want: Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
+1. From the **Authorization system** dropdown, in the **List** and **Folders** box, select the account and folder names that you want.
+1. Select **Apply**.
+
+ The following information displays in the **Autopilot rules** table:
+
+ - **Rule Name**: The name of the rule.
+    - **State**: The status of the rule: idle (not being used) or active (being used).
+ - **Rule Type**: The type of rule being applied.
+ - **Mode**: The status of the mode: on-demand or not.
+ - **Last Generated**: The date and time the rule was last generated.
+ - **Created By**: The email address of the user who created the rule.
+ - **Last Modified**: The date and time the rule was last modified.
+ - **Subscription**: Provides an **On** or **Off** subscription that allows you to receive email notifications when recommendations have been generated, applied, or unapplied.
+
+## View other available options for rules
+
+- Select the ellipses **(...)**
+
+ The following options are available:
+
+ - **View rule**: Select to view details of the rule.
+ - **Delete rule**: Select to delete the rule. Only the user who created the selected rule can delete the rule.
+ - **Generate recommendations**: Creates recommendations for each user and the authorization system. Only the user who created the selected rule can create recommendations.
+ - **View recommendations**: Displays the recommendations for each user and authorization system.
+ - **Notification settings**: Displays the users subscribed to this rule. Only the user who created the selected rule can add other users to be notified.
+
+You can also select:
+
+- **Reload**: Select to refresh the displayed list of roles/policies.
+- **Search**: Select to search for a specific role/policy.
+- **Columns**: From the dropdown list, select the columns you want to display.
+ - Select **Reset to default** to return to the system defaults.
+- **New Rule**: Select to create a new rule. For more information, see [Create a rule](cloudknox-howto-create-rule.md).
+++
+## Next steps
+
+- For information about creating rules, see [Create a rule](cloudknox-howto-create-rule.md).
+- For information about generating, viewing, and applying rule recommendations for rules, see [Generate, view, and apply rule recommendations for rules](cloudknox-howto-recommendations-rule.md).
+- For information about notification settings for rules, see [View notification settings for a rule](cloudknox-howto-notifications-rule.md).
active-directory Cloudknox Ui Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-dashboard.md
+
+ Title: View key statistics and data about your authorization system in CloudKnox Permissions Management
+description: How to view statistics and data about your authorization system in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022++++
+# View key statistics and data about your authorization system
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+CloudKnox Permissions Management (CloudKnox) provides a summary of key statistics and data about your authorization system regularly. This information is available for Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).
+
+## View metrics related to avoidable risk
+
+The data provided by CloudKnox includes metrics related to avoidable risk. These metrics allow the CloudKnox administrator to identify areas where they can reduce risks related to the principle of least permissions.
+
+You can view the following information in CloudKnox:
+
+- The **Permission creep index (PCI)** heat map on the CloudKnox **Dashboard** identifies:
+ - The number of users who have been granted high-risk permissions but aren't using them.
+ - The number of users who contribute to the permission creep index (PCI) and where they are on the scale.
+
+- The [**Analytics** dashboard](cloudknox-usage-analytics-home.md) provides a snapshot of permission metrics within the last 90 days.
++
+## Components of the CloudKnox Dashboard
+
+The CloudKnox **Dashboard** displays the following information:
+
+- **Authorization system types**: A dropdown list of authorization system types you can access: AWS, Azure, and GCP.
+
+- **Authorization system**: Displays a **List** of accounts and **Folders** in the selected authorization system you can access.
+
+ - To add or remove accounts and folders, from the **Name** list, select or deselect accounts and folders, and then select **Apply**.
+
+- **Permission creep index (PCI)**: The graph displays the **# of identities contributing to PCI**.
+
+   The PCI graph may display one or more bubbles. Each bubble displays the number of identities that are considered high risk. *High risk* refers to users who have permissions that exceed their normal or required usage.
+ - To display a list of the number of identities contributing to the **Low PCI**, **Medium PCI**, and **High PCI**, select the **List** icon in the upper right of the graph.
+ - To display the PCI graph again, select the **Graph** icon in the upper right of the list box.
+
+- **Highest PCI change**: Displays a list of your accounts and information about the **PCI** and **Change** in the index over the past 7 days.
+ - To download the list, select the down arrow in the upper right of the list box.
+
+ The following message displays: **We'll email you a link to download the file.**
+ - Check your email for the message from the CloudKnox Customer Success Team. The email contains a link to the **PCI history** report in Microsoft Excel format.
+ - The email also includes a link to the **Reports** dashboard, where you can configure how and when you want to receive reports automatically.
+ - To view all the PCI changes, select **View all**.
+
+- **Identity**: A summary of the **Findings** that includes:
+ - The number of **Inactive** identities that haven't been accessed in over 90 days.
+ - The number of **Super** identities that access data regularly.
+ - The number of identities that can **Access secret information**: A list of roles that can access sensitive or secret information.
+ - **Over-provisioned active** identities that have more permissions than they currently access.
+ - The number of identities **With permission escalation**: A list of roles that can increase permissions.
+
+ To view the list of all identities, select **All findings**.
+
+- **Resources**: A summary of the **Findings** that includes the number of resources that are:
+ - **Open security groups**
+ - **Microsoft managed keys**
+ - **Instances with access to S3 buckets**
+ - **Unencrypted S3 buckets**
+ - **SSE-S3 Encrypted buckets**
+ - **S3 Bucket accessible externally**
+++
+## The PCI heat map
+
+The **Permission creep index** heat map shows the incurred risk of users with access to high-risk permissions, and provides information about:
+
+- Users who were given access to high-risk permissions but aren't actively using them. *High-risk permissions* include the ability to modify or delete information in the authorization system.
+
+- The number of resources a user has access to, otherwise known as resource reach.
+
+- The high-risk permissions coupled with the number of resources a user has access to produce the score seen on the chart.
+
+   Permissions are classified as *high*, *medium*, and *low*. A short sketch of this score bucketing appears after this list.
+
+   - **High** (displayed in red) - The score is between 68 and 100. The user has access to many high-risk permissions they aren't using, and has high resource reach.
+   - **Medium** (displayed in yellow) - The score is between 34 and 67. The user has access to some high-risk permissions that they use, or has medium resource reach.
+   - **Low** (displayed in green) - The score is between 0 and 33. The user has access to few high-risk permissions. They use all their permissions and have low resource reach.
+
+- The number displayed on the graph shows how many users contribute to a particular score. To view detailed data about a user, hover over the number.
+
+ The distribution graph displays all the users who contribute to the permission creep. It displays how many users contribute to a particular score. For example, if the score from the PCI chart is 14, the graph shows how many users have a score of 14.
+
+- The **PCI Trend** graph shows you the historical trend of the PCI score over the last 90 days.
+ - To download the **PCI history report**, select **Download**.
+
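+The documented score ranges are easy to mirror in code. The following sketch only buckets a given PCI score into the *low*, *medium*, and *high* bands described above; it doesn't reproduce how CloudKnox derives the score from high-risk permissions and resource reach.
+
+```python
+def classify_pci(score: int) -> str:
+    """Bucket a PCI score (0-100) using the documented ranges."""
+    if not 0 <= score <= 100:
+        raise ValueError("PCI scores range from 0 to 100")
+    if score <= 33:
+        return "Low"     # displayed in green
+    if score <= 67:
+        return "Medium"  # displayed in yellow
+    return "High"        # displayed in red
+
+print(classify_pci(14))  # Low
+print(classify_pci(72))  # High
+```
+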
+### View information on the heat map
+
+1. Select the number on the heat map bubble to display:
+
+ - The total number of **Identities** and how many of them are in the high, medium, and low categories.
+ - The **PCI trend** over the last several weeks.
+
+1. The **Identity** section below the heat map on the left side of the page shows all the relevant findings about identities, including roles that can access secret information, roles that are inactive, over-provisioned active roles, and so on.
+
+ - To expand the full list of identities, select **All findings**.
+
+1. The **Resource** section below the heat map on the right side of the page shows all the relevant findings about resources. It includes unencrypted S3 buckets, open security groups, and so on.
++
+## The Analytics summary
+
+You can also view a summary of users and activities on the [Analytics dashboard](cloudknox-usage-analytics-home.md). For each of the following high-risk tasks or actions, this dashboard displays the total number of users with access, how many of them are inactive or have unexecuted tasks, and how many are active or have executed tasks (a small worked example follows this list):
+
+- **Users with access to high-risk tasks**: Displays the total number of users with access to a high risk task (**Total**), how many users have access but haven't used the task (**Inactive**), and how many users are actively using the task (**Active**).
+
+- **Users with access to delete tasks**: A subset of high-risk tasks, which displays the number of users with access to delete tasks (**Total**), how many users have the delete permissions but haven't used the permissions (**Inactive**), and how many users are actively executing the delete capability (**Active**).
+
+- **High-risk tasks accessible by users**: Displays all available high-risk tasks in the authorization system (**Granted**), how many high-risk tasks aren't used (**Unexecuted**), and how many high-risk tasks are used (**Executed**).
+
+- **Delete tasks accessible by users**: Displays all available delete tasks in the authorization system (**Granted**), how many delete tasks aren't used (**Unexecuted**), and how many delete tasks are used (**Executed**).
+
+- **Resources that permit high-risk tasks**: Displays the total number of resources a user has access to (**Total**), how many resources are available but not used (**Inactive**), and how many resources are used (**Active**).
+
+- **Resources that permit delete tasks**: Displays the total number of resources that permit delete tasks (**Total**), how many resources with delete tasks aren't used (**Inactive**), and how many resources with delete tasks are used (**Active**).
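+
+Because each tile reports a **Total** alongside **Inactive**/**Unexecuted** and **Active**/**Executed** counts, the share of granted high-risk access that goes unused is a simple ratio. The numbers in the following sketch are invented purely to show the arithmetic.
+
+```python
+def unused_share(total: int, inactive: int) -> float:
+    """Fraction of granted access that hasn't been exercised."""
+    return inactive / total if total else 0.0
+
+# Example values only; read the real counts from the Analytics dashboard.
+total_users_with_high_risk_access = 40  # "Total"
+inactive_users = 34                     # "Inactive"
+
+share = unused_share(total_users_with_high_risk_access, inactive_users)
+print(f"{share:.0%} of users with high-risk access haven't used it")  # 85%
+```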
+++
+## Next steps
+
+- For information on how to view authorization system and account activity data on the CloudKnox Dashboard, see [View data about the activity in your authorization system](cloudknox-product-dashboard.md).
+- For an overview of the Analytics dashboard, see [An overview of the Analytics dashboard](cloudknox-usage-analytics-home.md).
++
active-directory Cloudknox Ui Remediation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-remediation.md
+
+ Title: View existing roles/policies and requests for permission in the Remediation dashboard in CloudKnox Permissions Management
+description: How to view existing roles/policies and requests for permission in the Remediation dashboard in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View roles/policies and requests for permission in the Remediation dashboard
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Remediation** dashboard in CloudKnox Permissions Management (CloudKnox) provides an overview of roles/policies, permissions, a list of existing requests for permissions, and requests for permissions you have made.
+
+This article provides an overview of the components of the **Remediation** dashboard.
+
+> [!NOTE]
+> To view the **Remediation** dashboard, you must have **Viewer**, **Controller**, or **Administrator** permissions. To make changes on this dashboard, you must have **Controller** or **Administrator** permissions. If you don't have these permissions, contact your system administrator.
+
+> [!NOTE]
+> Microsoft Azure uses the term *role* for what other cloud providers call *policy*. CloudKnox automatically makes this terminology change when you select the authorization system type. In the user documentation, we use *role/policy* to refer to both.
+
+## Display the Remediation dashboard
+
+1. On the CloudKnox home page, select the **Remediation** tab.
+
+ The **Remediation** dashboard includes six subtabs:
+
+   - **Roles/Policies**: Use this subtab to perform Create, Read, Update, Delete (CRUD) operations on roles/policies.
+   - **Permissions**: Use this subtab to perform Read, Update, Delete (RUD) operations on granted permissions.
+   - **Role/Policy template**: Use this subtab to create a template for roles/policies.
+   - **Requests**: Use this subtab to view approved, pending, and processed Permission on Demand (POD) requests.
+   - **My requests**: Use this subtab to manage the lifecycle of POD requests that you created or that need your approval.
+ - **Settings**: Use this subtab to select **Request role/policy filters**, **Request settings**, and **Auto-approve** settings.
+
+1. Use the dropdown to select the **Authorization System Type** and **Authorization System**, and then select **Apply**.
+
+## View and create roles/policies
+
+The **Role/Policies** subtab provides the following settings that you can use to view and create a role/policy.
+
+- **Authorization system type**: Displays a dropdown with authorization system types you can access, Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).
+- **Authorization system**: Displays a list of authorization systems accounts you can access.
+- **Role/Policy type**: A dropdown with available role/policy types. You can select **All**, **Custom**, **System**, or **CloudKnox only**.
+- **Role/Policy status**: A dropdown with available role/policy statuses. You can select **All**, **Assigned**, or **Unassigned**.
+- **Role/Policy usage**: A dropdown with **All** or **Unused** roles/policies.
+- **Apply**: Select this option to save the changes you've made.
+- **Reset Filter**: Select this option to discard the changes you've made.
+
+The **Role/Policies list** displays a list of existing roles/policies and the following information about each role/policy.
+
+- **Role/Policy name**: The name of the roles/policies available to you.
+- **Role/Policy type**: **Custom**, **System**, or **CloudKnox only**
+- **Actions**
+ - Select **Clone** to create a duplicate copy of the role/policy.
+ - Select **Modify** to change the existing role/policy.
+ - Select **Delete** to delete the role/policy.
+
+Other options available to you:
+- **Search**: Select this option to search for a specific role/policy.
+- **Reload**: Select this option to refresh the displayed list of roles/policies.
+- **Export CSV**: Select this option to export the displayed list of roles/policies as a comma-separated values (CSV) file.
+
+ When the file is successfully exported, a message appears: **Exported successfully.**
+
+ - Check your email for a message from the CloudKnox Customer Success Team. This email contains a link to:
+ - The **Role Policy Details** report in CSV format.
+ - The **Reports** dashboard where you can configure how and when you can automatically receive reports.
+- **Create Role/Policy**: Select this option to create a new role/policy. For more information, see [Create a role/policy](cloudknox-howto-create-role-policy.md).
++
+## Add filters to permissions
+
+The **Permissions** subtab provides the following settings that you can use to add filters to your permissions.
+
+- **Authorization system type**: Displays a dropdown with authorization system types you can access, AWS, Azure, and GCP.
+- **Authorization system**: Displays a list of authorization systems accounts you can access.
+- **Search for**: A dropdown from which you can select **Group**, **User**, or **Role**.
+- **User status**: A dropdown from which you can select **Any**, **Active**, or **Inactive**.
+- **Privilege creep index** (PCI): A dropdown from which you can select a PCI rating of **Any**, **High**, **Medium**, or **Low**.
+- **Task Usage**: A dropdown from which you can select **Any**, **Granted**, **Used**, or **Unused**.
+- **Enter a username**: A dropdown from which you can select a username.
+- **Enter a Group Name**: A dropdown from which you can select a group name.
+- **Apply**: Select this option to save the changes you've made and run the filter.
+- **Reset Filter**: Select this option to discard the changes you've made.
+- **Export CSV**: Select this option to export the displayed list of roles/policies as a comma-separated values (CSV) file.
+
+ When the file is successfully exported, a message appears: **Exported successfully.**
+
+ - Check your email for a message from the CloudKnox Customer Success Team. This email contains a link to:
+ - The **Role Policy Details** report in CSV format.
+ - The **Reports** dashboard where you can configure how and when you can automatically receive reports.
++
+## Create templates for roles/policies
+
+Use the **Role/Policy template** subtab to create a template for roles/policies.
+
+1. Select:
+   - **Authorization system type**: Displays a dropdown with authorization system types you can access, AWS, Azure, and GCP.
+ - **Create template**: Select this option to create a template.
+
+1. In the **Details** page, make the required selections:
+ - **Authorization system type**: Select the authorization system types you want, **AWS**, **Azure**, or **GCP**.
+ - **Template name**: Enter a name for your template, and then select **Next**.
+
+1. In the **Statements** page, complete the **Tasks**, **Resources**, **Request conditions**, and **Effect** sections. Then select **Save** to save your role/policy template. An illustrative sketch of what these sections represent appears after these steps.
+
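+The **Tasks**, **Resources**, **Request conditions**, and **Effect** sections correspond loosely to the fields of a standard AWS IAM policy statement (actions, resources, condition, and effect). The sketch below shows such a statement as a Python dictionary purely for orientation; it is not CloudKnox's internal template format, and the bucket name and IP range are examples.
+
+```python
+import json
+
+# Standard AWS IAM-style statement, shown only to illustrate what the
+# template sections represent. Not CloudKnox's internal format.
+statement = {
+    "Effect": "Allow",                            # Effect section
+    "Action": ["s3:GetObject", "s3:ListBucket"],  # Tasks section
+    "Resource": [                                 # Resources section
+        "arn:aws:s3:::example-bucket",
+        "arn:aws:s3:::example-bucket/*",
+    ],
+    "Condition": {                                # Request conditions section
+        "IpAddress": {"aws:SourceIp": "203.0.113.0/24"}
+    },
+}
+
+print(json.dumps({"Version": "2012-10-17", "Statement": [statement]}, indent=2))
+```
+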
+Other options available to you:
+- **Search**: Select this option to search for a specific role/policy.
+- **Reload**: Select this option to refresh the displayed list of roles/policies.
+
+## View requests for permission
+
+Use the **Requests** tab to view a list of **Pending**, **Approved**, and **Processed** requests for permissions your team members have made.
+
+- Select:
+ - **Authorization system type**: Displays a dropdown with authorization system types you can access, AWS, Azure, and GCP.
+ - **Authorization system**: Displays a list of authorization systems accounts you can access.
+
+Other options available to you:
+
+- **Reload**: Select this option to refresh the displayed list of roles/policies.
+- **Search**: Select this option to search for a specific role/policy.
+- **Columns**: Select one or more of the following to view more information about the request:
+ - **Submitted by**
+ - **On behalf of**
+ - **Authorization system**
+ - **Tasks/scope/policies**
+ - **Request date**
+ - **Schedule**
+ - **Submitted**
+ - **Reset to default**: Select this option to discard your settings.
+
+### View pending requests
+
+The **Pending** table displays the following information:
+
+- **Summary**: A summary of the request.
+- **Submitted By**: The name of the user who submitted the request.
+- **On Behalf Of**: The name of the user on whose behalf the request was made.
+- **Authorization System**: The authorization system the user selected.
+- **Task/Scope/Policies**: The type of task/scope/policy selected.
+- **Request Date**: The date when the request was made.
+- **Submitted**: The period since the request was made.
+- The ellipses **(...)** menu - Select the ellipses, and then select **Details**, **Approve**, or **Reject**.
+- Select an option:
+ - **Reload**: Select this option to refresh the displayed list of roles/policies.
+ - **Search**: Select this option to search for a specific role/policy.
+ - **Columns**: From the dropdown, select the columns you want to display.
+
+**To return to the previous view:**
+
+- Select the up arrow.
+
+### View approved requests
+
+The **Approved** table displays information about the requests that have been approved.
+
+### View processed requests
+
+The **Processed** table displays information about the requests that have been processed.
+
+## View requests for permission for your approval
+
+Use the **My Requests** subtab to view a list of **Pending**, **Approved**, and **Processed** requests for permissions that you've made or that you must approve or reject.
+
+- Select:
+ - **Authorization system type**: Displays a dropdown with authorization system types you can access, AWS, Azure, and GCP.
+ - **Authorization system**: Displays a list of authorization systems accounts you can access.
+
+Other options available to you:
+
+- **Reload**: Select this option to refresh the displayed list of roles/policies.
+- **Search**: Select this option to search for a specific role/policy.
+- **Columns**: Select one or more of the following to view more information about the request:
+ - **On behalf of**
+ - **Authorization system**
+ - **Tasks/scope/policies**
+ - **Request date**
+ - **Schedule**
+ - **Reset to default**: Select this option to discard your settings.
+- **New request**: Select this option to create a new request for permissions. For more information, see Create a request for permissions.
+
+### View pending requests
+
+The **Pending** table displays the following information:
+
+- **Summary**: A summary of the request.
+- **Submitted By**: The name of the user who submitted the request.
+- **On Behalf Of**: The name of the user on whose behalf the request was made.
+- **Authorization System**: The authorization system the user selected.
+- **Task/Scope/Policies**: The type of task/scope/policy selected.
+- **Request Date**: The date when the request was made.
+- **Submitted**: The period since the request was made.
+- The ellipses **(...)** menu - Select the ellipses, and then select **Details**, **Approve**, or **Reject**.
+- Select an option:
+ - **Reload**: Select this option to refresh the displayed list of roles/policies.
+ - **Search**: Select this option to search for a specific role/policy.
+ - **Columns**: From the dropdown, select the columns you want to display.
++
+### View approved requests
+
+The **Approved** table displays information about the requests that have been approved.
+
+### View processed requests
+
+The **Processed** table displays information about the requests that have been processed.
+
+## Make setting selections for requests and auto-approval
+
+The **Settings** subtab provides the following settings that you can use to make setting selections to **Request role/policy filters**, **Request settings**, and **Auto-approve** requests.
+
+- **Authorization system type**: Displays a dropdown with authorization system types you can access, AWS, Azure, and GCP.
+- **Authorization system**: Displays a list of authorization systems accounts you can access.
+- **Reload**: Select this option to refresh the displayed list of role/policy filters.
+- **Create filter**: Select this option to create a new filter.
+
+## Next steps
++
+- For information on how to view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](cloudknox-ui-remediation.md).
+- For information on how to create a role/policy, see [Create a role/policy](cloudknox-howto-create-role-policy.md).
+- For information on how to clone a role/policy, see [Clone a role/policy](cloudknox-howto-clone-role-policy.md).
+- For information on how to delete a role/policy, see [Delete a role/policy](cloudknox-howto-delete-role-policy.md).
+- For information on how to modify a role/policy, see [Modify a role/policy](cloudknox-howto-modify-role-policy.md).
+- To view information about roles/policies, see [View information about roles/policies](cloudknox-howto-view-role-policy.md).
+- For information on how to attach and detach permissions for AWS identities, see [Attach and detach policies for AWS identities](cloudknox-howto-attach-detach-permissions.md).
+- For information on how to revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](cloudknox-howto-revoke-task-readonly-status.md)
+- For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](cloudknox-howto-create-approve-privilege-request.md).
+
active-directory Cloudknox Ui Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-tasks.md
+
+ Title: View information about active and completed tasks in CloudKnox Permissions Management
+description: How to view information about active and completed tasks in the Activities pane in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View information about active and completed tasks
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to use the **CloudKnox Tasks** pane in CloudKnox Permissions Management (CloudKnox).
+
+## Display active and completed tasks
+
+1. In the CloudKnox home page, select **Tasks** (the timer icon).
+
+ The **CloudKnox Tasks** pane appears on the right of the CloudKnox home page. It has two tabs:
+ - **Active**: Displays a list of active tasks, a description of each task, and when the task was started.
+
+ If there are no active tasks, the following message displays: **There are no active tasks**.
+ - **Completed**: Displays a list of completed tasks, a description of each task, when the task was started and ended, and whether the task **Failed** or **Succeeded**.
+
+ If there are no completed activities, the following message displays: **There are no recently completed tasks**.
+1. To close the **CloudKnox Tasks** pane, click outside the pane.
+
+## Next steps
+
+- For information on how to create a role/policy in the **Remediation** dashboard, see [Create a role/policy](cloudknox-howto-create-role-policy.md).
active-directory Cloudknox Ui Triggers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-triggers.md
+
+ Title: View information about activity triggers in CloudKnox Permissions Management
+description: How to view information about activity triggers in the Activity triggers dashboard in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View information about activity triggers
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to use the **Activity triggers** dashboard in CloudKnox Permissions Management (CloudKnox) to view information about activity alerts and triggers.
+
+## Display the Activity triggers dashboard
+
+- In the CloudKnox home page, select **Activity triggers** (the bell icon).
+
+ The **Activity triggers** dashboard has four tabs:
+
+ - **Activity**
+ - **Rule-based anomaly**
+ - **Statistical anomaly**
+ - **Permission analytics**
+
+ Each tab has two subtabs:
+
+ - **Alerts**
+ - **Alert triggers**
+
+## View information about alerts
+
+The **Alerts** subtab in the **Activity**, **Rule-based anomaly**, **Statistical anomaly**, and **Permission analytics** tabs displays the following information:
+
+- **Alert Name**: Select **All** alert names or specific ones.
+- **Date**: Select **Last 24 hours**, **Last 2 Days**, **Last week**, or **Custom range.**
+
+ - If you select **Custom range**, also enter **From** and **To** duration settings.
+- **Apply**: Select this option to activate your settings.
+- **Reset filter**: Select this option to discard your settings.
+- **Reload**: Select this option to refresh the displayed information.
+- **Create Activity Trigger**: Select this option to [create a new alert trigger](cloudknox-howto-create-alert-trigger.md).
+- The **Alerts** table displays a list of alerts with the following information:
+ - **Alerts**: The name of the alert.
+ - **# of users subscribed**: The number of users who have subscribed to the alert.
+ - **Created by**: The name of the user who created the alert.
+ - **Modified By**: The name of the user who modified the alert.
+
+The **Rule-based anomaly** tab and the **Statistical anomaly** tab both have one more option:
+
+- **Columns**: Select the columns you want to display: **Task**, **Resource**, and **Identity**.
+ - To return to the system default settings, select **Reset to default**.
+
+## View information about alert triggers
+
+The **Alert triggers** subtab in the **Activity**, **Rule-based anomaly**, **Statistical anomaly**, and **Permission analytics** tabs displays the following information:
+
+- **Status**: Select the alert status you want to display: **All**, **Activated**, or **Deactivated**.
+- **Apply**: Select this option to activate your settings.
+- **Reset filter**: Select this option to discard your settings.
+- **Reload**: Select **Reload** to refresh the displayed information.
+- **Create Activity Trigger**: Select this option to [create a new alert trigger](cloudknox-howto-create-alert-trigger.md).
+- The **Triggers** table displays a list of triggers with the following information:
+ - **Alerts**: The name of the alert.
+ - **# of users subscribed**: The number of users who have subscribed to the alert.
+ - **Created by**: The name of the user who created the alert.
+ - **Modified By**: The name of the user who modified the alert.
++++++
+## Next steps
+
+- For information on activity alerts and alert triggers, see [Create and view activity alerts and alert triggers](cloudknox-howto-create-alert-trigger.md).
+- For information on rule-based anomalies and anomaly triggers, see [Create and view rule-based anomalies and anomaly triggers](cloudknox-product-rule-based-anomalies.md).
+- For information on finding outliers in identity's behavior, see [Create and view statistical anomalies and anomaly triggers](cloudknox-product-statistical-anomalies.md).
+- For information on permission analytics triggers, see [Create and view permission analytics triggers](cloudknox-product-permission-analytics.md).
active-directory Cloudknox Ui User Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-ui-user-management.md
+
+ Title: Manage users and groups with the User management dashboard in CloudKnox Permissions Management
+description: How to manage users and groups in the User management dashboard in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Manage users and groups with the User management dashboard
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to use the CloudKnox Permissions Management (CloudKnox) **User management** dashboard to view and manage users and groups.
+
+**To display the User management dashboard**:
+
+- In the upper right of the CloudKnox home page, select **User** (your initials), and then select **User management**.
+
+ The **User management** dashboard has two tabs:
+
+ - **Users**: Displays information about registered users.
+ - **Groups**: Displays information about groups.
+
+## Manage users
+
+Use the **Users** tab to display the following information about users:
+
+- **User name** and **Email address**: The user's name and email address.
+- **Joined on**: The date the user registered on the system.
+- **Recent activity**: The date the user last used their permissions to access the system.
+- The ellipses **(...)** menu: Select the ellipses, and then select **View Permissions** to open the **View user permission** box.
+
+ - To view details about the user's permissions, select one of the following options:
+ - **Admin for all authorization system types** provides **View**, **Control**, and **Approve** permissions for all authorization system types.
+ - **Admin for selected authorization system types** provides **View**, **Control**, and **Approve** permissions for selected authorization system types.
+ - **Custom** provides **View**, **Control**, and **Approve** permissions for the authorization system types you select.
+
+You can also select the following options:
+
+- **Reload**: Select this option to refresh the information displayed in the **User** table.
+- **Search**: Enter a name or email address to search for a specific user.
+
+## Manage groups
+
+Use the **Groups** tab to display the following information about groups:
+
+- **Group name**: The name of the group.
+- **Permissions**:
+ - The **Authorization systems** and the type of permissions the user has been granted: **Admin for all authorization system types**, **Admin for selected authorization system types**, or **Custom**.
+ - Information about the **Viewer**, **Controller**, **Approver**, and **Requestor**.
+- **Modified by**: The email address of the user who modified the group.
+- **Modified on**: The date the user last modified the group.
+
+- The ellipses **(...)** menu: Select the ellipses to:
+
+ - **View permissions**: Select this option to view details about the group's permissions, and then select one of the following options:
+ - **Admin for all authorization system types** provides **View**, **Control**, and **Approve** permissions for all authorization system types.
+ - **Admin for selected authorization system types** provides **View**, **Control**, and **Approve** permissions for selected authorization system types.
+ - **Custom** provides **View**, **Control**, and **Approve** permissions for specific authorization system types that you select.
+
+ - **Edit permissions**: Select this option to modify the group's permissions.
+ - **Delete**: Select this option to delete the group's permissions.
+
+ The **Delete permission** box asks you to confirm that you want to delete the group.
+ - Select **Delete** if you want to delete the group, **Cancel** to discard your changes.
++
+You can also select the following options:
+
+- **Reload**: Select this option to refresh the information displayed in the **User** table.
+- **Search**: Enter a name or email address to search for a specific user.
+- **Filters**: Select the authorization systems and accounts you want to display.
+- **Create permission**: Create a group and set up its permissions. For more information, see [Create group-based permissions](cloudknox-howto-create-group-based-permissions.md)
+++
+## Next steps
+
+- For information about how to view information about active and completed tasks, see [View information about active and completed tasks](cloudknox-ui-tasks.md).
+- For information about how to view personal and organization information, see [View personal and organization information](cloudknox-product-account-settings.md).
+- For information about how to select group-based permissions settings, see [Select group-based permissions settings](cloudknox-howto-create-group-based-permissions.md).
active-directory Cloudknox Usage Analytics Access Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-access-keys.md
+
+ Title: View analytic information about access keys in CloudKnox Permissions Management
+description: How to view analytic information about access keys in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View analytic information about access keys
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Analytics** dashboard in CloudKnox Permissions Management (CloudKnox) provides details about identities, resources, and tasks that you can use to make informed decisions about granting permissions and reducing the risk of unused permissions.
+
+- **Users**: Tracks assigned permissions and usage of various identities.
+- **Groups**: Tracks assigned permissions and usage of the group and the group members.
+- **Active resources**: Tracks active resources (used in the last 90 days).
+- **Active tasks**: Tracks active tasks (performed in the last 90 days).
+- **Access keys**: Tracks the permission usage of access keys for a given user.
+- **Serverless functions**: Tracks assigned permissions and usage of the serverless functions.
+
+This article describes how to view usage analytics about access keys.
+
+## Create a query to view access keys
+
+When you select **Access keys**, the **Analytics** dashboard provides a high-level overview of tasks used by various identities.
+
+1. On the main **Analytics** dashboard, select **Access keys** from the drop-down list at the top of the screen.
+
+ The following components make up the **Access keys** dashboard:
+
+ - **Authorization system type**: Select the authorization you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
+   - **Authorization system**: Select from a **List** of accounts and **Folders**.
+ - **Key status**: Select **All**, **Active**, or **Inactive**.
+ - **Key activity state**: Select **All**, how long the access key has been used, or **Not used**.
+ - **Key age**: Select **All** or how long ago the access key was created.
+ - **Task type**: Select **All** tasks, **High-risk tasks** or, for a list of tasks where users have deleted data, select **Delete tasks**.
+ - **Search**: Enter criteria to find specific tasks.
+1. Select **Apply** to display the criteria you've selected.
+
+ Select **Reset filter** to discard your changes.
++
+## View the results of your query
+
+The **Access keys** table displays the results of your query.
+
+- **Access key ID**: Provides the ID for the access key.
+ - To view details about the access keys, select the down arrow to the left of the ID.
+- The **Owner** name.
+- The **Account** number.
+- The **Permission creep index (PCI)**: Provides the following information:
+ - **Index**: A numeric value assigned to the PCI.
+ - **Since**: How many days the PCI value has been at the displayed level.
+- **Tasks**: Displays the number of **Granted** and **Executed** tasks.
+- **Resources**: The number of resources used.
+- **Access key age**: How old the access key is, in days.
+- **Last used**: How long ago the access key was last accessed.
+
+## Apply filters to your query
+
+There are many filter options within the **Access keys** screen, including filters by **Authorization system**, filters by **User**, and filters by **Task**.
+Filters can be applied in one, two, or all three categories depending on the type of information you're looking for.
+
+### Apply filters by authorization system type
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
++
+### Apply filters by authorization system
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization system** dropdown, select accounts from a **List** of accounts and **Folders**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
+
+### Apply filters by key status
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Key status** dropdown, select the type of key: **All**, **Active**, or **Inactive**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
+
+### Apply filters by key activity status
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Key activity state** dropdown, select **All**, the duration for how long the access key has been used, or **Not used**.
+
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
+
+### Apply filters by key age
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Key age** dropdown, select **All** or how long ago the access key was created.
+
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
+
+### Apply filters by task type
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Task type** dropdown, select **All** tasks, **High-risk tasks** or, for a list of tasks where users have deleted data, select **Delete tasks**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
+++
+## Export the results of your query
+
+- To view a report of the results of your query as a comma-separated values (CSV) file, select **Export**, and then select **CSV** or **CSV (Detailed)**.
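+
+Once exported, the CSV can be post-processed with ordinary tooling. The file name and column headers in the following sketch are assumptions based on the table described earlier in this article; check the headers in your actual export before running it.
+
+```python
+import csv
+from pathlib import Path
+
+EXPORT_FILE = Path("access-keys-export.csv")  # hypothetical file name
+STALE_AFTER_DAYS = 90
+
+with EXPORT_FILE.open(newline="") as f:
+    for row in csv.DictReader(f):
+        key_id = row.get("Access Key ID", "")
+        owner = row.get("Owner", "")
+        raw = row.get("Last Used", "")
+        if not raw.isdigit():
+            continue  # skip rows where the value isn't a plain day count
+        last_used_days = int(raw)
+        if last_used_days >= STALE_AFTER_DAYS:
+            print(f"Stale key {key_id} ({owner}): last used {last_used_days} days ago")
+```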
+
+## Next steps
+
+- To view active tasks, see [View usage analytics about active tasks](cloudknox-usage-analytics-active-tasks.md).
+- To view assigned permissions and usage by users, see [View usage analytics about users](cloudknox-usage-analytics-users.md).
+- To view assigned permissions and usage of the group and the group members, see [View usage analytics about groups](cloudknox-usage-analytics-groups.md).
+- To view active resources, see [View usage analytics about active resources](cloudknox-usage-analytics-active-resources.md).
+- To view assigned permissions and usage of the serverless functions, see [View usage analytics about serverless functions](cloudknox-usage-analytics-serverless-functions.md).
active-directory Cloudknox Usage Analytics Active Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-active-resources.md
+
+ Title: View analytic information about active resources in CloudKnox Permissions Management
+description: How to view usage analytics about active resources in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View analytic information about active resources
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Analytics** dashboard in CloudKnox Permissions Management (CloudKnox) collects detailed information, analyzes, reports on, and visualizes data about all identity types. System administrators can use the information to make informed decisions about granting permissions and reducing risk on unused permissions for:
+
+- **Users**: Tracks assigned permissions and usage of various identities.
+- **Groups**: Tracks assigned permissions and usage of the group and the group members.
+- **Active resources**: Tracks active resources (used in the last 90 days).
+- **Active tasks**: Tracks active tasks (performed in the last 90 days).
+- **Access keys**: Tracks the permission usage of access keys for a given user.
+- **Serverless functions**: Tracks assigned permissions and usage of the serverless functions.
+
+This article describes how to view usage analytics about active resources.
+
+## Create a query to view active resources
+
+1. On the main **Analytics** dashboard, select **Active resources** from the drop-down list at the top of the screen.
+
+   The dashboard only lists resources that are active. The following components make up the **Active resources** dashboard:
+1. From the dropdowns, select:
+ - **Authorization system type**: The authorization you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
+ - **Authorization system**: The **List** of accounts and **Folders** you want to include.
+ - **Tasks type**: Select **All** tasks, **High-risk tasks** or, for a list of tasks where users have deleted data, select **Delete tasks**.
+ - **Service resource type**: The service resource type.
+ - **Search**: Enter criteria to find specific tasks.
+
+1. Select **Apply** to display the criteria you've selected.
+
+ Select **Reset filter** to discard your changes.
++
+## View the results of your query
+
+The **Active resources** table displays the results of your query:
+
+- **Resource Name**: Provides the name of the resource.
+  - To view details about the resource, select the down arrow.
+- **Account**: The name of the account.
+- **Resources type**: The type of resources used, for example, **bucket** or **key**.
+- **Tasks**: Displays the number of **Granted** and **Executed** tasks.
+- **Number of users**: The number of users with access to the resource and the number of users who accessed it.
+- Select the ellipses **(...)** and select **Tags** to add a tag.
+
+## Add a tag to an active resource
+
+1. Select the ellipses **(...)** and select **Tags**.
+1. From the **Select a tag** dropdown, select a tag.
+1. To create a custom tag select **New custom tag**, add a tag name, and then select **Create**.
+1. In the **Value (Optional)** box, enter a value.
+1. Select the ellipses **(...)** to select **Advanced save** options, and then select **Save**.
+1. To add the tag to the active resource, select **Add tag**.
++
+## Apply filters to your query
+
+There are many filter options within the **Active resources** screen, including filters by **Authorization system type**, **Authorization system**, **Task type**, and **Service resource type**.
+Filters can be applied in one or more of these categories depending on the type of information you're looking for.
+
+### Apply filters by authorization system type
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
++
+### Apply filters by authorization system
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
+
+### Apply filters by task type
+
+You can filter the results by the type of task.
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Task type** dropdown, select the type of tasks: **All**, **High-risk tasks**, or **Delete tasks**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
++
+### Apply filters by service resource type
+
+You can filter the results by the type of service resource.
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Service Resource type**, select the type of service resource.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
+
+## Export the results of your query
+
+- To view a report of the results of your query as a comma-separated values (CSV) file, select **Export**, and then select **CSV**.
++
+## Next steps
+
+- To track active tasks, see [View usage analytics about active tasks](cloudknox-usage-analytics-active-tasks.md).
+- To track assigned permissions and usage of users, see [View usage analytics about users](cloudknox-usage-analytics-users.md).
+- To track assigned permissions and usage of the group and the group members, see [View usage analytics about groups](cloudknox-usage-analytics-groups.md).
+- To track the permission usage of access keys for a given user, see [View usage analytics about access keys](cloudknox-usage-analytics-access-keys.md).
+- To track assigned permissions and usage of the serverless functions, see [View usage analytics about serverless functions](cloudknox-usage-analytics-serverless-functions.md).
active-directory Cloudknox Usage Analytics Active Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-active-tasks.md
+
+ Title: View analytic information about active tasks in CloudKnox Permissions Management
+description: How to view analytic information about active tasks in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View analytic information about active tasks
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Analytics** dashboard in CloudKnox Permissions Management (CloudKnox) collects detailed information, analyzes, reports on, and visualizes data about all identity types. System administrators can use the information to make informed decisions about granting permissions and reducing risk on unused permissions for:
+
+- **Users**: Tracks assigned permissions and usage of various identities.
+- **Groups**: Tracks assigned permissions and usage of the group and the group members.
+- **Active resources**: Tracks active resources (used in the last 90 days).
+- **Active tasks**: Tracks active tasks (performed in the last 90 days).
+- **Access keys**: Tracks the permission usage of access keys for a given user.
+- **Serverless functions**: Tracks assigned permissions and usage of the serverless functions.
+
+This article describes how to view usage analytics about active tasks.
+
+## Create a query to view active tasks
+
+When you select **Active tasks**, the **Analytics** dashboard provides a high-level overview of tasks used by various identities.
+
+1. On the main **Analytics** dashboard, select **Active tasks** from the drop-down list at the top of the screen.
+
+ The dashboard only lists tasks that are active. The following components make up the **Active tasks** dashboard:
+
+ - **Authorization system type**: Select the authorization you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
+   - **Authorization system**: Select from a **List** of accounts and **Folders**.
+ - **Tasks type**: Select **All** tasks, **High-risk tasks** or, for a list of tasks where users have deleted data, select **Delete tasks**.
+ - **Search**: Enter criteria to find specific tasks.
+
+1. Select **Apply** to display the criteria you've selected.
+
+ Select **Reset filter** to discard your changes.
++
+## View the results of your query
+
+The **Active tasks** table displays the results of your query.
+
+- **Task Name**: Provides the name of the task.
+ - To view details about the task, select the down arrow in the table.
+
+ - A **Normal task** icon displays to the left of the task name if the task is normal (that is, not risky).
+ - A **Deleted task** icon displays to the left of the task name if the task involved deleting data.
+ - A **High-risk task** icon displays to the left of the task name if the task is high-risk.
+
+- **Performed on (resources)**: The number of resources on which the task was used.
+
+- **Number of Users**: Displays how many users performed tasks. The tasks are organized into the following columns:
+ - **With access**: Displays the number of users that have access to the task but haven't accessed it.
+ - **Accessed**: Displays the number of users that have accessed the task.
++
+## Apply filters to your query
+
+There are many filter options within the **Active tasks** screen, including filters by **Authorization system type**, **Authorization system**, and **Task type**.
+Filters can be applied in one or more of these categories depending on the type of information you're looking for.
+
+### Apply filters by authorization system type
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
+
+### Apply filters by authorization system
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization system** dropdown, select accounts from a **List** of accounts and **Folders**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
++
+### Apply filters by task type
+
+You can filter the results by the type of task.
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Task type** dropdown, select the type of tasks: **All**, **High risk tasks**, or **Delete tasks**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
++
+## Export the results of your query
+
+- To view a report of the results of your query as a comma-separated values (CSV) file, select **Export**, and then select **CSV**.
+
+## Next steps
+
+- To view assigned permissions and usage by users, see [View analytic information about users](cloudknox-usage-analytics-users.md).
+- To view assigned permissions and usage of the group and the group members, see [View analytic information about groups](cloudknox-usage-analytics-groups.md).
+- To view active resources, see [View analytic information about active resources](cloudknox-usage-analytics-active-resources.md).
+- To view the permission usage of access keys for a given user, see [View analytic information about access keys](cloudknox-usage-analytics-access-keys.md).
+- To view assigned permissions and usage of the serverless functions, see [View analytic information about serverless functions](cloudknox-usage-analytics-serverless-functions.md).
active-directory Cloudknox Usage Analytics Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-groups.md
+
+ Title: View analytic information about groups in CloudKnox Permissions Management
+description: How to view analytic information about groups in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View analytic information about groups
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Analytics** dashboard in CloudKnox Permissions Management (CloudKnox) collects detailed information, analyzes, reports on, and visualizes data about all identity types. System administrators can use the information to make informed decisions about granting permissions and reducing risk on unused permissions for:
+
+- **Users**: Tracks assigned permissions and usage of various identities.
+- **Groups**: Tracks assigned permissions and usage of the group and the group members.
+- **Active resources**: Tracks active resources (used in the last 90 days).
+- **Active tasks**: Tracks active tasks (performed in the last 90 days).
+- **Access keys**: Tracks the permission usage of access keys for a given user.
+- **Serverless functions**: Tracks assigned permissions and usage of the serverless functions.
+
+This article describes how to view usage analytics about groups.
+
+## Create a query to view groups
+
+When you select **Groups**, the **Usage Analytics** dashboard provides a high-level overview of groups.
+
+1. On the main **Analytics** dashboard, select **Groups** from the drop-down list at the top of the screen.
+
+ The following components make up the **Groups** dashboard:
+
+ - **Authorization system type**: Select the authorization you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
+ - **Authorization system**: Select from a **List** of accounts and **Folders**.
+ - **Group type**: Select **All**, **ED**, or **Local**.
+ - **Group activity status**: Select **All**, **Active**, or **Inactive**.
+ - **Tasks Type**: Select **All**, **High-risk tasks**, or **Delete tasks**
+   - **Search**: Enter a group name to find a specific group.
+1. Select **Apply** to display the criteria you've selected.
+
+   Select **Reset filter** to discard your changes.
++
+## View the results of your query
+
+The **Groups** table displays the results of your query:
+
+- **Group Name**: Provides the name of the group.
+ - To view details about the group, select the down arrow.
+- A **Group type** icon displays to the left of the group name to describe the type of group (**ED** or **Local**).
+- The **Domain/Account** name.
+- The **Permission creep index (PCI)**: Provides the following information:
+ - **Index**: A numeric value assigned to the PCI.
+ - **Since**: How many days the PCI value has been at the displayed level.
+- **Tasks**: Displays the number of **Granted** and **Executed** tasks.
+- **Resources**: The number of resources used.
+- **Users**: The number of users who accessed the group.
+- Select the ellipses **(...)** and select **Tags** to add a tag.
+
+## Add a tag to a group
+
+1. Select the ellipses **(...)** and select **Tags**.
+1. From the **Select a tag** dropdown, select a tag.
+1. To create a custom tag select **New custom tag**, add a tag name, and then select **Create**.
+1. In the **Value (Optional)** box, enter a value.
+1. Select the ellipses **(...)** to select **Advanced save** options, and then select **Save**.
+1. To add the tag to the group, select **Add tag**.
+
+## View detailed information about a group
+
+1. Select the down arrow to the left of the **Group name**.
+
+   The list of **Tasks**, organized by **Unused** and **Used**, is displayed.
+
+1. Select the arrow to the left of the task name to view details about the task.
+1. Select **Information** (**i**) to view when the task was last used.
+1. From the **Tasks** dropdown, select **All tasks**, **High-risk tasks**, or **Delete tasks**.
+1. The pane on the right displays a list of **Users**, **Policies** (for **AWS**), **Roles** (for **GCP** or **Azure**), and **Tags**.
+
+## Apply filters to your query
+
+There are many filter options within the **Groups** screen, including filters by **Authorization system type**, **Authorization system**, **Group type**, **Group activity status**, and **Tasks type**.
+Filters can be applied in one or more of these categories depending on the type of information you're looking for.
+
+### Apply filters by authorization system type
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
+
+### Apply filters by authorization system
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization system** dropdown, select accounts from a **List** of accounts and **Folders**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
++
+### Apply filters by group type
+
+You can filter the results by the type of group.
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Group type** dropdown, select the type of group: **All**, **ED**, or **Local**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
+
+### Apply filters by group activity status
+
+You can filter the results by group activity status.
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Group activity status** dropdown, select the activity status: **All**, **Active**, or **Inactive**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
+
+### Apply filters by tasks type
+
+You can filter the results by the type of task.
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Tasks type** dropdown, select the type of tasks: **All**, **High-risk tasks**, or **Delete tasks**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
++
+## Export the results of your query
+
+- To view a report of the results of your query as a comma-separated values (CSV) file, select **Export**, and then select **CSV**.
+- To view a list of members of the groups in your query, select **Export**, and then select **Memberships**.
+++
+## Next steps
+
+- To view active tasks, see [View analytic information about active tasks](cloudknox-usage-analytics-active-tasks.md).
+- To view assigned permissions and usage by users, see [View analytic information about users](cloudknox-usage-analytics-users.md).
+- To view active resources, see [View analytic information about active resources](cloudknox-usage-analytics-active-resources.md).
+- To view the permission usage of access keys for a given user, see [View analytic information about access keys](cloudknox-usage-analytics-access-keys.md).
+- To view assigned permissions and usage of the serverless functions, see [View analytic information about serverless functions](cloudknox-usage-analytics-serverless-functions.md).
active-directory Cloudknox Usage Analytics Home https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-home.md
+
+ Title: View analytic information with the Analytics dashboard in CloudKnox Permissions Management
+description: How to use the Analytics dashboard in CloudKnox Permissions Management to view details about users, groups, active resources, active tasks, access keys, and serverless functions.
+++++++ Last updated : 02/23/2022+++
+# View analytic information with the Analytics dashboard
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article provides a brief overview of the Analytics dashboard in CloudKnox Permissions Management (CloudKnox), and the type of analytic information it provides for Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
+
+## Display the Analytics dashboard
+
+- From the CloudKnox home page, select the **Analytics** tab.
+
+ The **Analytics** dashboard displays detailed information about:
+
+ - **Users**: Tracks assigned permissions and usage by users. For more information, see [View analytic information about users](cloudknox-usage-analytics-users.md).
+
+ - **Groups**: Tracks assigned permissions and usage of the group and the group members. For more information, see [View analytic information about groups](cloudknox-usage-analytics-groups.md).
+
+ - **Active Resources**: Tracks resources that have been used in the last 90 days. For more information, see [View analytic information about active resources](cloudknox-usage-analytics-active-resources.md).
+
+ - **Active Tasks**: Tracks tasks that have been performed in the last 90 days. For more information, see [View analytic information about active tasks](cloudknox-usage-analytics-active-tasks.md).
+
+ - **Access Keys**: Tracks the permission usage of access keys for a given user. For more information, see [View analytic information about access keys](cloudknox-usage-analytics-access-keys.md).
+
+ - **Serverless Functions**: Tracks assigned permissions and usage of the serverless functions for AWS only. For more information, see [View analytic information about serverless functions](cloudknox-usage-analytics-serverless-functions.md).
+
+ System administrators can use this information to make decisions about granting permissions and reducing risk on unused permissions.
+++
+## Next steps
+
+- To view active tasks, see [View analytic information about active tasks](cloudknox-usage-analytics-active-tasks.md).
+- To view assigned permissions and usage by users, see [View analytic information about users](cloudknox-usage-analytics-users.md).
+- To view assigned permissions and usage of the group and the group members, see [View analytic information about groups](cloudknox-usage-analytics-groups.md).
+- To view active resources, see [View analytic information about active resources](cloudknox-usage-analytics-active-resources.md).
+- To view the permission usage of access keys for a given user, see [View analytic information about access keys](cloudknox-usage-analytics-access-keys.md).
+- To view assigned permissions and usage of the serverless functions, see [View analytic information about serverless functions](cloudknox-usage-analytics-serverless-functions.md).
active-directory Cloudknox Usage Analytics Serverless Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-serverless-functions.md
+
+ Title: View analytic information about serverless functions in CloudKnox Permissions Management
+description: How to view analytic information about serverless functions in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View analytic information about serverless functions
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Analytics** dashboard in CloudKnox Permissions Management (CloudKnox) collects detailed information, analyzes, reports on, and visualizes data about all identity types. System administrators can use the information to make informed decisions about granting permissions and reducing risk on unused permissions for:
+
+- **Users**: Tracks assigned permissions and usage of various identities.
+- **Groups**: Tracks assigned permissions and usage of the group and the group members.
+- **Active resources**: Tracks active resources (used in the last 90 days).
+- **Active tasks**: Tracks active tasks (performed in the last 90 days).
+- **Access keys**: Tracks the permission usage of access keys for a given user.
+- **Serverless functions**: Tracks assigned permissions and usage of the serverless functions.
+
+This article describes how to view usage analytics about serverless functions.
+
+## Create a query to view serverless functions
+
+When you select **Serverless functions**, the **Analytics** dashboard provides a high-level overview of tasks used by various identities.
+
+1. On the main **Analytics** dashboard, select **Serverless functions** from the dropdown list at the top of the screen.
+
+ The following components make up the **Serverless functions** dashboard:
+
+ - **Authorization system type**: Select the authorization you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
+ - **Authorization system**: Select from a **List** of accounts and **Folders**.
+ - **Search**: Enter criteria to find specific tasks.
+1. Select **Apply** to display the criteria you've selected.
+
+ Select **Reset filter** to discard your changes.
++
+## View the results of your query
+
+The **Serverless functions** table displays the results of your query.
+
+- **Function name**: Provides the name of the serverless function.
+ - To view details about a serverless function, select the down arrow to the left of the function name.
+- A **Function type** icon displays to the left of the function name to describe the type of serverless function, for example **Lambda function**.
+- The **Permission creep index (PCI)**: Provides the following information:
+ - **Index**: A numeric value assigned to the PCI.
+ - **Since**: How many days the PCI value has been at the displayed level.
+- **Tasks**: Displays the number of **Granted** and **Executed** tasks.
+- **Resources**: The number of resources used.
+- **Last activity on**: The date the function was last accessed.
+- Select the ellipses **(...)**, and then select **Tags** to add a tag.
+
+## Add a tag to a serverless function
+
+1. Select the ellipses **(...)** and select **Tags**.
+1. From the **Select a tag** dropdown, select a tag.
+1. To create a custom tag select **New custom tag**, add a tag name, and then select **Create**.
+1. In the **Value (Optional)** box, enter a value.
+1. Select the ellipses **(...)** to select **Advanced save** options, and then select **Save**.
+1. To add the tag to the serverless function, select **Add tag**.
+
+## View detailed information about a serverless function
+
+1. Select the down arrow to the left of the function name to display the following:
+
+ - A list of **Tasks** organized by **Used** and **Unused**.
+ - **Versions**, if a version is available.
+
+1. Select the arrow to the left of the task name to view details about the task.
+1. Select **Information** (**i**) to view when the task was last used.
+1. From the **Tasks** dropdown, select **All tasks**, **High-risk tasks**, or **Delete tasks**.
++
+## Apply filters to your query
+
+You can filter the **Serverless functions** results by **Authorization system type** and **Authorization system**.
+
+### Apply filters by authorization system type
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
++
+### Apply filters by authorization system
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization system** dropdown, select accounts from a **List** of accounts and **Folders**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
+++
+## Next steps
+
+- To view active tasks, see [View usage analytics about active tasks](cloudknox-usage-analytics-active-tasks.md).
+- To view assigned permissions and usage by users, see [View analytic information about users](cloudknox-usage-analytics-users.md).
+- To view assigned permissions and usage of the group and the group members, see [View analytic information about groups](cloudknox-usage-analytics-groups.md).
+- To view active resources, see [View analytic information about active resources](cloudknox-usage-analytics-active-resources.md).
+- To view the permission usage of access keys for a given user, see [View analytic information about access keys](cloudknox-usage-analytics-access-keys.md).
active-directory Cloudknox Usage Analytics Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-usage-analytics-users.md
+
+ Title: View analytic information about users in CloudKnox Permissions Management
+description: How to view analytic information about users in CloudKnox Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View analytic information about users
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Analytics** dashboard in CloudKnox Permissions Management (CloudKnox) collects detailed information, analyzes, reports on, and visualizes data about all identity types. System administrators can use the information to make informed decisions about granting permissions and reducing risk on unused permissions for:
+
+- **Users**: Tracks assigned permissions and usage of various identities.
+- **Groups**: Tracks assigned permissions and usage of the group and the group members.
+- **Active resources**: Tracks active resources (used in the last 90 days).
+- **Active tasks**: Tracks active tasks (performed in the last 90 days).
+- **Access keys**: Tracks the permission usage of access keys for a given user.
+- **Serverless functions**: Tracks assigned permissions and usage of the serverless functions.
+
+This article describes how to view usage analytics about users.
+
+## Create a query to view users
+
+When you select **Users**, the **Analytics** dashboard provides a high-level overview of tasks used by various identities.
+
+1. On the main **Analytics** dashboard, select **Users** from the drop-down list at the top of the screen.
+
+ The following components make up the **Users** dashboard:
+
+ - **Authorization system type**: Select the authorization you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
+   - **Authorization system**: Select from a **List** of accounts and **Folders**.
+ - **Identity type**: Select **All** identity types, **User**, **Role/App/Service a/c** or **Resource**.
+ - **Search**: Enter criteria to find specific tasks.
+1. Select **Apply** to display the criteria you've selected.
+
+ Select **Reset filter** to discard your changes.
++
+## View the results of your query
+
+The **Identities** table displays the results of your query.
+
+- **Name**: Provides the name of the user.
+  - To view details about the user, select the down arrow.
+- The **Domain/Account** name.
+- The **Permission creep index (PCI)**: Provides the following information:
+ - **Index**: A numeric value assigned to the PCI.
+ - **Since**: How many days the PCI value has been at the displayed level.
+- **Tasks**: Displays the number of **Granted** and **Executed** tasks.
+- **Resources**: The number of resources used.
+- **User groups**: The number of users who accessed the group.
+- **Last activity on**: The date of the user's last activity.
+- The ellipses **(...)**: Select **Tags** to add a tag.
+
+ If you're using AWS, another selection is available from the ellipses menu: **Auto Remediate**. You can use this option to remediate your results automatically.
+
+## Add a tag to a user
+
+1. Select the ellipses **(...)** and select **Tags**.
+1. From the **Select a tag** dropdown, select a tag.
+1. To create a custom tag select **New custom tag**, add a tag name, and then select **Create**.
+1. In the **Value (Optional)** box, enter a value.
+1. Select the ellipses **(...)** to select **Advanced save** options, and then select **Save**.
+1. To add the tag to the user, select **Add tag**.
+
+## Set the auto-remediate option (AWS only)
+
+- Select the ellipses **(...)** and select **Auto Remediate**.
+
+ A message displays to confirm that your remediation settings are automatically updated.
+
+## Apply filters to your query
+
+There are many filter options within the **Users** screen, including filters by **Authorization system**, **Identity type**, and **Identity state**.
+Filters can be applied in one or more of these categories depending on the type of information you're looking for.
+
+### Apply filters by authorization system type
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
++
+### Apply filters by authorization system
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization system** dropdown, select accounts from a **List** of accounts and **Folders**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
+
+### Apply filters by identity type
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Identity type**, select the type of user: **All**, **User**, **Role/App/Service a/c**, or **Resource**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
+
+### Apply filters by identity subtype
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Identity subtype**, select the subtype: **All**, **ED**, **Local**, or **Cross-account**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
+
+### Apply filters by identity state
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Identity state**, select the state: **All**, **Active**, or **Inactive**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
++
+### Apply filters by identity filters
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Identity type**, select: **Risky** or **Inc. in PCI calculation only**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
++
+### Apply filters by task type
+
+You can filter the results by the type of task.
+
+1. From the **Authorization system type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization system** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Task type**, select the type of tasks: **All** or **High-risk tasks**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset filter** to discard your changes.
++
+## Export the results of your query
+
+- To export a report of the results of your query as a comma-separated values (CSV) file, select **Export**, and then select **CSV**.
+- To export the data in a detailed comma-separated values (CSV) file format, select **Export** and then select **CSV (Detailed)**.
+- To export a report of user permissions, select **Export** and then select **Permissions**.
++
+## Next steps
+
+- To view active tasks, see [View analytic information about active tasks](cloudknox-usage-analytics-active-tasks.md).
+- To view assigned permissions and usage of the group and the group members, see [View analytic information about groups](cloudknox-usage-analytics-groups.md).
+- To view active resources, see [View analytic information about active resources](cloudknox-usage-analytics-active-resources.md).
+- To view the permission usage of access keys for a given user, see [View analytic information about access keys](cloudknox-usage-analytics-access-keys.md).
+- To view assigned permissions and usage of the serverless functions, see [View analytic information about serverless functions](cloudknox-usage-analytics-serverless-functions.md).
active-directory Azuread Join Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/azuread-join-sso.md
If you have a hybrid environment, with both Azure AD and on-premises AD, it's li
1. The local security authority (LSA) service enables Kerberos and NTLM authentication on the device. > [!NOTE]
-> Windows Hello for Business requires additional configuration to enable on-premises SSO from an Azure AD joined device. For more information, see [Configure Azure AD joined devices for On-premises Single-Sign On using Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-hybrid-aadj-sso-base).
+> Additional configuration is required when passwordless authentication to Azure AD joined devices is used.
>
-> FIDO2 security key based passwordless authentication with Windows 10 or newer requires additional configuration to enable on-premises SSO from an Azure AD joined device. For more information, see [Enable passwordless security key sign-in to on-premises resources with Azure Active Directory](../authentication/howto-authentication-passwordless-security-key-on-premises.md).
+> For FIDO2 security key based passwordless authentication and Windows Hello for Business Hybrid Cloud Trust, see [Enable passwordless security key sign-in to on-premises resources with Azure Active Directory](../authentication/howto-authentication-passwordless-security-key-on-premises.md).
+>
+> For Windows Hello for Business Hybrid Key Trust, see [Configure Azure AD joined devices for On-premises Single-Sign On using Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-hybrid-aadj-sso-base).
+>
+> For Windows Hello for Business Hybrid Certificate Trust, see [Using Certificates for AADJ On-premises Single-sign On](/windows/security/identity-protection/hello-for-business/hello-hybrid-aadj-sso-cert).
During an access attempt to a resource requesting Kerberos or NTLM in the user's on-premises environment, the device:
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
This Exit code translates to `DSREG_AUTOJOIN_DISC_FAILED` because the extension
1. Verify the required endpoints are accessible from the VM using PowerShell:
- - `curl https://login.microsoftonline.com/ -D -`
- - `curl https://login.microsoftonline.com/<TenantID>/ -D -`
- - `curl https://enterpriseregistration.windows.net/ -D -`
- - `curl https://device.login.microsoftonline.com/ -D -`
- - `curl https://pas.windows.net/ -D -`
+   - `curl https://login.microsoftonline.com/ -D -`
+   - `curl https://login.microsoftonline.com/<TenantID>/ -D -`
+   - `curl https://enterpriseregistration.windows.net/ -D -`
+   - `curl https://device.login.microsoftonline.com/ -D -`
+   - `curl https://pas.windows.net/ -D -`
> [!NOTE] > Replace `<TenantID>` with the Azure AD Tenant ID that is associated with the Azure subscription. If you need to find the tenant ID, you can hover over your account name to get the directory / tenant ID, or select **Azure Active Directory > Properties > Directory ID** in the Azure portal.<br/>`enterpriseregistration.windows.net` and `pas.windows.net` should return 404 Not Found, which is expected behavior.
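To check all of the endpoints in one pass, a small loop such as the following sketch prints the HTTP status code returned by each host. The endpoint list mirrors the one above; replace the `<TenantID>` placeholder before running, and remember that `enterpriseregistration.windows.net` and `pas.windows.net` are expected to return 404.

```bash
# Placeholder - replace with your Azure AD tenant ID
TENANT_ID="<TenantID>"

for url in \
  "https://login.microsoftonline.com/" \
  "https://login.microsoftonline.com/${TENANT_ID}/" \
  "https://enterpriseregistration.windows.net/" \
  "https://device.login.microsoftonline.com/" \
  "https://pas.windows.net/"
do
  # -s silences progress output, -o /dev/null discards the body, -w prints only the status code
  status=$(curl -s -o /dev/null -w "%{http_code}" "$url")
  echo "$url -> HTTP $status"
done
```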
active-directory Cross Tenant Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-overview.md
Previously updated : 02/07/2022 Last updated : 02/23/2022
The output is a summary of all available sign-in events for inbound and outbound
To determine your users' access to external Azure AD organizations, you can use the [Get-MgAuditLogSignIn](/powershell/module/microsoft.graph.reports/get-mgauditlogsignin) cmdlet in the Microsoft Graph PowerShell SDK to view data from your sign-in logs for the last 30 days. For example, run the following command: ```powershell
-Get-MgAuditLogSignIn `
--Filter "ResourceTenantID ne 'your tenant id'" ` --all:$True| `
-group ResourceTenantId,AppDisplayName,UserPrincipalName| `
-select count, @{n='Ext TenantID/App User Pair';e={$_.name}}]
+#Initial connection
+Connect-MgGraph -Scopes "AuditLog.Read.All"
+Select-MgProfile -Name "beta"
+
+#Get external access
+$TenantId = "<replace-with-your-tenant-ID>"
+
+Get-MgAuditLogSignIn -Filter "ResourceTenantId ne '$TenantID'" -All:$True |
+Group-Object ResourceTenantId,AppDisplayName,UserPrincipalName |
+Select-Object count,@{n='Ext TenantID/App User Pair';e={$_.name}}
``` The output is a list of outbound sign-ins initiated by your users to apps in external tenants.
active-directory External Identities Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/external-identities-overview.md
Previously updated : 02/07/2022 Last updated : 02/23/2022
Learn more about [B2B collaboration in Azure AD](what-is-b2b.md).
Azure AD B2C is a Customer Identity and Access Management (CIAM) solution that lets you build user journeys for consumer- and customer-facing apps. If you're a business or individual developer creating customer-facing apps, you can scale to millions of consumers, customers, or citizens by using Azure AD B2C. Developers can use Azure AD B2C as the full-featured CIAM system for their applications.
-With Azure AD B2C, customers can sign in with an identity they've already established (like Facebook or Gmail). With Azure AD B2C, you can completely customize and control how customers sign up, sign in, and manage their profiles when using your applications. For more information, see the Azure AD B2C documentation.
+With Azure AD B2C, customers can sign in with an identity they've already established (like Facebook or Gmail). You can completely customize and control how customers sign up, sign in, and manage their profiles when using your applications.
-Learn more about [Azure AD B2C](../../active-directory-b2c/index.yml).
+Although Azure AD B2C is built on the same technology as Azure AD, it's a separate service with some feature differences. For more information about how an Azure AD B2C tenant differs from an Azure AD tenant, see [Supported Azure AD features](../../active-directory-b2c/supported-azure-ad-features.md) in the [Azure AD B2C documentation](../../active-directory-b2c/index.yml).
## Comparing External Identities feature sets
active-directory Entitlement Management Onboard External User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-onboard-external-user.md
Title: Tutorial - Onboard external users to Azure AD through an approval process
description: Step-by-step tutorial for how to create an access package for external users requiring approvals in Azure Active Directory entitlement management. documentationCenter: ''-+ na
For more information, see [License requirements](entitlement-management-overview
2. In the **Users who can request access** section, click **For users not in your directory** and then click **All users (All connected organizations + any new external users)**.
-3. Ensure that **Require approval** is set to **Yes**.
+3. Because any user who is not yet in your directory can view and submit a request for this access package, **Yes** is mandatory for the **Require approval** setting.
4. The following settings allow you to configure how your approvals work for your external users:
active-directory Smartsheet Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smartsheet-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
![Screenshot of the Provisioning Mode dropdown list with the Automatic option called out.](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, input the **SCIM 2.0 base URL and Access Token** values retrieved earlier from Smartsheet in **Tenant URL** and **Secret Token** respectively.. Click **Test Connection** to ensure Azure AD can connect to Smartsheet. If the connection fails, ensure your Smartsheet account has SysAdmin permissions and try again.
+5. Under the **Admin Credentials** section, enter the **SCIM 2.0 base URL** (https://scim.smartsheet.com/v2) in **Tenant URL**, and enter the **Access Token** value retrieved earlier from Smartsheet in **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to Smartsheet. If the connection fails, ensure your Smartsheet account has SysAdmin permissions and try again.
![Token](common/provisioning-testconnection-tenanturltoken.png)
Once you've configured provisioning, use the following resources to monitor your
* 06/16/2020 - Added support for enterprise extension attributes "Cost Center", "Division", "Manager" and "Department" for users. * 02/10/2021 - Added support for core attributes "emails[type eq "work"]" for users.
+* 02/12/2022 - Added SCIM base/tenant URL of https://scim.smartsheet.com/v2 for SmartSheet integration under Admin Credentials section.
## Additional resources
Once you've configured provisioning, use the following resources to monitor your
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Credential Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/credential-design.md
To ensure interoperability of your credentials, it's recommended that you work c
{ "mapping": { "first_name": {
- "claim": "$.vc.credentialSubject.firstName",
+ "claim": "$.vc.credentialSubject.firstName"
}, "last_name": { "claim": "$.vc.credentialSubject.lastName",
To ensure interoperability of your credentials, it's recommended that you work c
"vc": { "type": [ "ProofOfNinjaNinja"
- ],
+ ]
} } ```
advisor Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Advisor description: Sample Azure Resource Graph queries for Azure Advisor showing use of resource types and tables to access Azure Advisor related resources and properties. Previously updated : 01/20/2022 Last updated : 02/16/2022
aks Kubernetes Walkthrough Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-walkthrough-portal.md
Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP a
azure-vote-front LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m ```
-To see the Azure Vote app in action, open a web browser to the external IP address of you
+To see the Azure Vote app in action, open a web browser to the external IP address of your service.
:::image type="content" source="media/container-service-kubernetes-walkthrough/azure-voting-application.png" alt-text="Image of browsing to Azure Vote sample application":::
aks Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/nat-gateway.md
To use Managed NAT gateway, you must have the following:
* The `aks-preview` extension version 0.5.31 or later * Kubernetes version 1.20.x or above
+### Install aks-preview CLI extension
+
+You also need the *aks-preview* Azure CLI extension version 0.5.31 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
### Register the `AKS-NATGatewayPreview` feature flag
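The registration commands themselves are not shown in this excerpt. A minimal sketch of the usual Azure CLI pattern for this feature flag:

```azurecli-interactive
# Register the AKS-NATGatewayPreview feature flag
az feature register --namespace "Microsoft.ContainerService" --name "AKS-NATGatewayPreview"

# Check the registration state; wait until it shows "Registered"
az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-NATGatewayPreview')].{Name:name,State:properties.state}"

# Propagate the change to the resource provider
az provider register --namespace Microsoft.ContainerService
```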
To create an AKS cluster with a user-assigned NAT Gateway, use `--outbound-type
[az-feature-list]: /cli/azure/feature#az_feature_list [az-provider-register]: /cli/azure/provider#az_provider_register [byo-vnet-azure-cni]: configure-azure-cni.md
-[byo-vnet-kubenet]: configure-kubenet.md
+[byo-vnet-kubenet]: configure-kubenet.md
+[az-extension-add]: /cli/azure/extension#az_extension_add
+[az-extension-update]: /cli/azure/extension#az_extension_update
aks Uptime Sla https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/uptime-sla.md
Create a new cluster, and don't use Uptime SLA:
```azurecli-interactive # Create a new cluster without uptime SLA
-az aks create --resource-group myResourceGroup --name myAKSCluster--node-count 1
+az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1
``` Use the [`az aks update`][az-aks-update] command to update the existing cluster:
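The update command itself is not included in this excerpt. A minimal sketch, assuming the same resource group and cluster names used above:

```azurecli-interactive
# Enable Uptime SLA on an existing cluster
az aks update --resource-group myResourceGroup --name myAKSCluster --uptime-sla
```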
aks Use Azure Dedicated Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-dedicated-hosts.md
The following limitations apply when you integrate Azure Dedicated Host with Azu
* An existing agent pool can't be converted from non-ADH to ADH or ADH to non-ADH. * It is not supported to update agent pool from host group A to host group B.
+* Fault domain count can only be 1.
## Add a Dedicated Host Group to an AKS cluster
Not all host SKUs are available in all regions, and availability zones. You can
az vm list-skus -l eastus2 -r hostGroups/hosts -o table ```
-## Add Dedicated Hosts to the Host Group
+## Create a Host Group
A host group is a resource that represents a collection of dedicated hosts. You create a host group in a region and an availability zone, and then add hosts to it.
az vm host group create \
--name myHostGroup \
-g myDHResourceGroup \
-z 1 \
+--platform-fault-domain-count 1
+```
+
+## Create a Dedicated Host
+
+Now create a dedicated host in the host group. In addition to a name for the host, you are required to provide the SKU for the host. Host SKU captures the supported VM series as well as the hardware generation for your dedicated host.
+
+If you set a fault domain count for your host group, you will need to specify the fault domain for your host.
+
+```azurecli-interactive
+az vm host create \
+--host-group myHostGroup \
+--name myHost \
+--sku DSv3-Type1 \
+--platform-fault-domain 0 \
+-g myDHResourceGroup
+```
+
+## Use a user-assigned Identity
+
+> [!IMPORTANT]
+> A user-assigned Identity with "contributor" role on the Resource Group of the Host Group is required.
+>
+
+First, create a Managed Identity
+
+```azurecli-interactive
+az identity create -g <Resource Group> -n <Managed Identity name>
+```
+
+Assign Managed Identity
+
+```azurecli-interactive
+az role assignment create --assignee <id> --role "Contributor" --scope <Resource id>
``` ## Create an AKS cluster using the Host Group
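The cluster creation command is not included in this excerpt. As a sketch only, assuming the preview `--host-group-id` parameter exposed by the aks-preview extension and the managed identity created above (all names and resource IDs are placeholders):

```azurecli-interactive
# Placeholder values - substitute your own names, host group resource ID, and identity resource ID
az aks create \
    --resource-group myDHResourceGroup \
    --name myAKSCluster \
    --location eastus2 \
    --node-vm-size Standard_D2s_v3 \
    --node-count 2 \
    --host-group-id <host-group-resource-id> \
    --enable-managed-identity \
    --assign-identity <user-assigned-identity-resource-id>
```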
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
It takes a few minutes for the scale operation to complete.
AKS offers a separate feature to automatically scale node pools with a feature called the [cluster autoscaler](cluster-autoscaler.md). This feature can be enabled per node pool with unique minimum and maximum scale counts per node pool. Learn how to [use the cluster autoscaler per node pool](cluster-autoscaler.md#use-the-cluster-autoscaler-with-multiple-node-pools-enabled).
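For reference, enabling the autoscaler on an individual node pool is a single CLI call. A minimal sketch (the node pool name and counts are examples):

```azurecli-interactive
# Enable the cluster autoscaler on one node pool with its own minimum and maximum node counts
az aks nodepool update \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --enable-cluster-autoscaler \
    --min-count 1 \
    --max-count 5
```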
+## Resize a node pool
+
+To increase the number of deployments or run a larger workload, you may want to change the virtual machine scale set plan or resize AKS instances. However, you should not do any direct customizations to these nodes using the IaaS APIs or resources, as any custom changes that are not done via the AKS API will not persist through an upgrade, scale, update or reboot. This means resizing your AKS instances in this manner is not supported.
+
+The recommended method to resize a node pool to the desired SKU size is as follows:
+
+* Create a new node pool with the new SKU size
+* Cordon and drain the nodes in the old node pool in order to move workloads to the new nodes
+* Remove the old node pool.
+
+> [!IMPORTANT]
+> This method is specific to virtual machine scale set-based AKS clusters. When using virtual machine availability sets, you are limited to only one node pool per cluster.
+
+### Create a new node pool with the desired SKU
+
+The following command creates a new node pool with 2 nodes using the `Standard_DS3_v2` VM SKU:
+
+> [!NOTE]
+> Every AKS cluster must contain at least one system node pool with at least one node. In the below example, we are using a `--mode` of `System`, as the cluster is assumed to have only one node pool, necessitating a `System` node pool to replace it. A node pool's mode can be [updated at any time][update-node-pool-mode].
+
+```azurecli-interactive
+az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name mynodepool \
+ --node-count 2 \
+ --node-vm-size Standard_DS3_v2 \
+ --mode System \
+ --no-wait
+```
+
+Be sure to consider other requirements and configure your node pool accordingly. You may need to modify the above command. For a full list of the configuration options, please see the [az aks nodepool add][az-aks-nodepool-add] reference page.
+
+### Cordon the existing nodes
+
+Cordoning marks specified nodes as unschedulable and prevents any additional pods from being added to the nodes.
+
+First, obtain the names of the nodes you'd like to cordon with `kubectl get nodes`. Your output should look similar to the following:
+
+```bash
+NAME                                STATUS   ROLES   AGE     VERSION
+aks-nodepool1-31721111-vmss000000   Ready    agent   7d21h   v1.21.9
+aks-nodepool1-31721111-vmss000001   Ready    agent   7d21h   v1.21.9
+aks-nodepool1-31721111-vmss000002   Ready    agent   7d21h   v1.21.9
+```
+
+Next, using `kubectl cordon <node-names>`, specify the desired nodes in a space-separated list:
+
+```bash
+kubectl cordon aks-nodepool1-31721111-vmss000000 aks-nodepool1-31721111-vmss000001 aks-nodepool1-31721111-vmss000002
+```
+
+If successful, your output should look similar to the following:
+
+```bash
+node/aks-nodepool1-31721111-vmss000000 cordoned
+node/aks-nodepool1-31721111-vmss000001 cordoned
+node/aks-nodepool1-31721111-vmss000002 cordoned
+```
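+
+If the old pool has many nodes, you can also build the node list from the pool's node label rather than typing each name. This sketch assumes the `agentpool` label that AKS applies to its nodes:
+
+```bash
+# Cordon every node that carries the old node pool's label (label name assumed)
+kubectl get nodes -l agentpool=nodepool1 -o name | xargs kubectl cordon
+```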
+
+### Drain the existing nodes
+
+> [!IMPORTANT]
+> To successfully drain nodes and evict running pods, ensure that any PodDisruptionBudgets (PDBs) allow at least one pod replica to be moved at a time; otherwise, the drain/evict operation will fail. To check this, run `kubectl get pdb -A` and make sure `ALLOWED DISRUPTIONS` is at least 1.
+
+Draining nodes will cause pods running on them to be evicted and recreated on the other, schedulable nodes.
+
+To drain nodes, use `kubectl drain <node-names> --ignore-daemonsets --delete-emptydir-data`, again using a space-separated list of node names:
+
+> [!IMPORTANT]
+> Using `--delete-emptydir-data` is required to evict the AKS-created `coredns` and `metrics-server` pods. If this flag isn't used, an error is expected. Please see the [documentation on emptydir][empty-dir] for more information.
+
+```bash
+kubectl drain aks-nodepool1-31721111-vmss000000 aks-nodepool1-31721111-vmss000001 aks-nodepool1-31721111-vmss000002 --ignore-daemonsets --delete-emptydir-data
+```
+
+> [!TIP]
+> By default, your cluster has AKS-managed pod disruption budgets (such as `coredns-pdb` or `konnectivity-agent`) with a `MinAvailable` of 1. If, for example, two `coredns` pods are running and one of them is being recreated and is unavailable, the pod disruption budget prevents the other one from being evicted. This resolves itself after the initial `coredns` pod is scheduled and running, allowing the second pod to be properly evicted and recreated.
+>
+> Consider draining nodes one-by-one for a smoother eviction experience and to avoid throttling. For more information, see [plan for availability using a pod disruption budget][pod-disruption-budget].
+
+After the drain operation finishes, verify pods are running on the new nodepool:
+
+```bash
+kubectl get pods -o wide -A
+```
+
+### Remove the existing node pool
+
+To delete the existing node pool, see the section on [Deleting a node pool](#delete-a-node-pool).
+
+After completion, the AKS cluster has a single, new node pool with the desired SKU size, and all the applications and pods are running properly.
+ ## Delete a node pool If you no longer need a pool, you can delete it and remove the underlying VM nodes. To delete a node pool, use the [az aks node pool delete][az-aks-nodepool-delete] command and specify the node pool name. The following example deletes the *mynoodepool* created in the previous steps:
Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your
[kubernetes-labels]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ [kubernetes-label-syntax]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set [capacity-reservation-groups]:/azure/virtual-machines/capacity-reservation-associate-virtual-machine-scale-set
+[empty-dir]: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
<!-- INTERNAL LINKS --> [aks-windows]: windows-container-cli.md
Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your
[node-image-upgrade]: node-image-upgrade.md [fips]: /azure/compliance/offerings/offering-fips-140-2 [use-tags]: use-tags.md
+[update-node-pool-mode]: use-system-pools.md#update-existing-cluster-system-and-user-node-pools
+[pod-disruption-budget]: operator-best-practices-scheduler.md#plan-for-availability-using-pod-disruption-budgets
api-management Api Management Revisions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-revisions.md
description: Learn about the concept of revisions in Azure API Management.
documentationcenter: ''
-
Previously updated : 06/12/2020 Last updated : 02/22/2022
Each revision to your API can be accessed using a specially formed URL. Append `
By default, each revision has the same security settings as the current revision. You can deliberately change the policies for a specific revision if you want to have different security applied for each revision. For example, you might want to add an [IP filtering policy](./api-management-access-restriction-policies.md#RestrictCallerIPs) to prevent external callers from accessing a revision that is still under development.
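
For instance, a sketch of such a policy scoped to the development revision (the address range is a placeholder):

```xml
<ip-filter action="allow">
    <!-- Only callers from this range can reach the revision -->
    <address-range from="10.0.0.1" to="10.0.0.254" />
</ip-filter>
```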
-A revision can be taken offline, which makes it inaccessible to callers even if they try to access the revision through its URL. You can mark a revision as offline using the Azure portal. If you use PowerShell, you can use the `Set-AzApiManagementApiRevision` cmdlet and set the `Path` argument to `$null`.
-
-> [!NOTE]
-> We suggest taking revisions offline when you aren't using them for testing.
- ## Current revision A single revision can be set as the *current* revision. This revision will be the one used for all API requests that don't specify an explicit revision number in the URL. You can roll back to a previous revision by setting that revision as current.
When you set a revision as current you can also optionally specify a public chan
> These properties can only be changed in the current revision. If your edits change any of the above > properties of a non-current revision, the error message `Can't change property for non-current revision` will be displayed.
+## Take a revision offline
+
+A revision can be taken offline, which makes it inaccessible to callers even if they try to access the revision through its URL. You can mark a revision as offline using the Azure portal.
+
+> [!NOTE]
+> We suggest taking revisions offline when you aren't using them for testing.
+ ## Versions and revisions Versions and revisions are distinct features. Each version can have multiple revisions, just like a non-versioned API. You can use revisions without using versions, or the other way around. Typically versions are used to separate API versions with breaking changes, while revisions can be used for minor and non-breaking changes to an API.
api-management Developer Portal Implement Widgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-implement-widgets.md
Use a `widget` scaffold from the `/scaffolds` folder as a starting point to buil
## Rename exported module classes
-Rename the exported module classes by replacing the `Widget` prefix with `ConferenceSession` in these files:
+Rename the exported module classes by replacing the `Widget` prefix with `ConferenceSession`, and change the binding names to avoid name collisions, in these files:
- `widget.design.module.ts`
For example, in the `widget.design.module.ts` file, change `WidgetDesignModule`
```typescript export class WidgetDesignModule implements IInjectorModule {
+ public register(injector: IInjector): void {
+ injector.bind("widget", WidgetViewModel);
+ injector.bind("widgetEditor", WidgetEditorViewModel);
``` to ```typescript export class ConferenceSessionDesignModule implements IInjectorModule {
+ public register(injector: IInjector): void {
+ injector.bind("conferenceSession", WidgetViewModel);
+ injector.bind("conferenceSessionEditor", WidgetEditorViewModel);
```
From the design-time perspective, any runtime component is just an HTML tag with
```typescript ... createModel: async () => {
- var model = new ConferenceSessionModel();
+ var model = new WidgetModel();
model.sessionNumber = "107"; return model; }
api-management Graphql Validation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-validation-policies.md
documentationcenter: ''
Previously updated : 10/21/2021 Last updated : 01/21/2022 # API Management policy to validate and authorize GraphQL requests (preview)
-This article provides a reference for a new API Management policy to validate and authorize requests to a [GraphQL API](graphql-api.md) imported to API Management.
+This article provides a reference for an API Management policy to validate and authorize requests to a [GraphQL API](graphql-api.md) imported to API Management.
For more information on adding and configuring policies, see [Policies in API Management](./api-management-policies.md).
Because GraphQL queries use a flattened schema:
* Interfaces * The schema element
-**Authorization elements**
-You can use multiple authorization elements. The most specific path is used to select the appropriate authorization rule for each leaf node in the query.
-* Each authorization can optionally provide a different action.
-* `if` clauses allow the admin to specify conditional actions.
+**Authorize element**
+Configure the `authorize` element to set an appropriate authorization rule for one or more paths.
+* Each rule can optionally provide a different action.
+* Use policy expressions to specify conditional actions.
**Introspection system** The policy for path=`/__*` is the [introspection](https://graphql.org/learn/introspection/) system. You can use it to reject introspection requests (`__schema`, `__type`, etc.).
The policy for path=`/__*` is the [introspection](https://graphql.org/learn/intr
### Policy statement ```xml
-<validate-graphql-request error-variable-name="variable name" max-size="size in bytes" max-depth="query depth">
- <authorize-path="query path, for example: /Query/list Users or /__*" action="allow|remove|reject" />
- <if condition="policy expression" action="allow|remove|reject" />
+<validate-graphql-request error-variable-name="variable name" max-size="size in bytes" max-depth="query depth">
+ <authorize>
+ <rule path="query path, for example: '/listUsers' or '/__*'" action="string or policy expression that evaluates to 'allow|remove|reject|ignore'" />
+ </authorize>
</validate-graphql-request> ```
-### Example
+### Example: Query validation
-In the following example, we validate a GraphQL query and reject:
-* Requests larger than 100 kb or with query depth greater than 4.
-* Access to the introspection system and the `list Users` query.
+This example applies the following validation and authorization rules to a GraphQL query:
+* Requests larger than 100 kb or with query depth greater than 4 are rejected.
+* Requests to the introspection system are rejected.
+* The `/Missions/name` field is removed from requests containing more than two headers.
```xml <validate-graphql-request error-variable-name="name" max-size="102400" max-depth="4">
- <authorize path="/" action="allow" />
- <authorize path="/__*" action="reject" />
- <authorize path="/Query/list Users" action="reject" />
+ <authorize>
+ <rule path="/__*" action="reject" />
+ <rule path="/Missions/name" action="@(context.Request.Headers.Count > 2 ? "remove" : "allow")" />
+ </authorize>
+</validate-graphql-request>
+```
+
+### Example: Mutation validation
+
+This example applies the following validation and authorization rules to a GraphQL mutation:
+* Requests larger than 100 kb or with query depth greater than 4 are rejected.
+* Requests to mutate the `deleteUser` field are rejected, except when the request comes from IP address `198.51.100.1`.
+
+```xml
+<validate-graphql-request error-variable-name="name" max-size="102400" max-depth="4">
+ <authorize>
+        <rule path="/Mutation/deleteUser" action="@(context.Request.IpAddress != "198.51.100.1" ? "reject" : "allow")" />
+ </authorize>
</validate-graphql-request> ```
In the following example, we validate a GraphQL query and reject:
| Name | Description | Required | | | | -- | | `validate-graphql-request` | Root element. | Yes |
-| `authorize` | Add one or more of these elements to provides field-level authorization with both request- and field-level errors. | Yes |
-| `if` | Add one or more of these elements for conditional changes to the action for a field-level authorization. | No |
+| `authorize` | Add this element to provide field-level authorization with both request- and field-level errors. | No |
+| `rule` | Add one or more of these elements to authorize specific query paths. Each rule can optionally specify a different [action](#request-actions). | No |
### Attributes
In the following example, we validate a GraphQL query and reject:
| `error-variable-name` | Name of the variable in `context.Variables` to log validation errors to. | No | N/A | | `max-size` | Maximum size of the request payload in bytes. Maximum allowed value: 102,400 bytes (100 KB). (Contact [support](https://azure.microsoft.com/support/options/) if you need to increase this limit.) | Yes | N/A | | `max-depth` | An integer. Maximum query depth. | No | 6 |
-| `path` | Query path to execute authorization validation on. | Yes | N/A |
-| `action` | [Action](#request-actions) to perform for the matching field. May be changed if a matching condition is specified. | Yes | N/A |
-| `condition` | Boolean value that determines if the [policy expression](api-management-policy-expressions.md) matches. The first matching condition is used. | No | N/A |
+| `path` | Path to execute authorization validation on. It must follow the pattern: `/type/field`. | Yes | N/A |
+| `action` | [Action](#request-actions) to perform if the rule applies. May be specified conditionally using a policy expression. | No | allow |
### Request actions
Available actions are described in the following table.
|Action |Description | |||
-|`reject` | A request error happens, and the request is not sent to the back end. |
+|`reject` | A request error happens, and the request isn't sent to the back end. Additional rules, if configured, aren't applied. |
|`remove` | A field error happens, and the field is removed from the request. | |`allow` | The field is passed to the back end. |
+|`ignore` | The rule isn't valid for this case, and the next rule is applied. |
### Usage
api-management Validation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validation-policies.md
documentationcenter: ''
Previously updated : 10/21/2021 Last updated : 02/22/2022 - # API Management policies to validate requests and responses This article provides a reference for the following API Management policies. For information on adding and configuring policies, see [Policies in API Management](./api-management-policies.md).
-Use validation policies to validate API requests and responses against an OpenAPI schema and protect from vulnerabilities such as injection of headers or payload. While not a replacement for a Web Application Firewall, validation policies provide flexibility to respond to another class of threats that are not covered by security products that rely on static, predefined rules.
+Use validation policies to validate REST or SOAP API requests and responses against schemas defined in the API definition or supplementary JSON or XML schemas. Validation policies protect from vulnerabilities such as injection of headers or payload or leaking sensitive data.
+
+While not a replacement for a Web Application Firewall, validation policies provide flexibility to respond to an additional class of threats that aren't covered by security products that rely on static, predefined rules.
## Validation policies -- [Validate content](#validate-content) - Validates the size or JSON schema of a request or response body against the API schema.
+- [Validate content](#validate-content) - Validates the size or content of a request or response body against one or more API schemas. The supported schema formats are JSON and XML.
- [Validate parameters](#validate-parameters) - Validates the request header, query, or path parameters against the API schema. - [Validate headers](#validate-headers) - Validates the response headers against the API schema. - [Validate status code](#validate-status-code) - Validates the HTTP status codes in responses against the API schema.
Use validation policies to validate API requests and responses against an OpenAP
## Actions
-Each validation policy includes an attribute that specifies an action, which API Management takes when validating an entity in an API request or response against the API schema. An action may be specified for elements that are represented in the API schema and, depending on the policy, for elements that aren't represented in the API schema. An action specified in a policy's child element overrides an action specified for its parent.
+Each validation policy includes an attribute that specifies an action, which API Management takes when validating an entity in an API request or response against the API schema.
+
+* An action may be specified for elements that are represented in the API schema and, depending on the policy, for elements that aren't represented in the API schema.
+
+* An action specified in a policy's child element overrides an action specified for its parent.
Available actions:
We recommend performing load tests with your expected production workloads to as
## Validate content
-The `validate-content` policy validates the size or JSON schema of a request or response body against the API schema. Formats other than JSON aren't supported.
+The `validate-content` policy validates the size or content of a request or response body against one or more [supported schemas](#schemas-for-content-validation).
+
+The following table shows the schema formats and request or response content types that the policy supports. Content type values are case insensitive.
+
+| Format | Content types |
+|||
+|JSON | Examples: `application/json`<br/>`application/hal+json` |
+|XML | Example: `application/xml` |
+|SOAP | Allowed values: `application/soap+xml` for SOAP 1.2 APIs<br/>`text/xml` for SOAP 1.1 APIs|
### Policy statement ```xml
-<validate-content unspecified-content-type-action="ignore|prevent|detect" max-size="size in bytes" size-exceeded-action="ignore|prevent|detect" errors-variable-name="variable name">
- <content type="content type string, for example: application/json, application/hal+json" validate-as="json" action="ignore|prevent|detect" />
+<validate-content unspecified-content-type-action="ignore|prevent|detect" max-size="size in bytes" size-exceeded-action="ignore|prevent|detect" errors-variable-name="variable name">
+ <content-type-map any-content-type-value="content type string" missing-content-type-value="content type string">
+ <type from|when="content type string" to="content type string" />
+ </content-type-map>
+ <content type="content type string" validate-as="json|xml|soap" schema-id="schema id" schema-ref="#/local/reference/path" action="ignore|prevent|detect" />
</validate-content> ```
-### Example
+### Examples
-In the following example, the JSON payload in requests and responses is validated in detection mode. Messages with payloads larger than 100 KB are blocked.
+#### JSON schema validation
+
+In the following example, API Management interprets requests with an empty content type header or requests with a content type header `application/hal+json` as requests with the content type `application/json`. Then, API Management performs the validation in detection mode against a schema defined for the `application/json` content type in the API definition. Messages with payloads larger than 100 KB are blocked.
```xml <validate-content unspecified-content-type-action="prevent" max-size="102400" size-exceeded-action="prevent" errors-variable-name="requestBodyValidation">
+ <content-type-map missing-content-type-value="application/json">
+ <type from="application/hal+json" to="application/json" />
+ </content-type-map>
<content type="application/json" validate-as="json" action="detect" />
- <content type="application/hal+json" validate-as="json" action="detect" />
</validate-content>
+```
+#### SOAP schema validation
+
+In the following example, API Management interprets any request as a request with the content type `application/soap+xml` (the content type that's used by SOAP 1.2 APIs), regardless of the incoming content type. The request could arrive with an empty content type header, content type header of `text/xml` (used by SOAP 1.1 APIs), or another content type header. Then, API Management extracts the XML payload from the SOAP envelope and performs the validation in prevention mode against the schema named "myschema". Messages with payloads larger than 100 KB are blocked.
+
+```xml
+<validate-content unspecified-content-type-action="prevent" max-size="102400" size-exceeded-action="prevent" errors-variable-name="requestBodyValidation">
+ <content-type-map any-content-type-value="application/soap+xml" />
+ <content type="application/soap+xml" validate-as="soap" schema-id="myschema" action="prevent" />
+</validate-content>
``` ### Elements
In the following example, the JSON payload in requests and responses is validate
| Name | Description | Required | | | | -- | | `validate-content` | Root element. | Yes |
-| `content` | Add one or more of these elements to validate the content type in the request or response, and perform the specified action. | No |
+| `content-type-map` | Add this element to map the content type of the incoming request or response to another content type that is used to trigger validation. | No |
+| `content` | Add one or more of these elements to validate the content type in the request or response, or the mapped content type, and perform the specified action. | No |
### Attributes | Name | Description | Required | Default | | -- | - | -- | - |
-| `unspecified-content-type-action` | [Action](#actions) to perform for requests or responses with a content type that isnΓÇÖt specified in the API schema. | Yes | N/A |
-| `max-size` | Maximum length of the body of the request or response in bytes, checked against the `Content-Length` header. If the request body or response body is compressed, this value is the decompressed length. Maximum allowed value: 102,400 bytes (100 KB). (Contact [support](https://azure.microsoft.com/support/options/) if you need to increase this limit.) | Yes | N/A |
-| `size-exceeded-action` | [Action](#actions) to perform for requests or responses whose body exceeds the size specified in `max-size`. | Yes | N/A |
-| `errors-variable-name` | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
-| `type` | Content type to execute body validation for, checked against the `Content-Type` header. This value is case insensitive. If empty, it applies to every content type specified in the API schema. | No | N/A |
-| `validate-as` | Validation engine to use for validation of the body of a request or response with a matching content type. Currently, the only supported value is "json". | Yes | N/A |
-| `action` | [Action](#actions) to perform for requests or responses whose body doesn't match the specified content type. | Yes | N/A |
+| unspecified-content-type-action | [Action](#actions) to perform for requests or responses with a content type that isn't specified in the API schema. | Yes | N/A |
+| max-size | Maximum length of the body of the request or response in bytes, checked against the `Content-Length` header. If the request body or response body is compressed, this value is the decompressed length. Maximum allowed value: 102,400 bytes (100 KB). (Contact [support](https://azure.microsoft.com/support/options/) if you need to increase this limit.) | Yes | N/A |
+| size-exceeded-action | [Action](#actions) to perform for requests or responses whose body exceeds the size specified in `max-size`. | Yes | N/A |
+| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
+| any-content-type-value | Content type used for validation of the body of a request or response, regardless of the incoming content type. | No | N/A |
+| missing-content-type-value | Content type used for validation of the body of a request or response, when the incoming content type is missing or empty. | No | N/A |
+| content-type-map \ type | Add one or more of these elements to map an incoming content type to a content type used for validation of the body of a request or response. Use `from` to specify a known incoming content type, or use `when` with a policy expression to specify any incoming content type that matches a condition. Overrides the mapping in `any-content-type-value` and `missing-content-type-value`, if specified. | No | N/A |
+| content \ type | Content type to execute body validation for, checked against the content type header or the value mapped in `content-type-map`, if specified. If empty, it applies to every content type specified in the API schema.<br/><br/>To validate SOAP requests and responses (`validate-as` attribute set to "soap"), set `type` to `application/soap+xml` for SOAP 1.2 APIs or `text/xml` for SOAP 1.1 APIs. | No | N/A |
+| validate-as | Validation engine to use for validation of the body of a request or response with a matching `type`. Supported values: "json", "xml", "soap".<br/><br/>When "soap" is specified, the XML from the request or response is extracted from the SOAP envelope and validated against an XML schema. | Yes | N/A |
+| schema-id | Name of an existing schema that was [added](#schemas-for-content-validation) to the API Management instance for content validation. If not specified, the default schema from the API definition is used. | No | N/A |
+| schema-ref| For a JSON schema specified in `schema-id`, optional reference to a valid local reference path in the JSON document. Example: `#/components/schemas/address`. The attribute should return a JSON object that API Management handles as a valid JSON schema.<br/><br/> For an XML schema, `schema-ref` isn't supported, and any top-level schema element can be used as the root of the XML request or response payload. The validation checks that all elements starting from the XML request or response payload root adhere to the provided XML schema. | No | N/A |
+| action | [Action](#actions) to perform for requests or responses whose body doesn't match the specified content type. | Yes | N/A |
+
+### Schemas for content validation
+
+By default, validation of request or response content uses JSON or XML schemas from the API definition. These schemas can be specified manually or generated automatically when importing an API from an OpenAPI or WSDL specification into API Management.
+
+Using the `validate-content` policy, you may optionally validate against one or more JSON or XML schemas that you've added to your API Management instance and that aren't part of the API definition. A schema that you add to API Management can be reused across many APIs.
+
+To add a schema to your API Management instance using the Azure portal:
+
+1. In the [portal](https://portal.azure.com), navigate to your API Management instance.
+1. In the **APIs** section of the left-hand menu, select **Schemas** > **+ Add**.
+1. In the **Create schema** window, do the following:
+ 1. Enter a **Name** for the schema.
+ 1. In **Schema type**, select **JSON** or **XML**.
+ 1. Enter a **Description**.
+ 1. In **Create method**, do one of the following:
+ * Select **Create new** and enter or paste the schema.
+ * Select **Import from file** or **Import from URL** and enter a schema location.
+ > [!NOTE]
+ > To import a schema from URL, the schema needs to be accessible over the internet from the browser.
+ 1. Select **Save**.
+
+ :::image type="content" source="media/validation-policies/add-schema.png" alt-text="Create schema":::
+
+After the schema is created, it appears in the list on the **Schemas** page. Select a schema to view its properties or to edit in a schema editor.
+
+> [!NOTE]
+> * A schema may cross-reference another schema that is added to the API Management instance.
+> * Open-source tools to resolve WSDL and XSD schema references and to batch-import generated schemas to API Management are available on [GitHub](https://github.com/Azure-Samples/api-management-schema-import).
+ ### Usage
In this example, all query and path parameters are validated in the prevention m
| Name | Description | Required | Default | | -- | - | -- | - | | `specified-parameter-action` | [Action](#actions) to perform for request parameters specified in the API schema. <br/><br/> When provided in a `headers`, `query`, or `path` element, the value overrides the value of `specified-parameter-action` in the `validate-parameters` element. | Yes | N/A |
-| `unspecified-parameter-action` | [Action](#actions) to perform for request parameters that are not specified in the API schema. <br/><br/>When provided in a `headers`or `query` element, the value overrides the value of `unspecified-parameter-action` in the `validate-parameters` element. | Yes | N/A |
+| `unspecified-parameter-action` | [Action](#actions) to perform for request parameters that aren't specified in the API schema. <br/><br/>When provided in a `headers` or `query` element, the value overrides the value of `unspecified-parameter-action` in the `validate-parameters` element. | Yes | N/A |
| `errors-variable-name` | Name of the variable in `context.Variables` to log validation errors to. | No | N/A | | `name` | Name of the parameter to override validation action for. This value is case insensitive. | Yes | N/A | | `action` | [Action](#actions) to perform for the parameter with the matching name. If the parameter is specified in the API schema, this value overrides the higher-level `specified-parameter-action` configuration. If the parameter isn't specified in the API schema, this value overrides the higher-level `unspecified-parameter-action` configuration.| Yes | N/A |
The `validate-headers` policy validates the response headers against the API sch
| Name | Description | Required | Default | | -- | - | -- | - | | `specified-header-action` | [Action](#actions) to perform for response headers specified in the API schema. | Yes | N/A |
-| `unspecified-header-action` | [Action](#actions) to perform for response headers that are not specified in the API schema. | Yes | N/A |
+| `unspecified-header-action` | [Action](#actions) to perform for response headers that aren't specified in the API schema. | Yes | N/A |
| `errors-variable-name` | Name of the variable in `context.Variables` to log validation errors to. | No | N/A | | `name` | Name of the header to override validation action for. This value is case insensitive. | Yes | N/A | | `action` | [Action](#actions) to perform for header with the matching name. If the header is specified in the API schema, this value overrides value of `specified-header-action` in the `validate-headers` element. Otherwise, it overrides value of `unspecified-header-action` in the validate-headers element. | Yes | N/A |
The `validate-status-code` policy validates the HTTP status codes in responses a
| Name | Description | Required | Default | | -- | - | -- | - |
-| `unspecified-status-code-action` | [Action](#actions) to perform for HTTP status codes in responses that are not specified in the API schema. | Yes | N/A |
+| `unspecified-status-code-action` | [Action](#actions) to perform for HTTP status codes in responses that aren't specified in the API schema. | Yes | N/A |
| `errors-variable-name` | Name of the variable in `context.Variables` to log validation errors to. | No | N/A | | `code` | HTTP status code to override validation action for. | Yes | N/A |
-| `action` | [Action](#actions) to perform for the matching status code, which is not specified in the API schema. If the status code is specified in the API schema, this override does not take effect. | Yes | N/A |
+| `action` | [Action](#actions) to perform for the matching status code, which isn't specified in the API schema. If the status code is specified in the API schema, this override doesn't take effect. | Yes | N/A |
### Usage
This policy can be used in the following policy [sections](./api-management-howt
## Validation errors
+
+API Management generates validation errors in the following format:
+
+```
+{
+ "Name": string,
+ "Type": string,
+ "ValidationRule": string,
+ "Details": string,
+ "Action": string
+}
+
+```
+ The following table lists all possible errors of the validation policies. * **Details**: Can be used to investigate errors. Not meant to be shared publicly.
api-management Websocket Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/websocket-api.md
Below are the current restrictions of WebSocket support in API Management:
* WebSocket APIs are not supported yet in the Consumption tier. * WebSocket APIs are not supported yet in the [self-hosted gateway](./self-hosted-gateway-overview.md). * Azure CLI, PowerShell, and SDK currently do not support management operations of WebSocket APIs.
+* There's a limit of 200 active connections per unit.
### Unsupported policies
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md
Title: App Service Environment overview
-description: Overview on the App Service Environment
+description: This article discusses the Azure App Service Environment feature of Azure App Service.
Last updated 01/26/2022 + # App Service Environment overview
-The Azure App Service Environment is an Azure App Service feature that provides a fully isolated and dedicated environment for securely running App Service apps at high scale. This capability can host your:
+An App Service Environment is an Azure App Service feature that provides a fully isolated and dedicated environment for running App Service apps securely at high scale.
+
+> [!NOTE]
+> This article covers the features, benefits, and use cases of App Service Environment v3, which is used with App Service Isolated v2 plans.
+>
+
+An App Service Environment can host your:
- Windows web apps - Linux web apps - Docker containers (Windows and Linux) - Functions-- Logic Apps (Standard)-
-> [!NOTE]
-> This article is about the App Service Environment v3 which is used with Isolated v2 App Service plans
->
+- Logic apps (Standard)
App Service Environments are appropriate for application workloads that require: - High scale. - Isolation and secure network access. - High memory utilization.-- High requests per second (RPS). You can make multiple App Service Environments in a single Azure region or across multiple Azure regions. This flexibility makes an App Service Environment ideal for horizontally scaling stateless applications with a high RPS requirement.
+- High requests per second (RPS). You can create multiple App Service Environments in a single Azure region or across multiple Azure regions. This flexibility makes an App Service Environment ideal for horizontally scaling stateless applications with a high RPS requirement.
-App Service Environment host applications from only one customer and do so in one of their virtual networks. Customers have fine-grained control over inbound and outbound application network traffic. Applications can establish high-speed secure connections over VPNs to on-premises corporate resources.
+An App Service Environment can host applications from only one customer, and it does so on one of the customer's virtual networks. Customers have fine-grained control over inbound and outbound application network traffic. Applications can establish high-speed secure connections over VPNs to on-premises corporate resources.
## Usage scenarios
-The App Service Environment has many use cases including:
+App Service Environments have many use cases, including:
-- Internal line-of-business applications-- Applications that need more than 30 App Service plan instances-- Single tenant system to satisfy internal compliance or security requirements-- Network isolated application hosting-- Multi-tier applications
+- Internal line-of-business applications.
+- Applications that need more than 30 App Service plan instances.
+- Single-tenant systems to satisfy internal compliance or security requirements.
+- Network-isolated application hosting.
+- Multi-tier applications.
-There are many networking features that enable apps in the multi-tenant App Service to reach network isolated resources or become network isolated themselves. These features are enabled at the application level. With an App Service Environment, there's no added configuration required for the apps to be in the virtual network. The apps are deployed into a network-isolated environment that is already in a virtual network. If you really need a complete isolation story, you can also get your App Service Environment deployed onto dedicated hardware.
+There are many networking features that enable apps in a multi-tenant App Service to reach network-isolated resources or become network-isolated themselves. These features are enabled at the application level. With an App Service Environment, no added configuration is required for the apps to be on a virtual network. The apps are deployed into a network-isolated environment that's already on a virtual network. If you really need a complete isolation story, you can also deploy your App Service Environment onto dedicated hardware.
## Dedicated environment
-The App Service Environment is a single tenant deployment of the Azure App Service that runs in your virtual network.
+An App Service Environment is a single-tenant deployment of Azure App Service that runs on your virtual network.
-Applications are hosted in App Service plans, which are created in an App Service Environment. The App Service plan is essentially a provisioning profile for an application host. As you scale your App Service plan out, you create more application hosts with all of the apps in that App Service plan on each host. A single App Service Environment v3 can have up to 200 total App Service plan instances across all of the App Service plans combined. A single Isolated v2 App Service plan can have up to 100 instances by itself.
+Applications are hosted in App Service plans, which are created in an App Service Environment. An App Service plan is essentially a provisioning profile for an application host. As you scale out your App Service plan, you create more application hosts with all the apps in that App Service plan on each host. A single App Service Environment v3 can have up to 200 total App Service plan instances across all the App Service plans combined. A single App Service Isolated v2 (Iv2) plan can have up to 100 instances by itself.
-When you're deploying on dedicated hardware (hosts), you're limited in scaling across all App Service plans to the amount of cores in this type of environment. An App Service Environment deployed on dedicated hosts has 132 vCores available. I1v2 uses 2 vCores, I2v2 uses 4 vCores, and I3v2 uses 8 vCores per instance.
+When you're deploying onto dedicated hardware (hosts), you're limited in scaling across all App Service plans to the number of cores in this type of environment. An App Service Environment that's deployed on dedicated hosts has 132 vCores available. I1v2 uses two vCores, I2v2 uses four vCores, and I3v2 uses eight vCores per instance.
## Virtual network support
-The App Service Environment feature is a deployment of the Azure App Service into a single subnet in a customer's virtual network. When you deploy an app into an App Service Environment, the app will be exposed on the inbound address assigned to the App Service Environment. If your App Service Environment is deployed with an internal virtual IP (VIP), then the inbound address for all of the apps will be an address in the App Service Environment subnet. If your App Service Environment is deployed with an external VIP, then the inbound address will be an internet-addressable address and your apps will be in public DNS.
+The App Service Environment feature is a deployment of Azure App Service into a single subnet on a virtual network. When you deploy an app into an App Service Environment, the app is exposed on the inbound address that's assigned to the App Service Environment. If your App Service Environment is deployed with an internal virtual IP (VIP) address, the inbound address for all the apps will be an address in the App Service Environment subnet. If your App Service Environment is deployed with an external VIP address, the inbound address will be an internet-addressable address, and your apps will be in a public Domain Name System.
+
+The number of addresses that are used by an App Service Environment v3 in its subnet will vary, depending on the number of instances and the amount of traffic. Some infrastructure roles are automatically scaled, depending on the number of App Service plans and the load. The recommended size for your App Service Environment v3 subnet is a `/24` Classless Inter-Domain Routing (CIDR) block with 256 addresses in it, because that size can host an App Service Environment v3 that's scaled out to its limit.
-The number of addresses used by an App Service Environment v3 in its subnet will vary based on how many instances you have along with how much traffic. There are infrastructure roles that are automatically scaled depending on the number of App Service plans and the load. The recommended size for your App Service Environment v3 subnet is a `/24` CIDR block with 256 addresses in it as that can host an App Service Environment v3 scaled out to its limit.
+The apps in an App Service Environment don't need any features enabled to access resources on the same virtual network that the App Service Environment is in. If the App Service Environment virtual network is connected to another network, the apps in the App Service Environment can access resources in those extended networks. Traffic can be blocked by user configuration on the network.
-The apps in an App Service Environment don't need any features enabled to access resources in the same virtual network that the App Service Environment is in. If the App Service Environment virtual network is connected to another network, then the apps in the App Service Environment can access resources in those extended networks. Traffic can be blocked by user configuration on the network.
+The multi-tenant version of Azure App Service contains numerous features to enable your apps to connect to your various networks. With those networking features, your apps can act as though they're deployed on a virtual network. The apps in an App Service Environment v3 don't need any added configuration to be on the virtual network.
-The multi-tenant version of Azure App Service contains numerous features to enable your apps to connect to your various networks. Those networking features enable your apps to act as if they were deployed in a virtual network. The apps in an App Service Environment v3 don't need any configuration to be in the virtual network. A benefit of using an App Service Environment over the multi-tenant service is that any network access controls to the App Service Environment hosted apps is external to the application configuration. With the apps in the multi-tenant service, you must enable the features on an app-by-app basis and use Role-based access control or policy to prevent any configuration changes.
+A benefit of using an App Service Environment instead of a multi-tenant service is that any network access controls for the App Service Environment-hosted apps are external to the application configuration. With the apps in the multi-tenant service, you must enable the features on an app-by-app basis and use role-based access control or a policy to prevent any configuration changes.
## Feature differences
-Compared to earlier versions of the App Service Environment, there are some differences with App Service Environment v3:
+App Service Environment v3 differs from earlier versions in the following ways:
-- There are no networking dependencies in the customer virtual network. You can secure all inbound and outbound as desired. Outbound traffic can be routed also as desired. -- You can deploy it enabled for zone redundancy. Zone redundancy can only be set during creation and only in regions where all App Service Environment v3 dependencies are zone redundant. -- You can deploy it on a dedicated host group. Host group deployments aren't zone redundant. -- Scaling is much faster than with App Service Environment v2. While scaling still isn't immediate as in the multi-tenant service, it's a lot faster.-- Front end scaling adjustments are no longer required. The App Service Environment v3 front ends automatically scale to meet needs and are deployed on better hosts.-- Scaling no longer blocks other scale operations within the App Service Environment v3 instance. Only one scale operation can be in effect for a combination of OS and size. For example, while your Windows small App Service plan was scaling, you could kick off a scale operation to run at the same time on a Windows medium or anything else other than Windows small. -- Apps in an internal VIP App Service Environment v3 can be reached across global peering. Access across global peering was not possible with previous versions.
+- There are no networking dependencies on the customer's virtual network. You can secure all inbound and outbound traffic and route outbound traffic as you want.
+- You can deploy an App Service Environment v3 that's enabled for zone redundancy. You set zone redundancy only during creation and only in regions where all App Service Environment v3 dependencies are zone redundant.
+- You can deploy an App Service Environment v3 on a dedicated host group. Host group deployments aren't zone redundant.
+- Scaling is much faster than with an App Service Environment v2. Although scaling still isn't immediate, as in the multi-tenant service, it's a lot faster.
+- Front-end scaling adjustments are no longer required. App Service Environment v3 front ends automatically scale to meet your needs and are deployed on better hosts.
+- Scaling no longer blocks other scale operations within the App Service Environment v3. Only one scale operation can be in effect for a combination of OS and size. For example, while your Windows small App Service plan is scaling, you could kick off a scale operation to run at the same time on a Windows medium or anything else other than Windows small.
+- You can reach apps in an internal-VIP App Service Environment v3 across global peering. Such access wasn't possible in earlier versions.
-There are a few features that are not available in App Service Environment v3 that were available in earlier versions of the App Service Environment. In App Service Environment v3, you can't:
+A few features that were available in earlier versions of App Service Environment aren't available in App Service Environment v3. For example, you can no longer do the following:
-- send SMTP traffic. You can still have email triggered alerts but your app can't send outbound traffic on port 25-- deploy your apps with FTP-- use remote debug with your apps-- monitor your traffic with Network Watcher or NSG Flow-- configure a IP-based TLS/SSL binding with your apps-- configure custom domain suffix-- backup/restore operation on a storage account behind a firewall
+- Send SMTP traffic. You can still have email-triggered alerts, but your app can't send outbound traffic on port 25.
+- Deploy your apps by using FTP.
+- Use remote debugging with your apps.
+- Monitor your traffic with Network Watcher or network security group (NSG) flow logs.
+- Configure an IP-based Transport Layer Security (TLS) or Secure Sockets Layer (SSL) binding with your apps.
+- Configure a custom domain suffix.
+- Perform a backup and restore operation on a storage account behind a firewall.
## Pricing
-With App Service Environment v3, there is a different pricing model depending on the type of App Service Environment deployment you have. The three pricing models are:
+With App Service Environment v3, the pricing model varies depending on the type of App Service Environment deployment you have. The three pricing models are:
-- **App Service Environment v3**: If App Service Environment is empty, there is a charge as if you had one instance of Windows I1v2. The one instance charge isn't an additive charge but is only applied if the App Service Environment is empty.-- **Zone redundant App Service Environment v3**: There's a minimum charge of nine instances. There's no added charge for availability zone support if you have nine or more App Service plan instances. If you've fewer than nine instances (of any size) across App Service plans in the zone redundant App Service Environment, the difference between nine and the running instance count is charged as additional Windows I1v2 instances.-- **Dedicated host App Service Environment v3**: With a dedicated host deployment, you're charged for two dedicated hosts per our pricing at App Service Environment v3 creation then a small percentage of the Isolated v2 rate per core charge as you scale.
+- **App Service Environment v3**: If the App Service Environment is empty, there's a charge as though you have one instance of Windows I1v2. The one instance charge isn't an additive charge but is applied only if the App Service Environment is empty.
+- **Zone redundant App Service Environment v3**: There's a minimum charge of nine instances. There's no added charge for availability zone support if you have nine or more App Service plan instances. If you have fewer than nine instances (of any size) across App Service plans in the zone redundant App Service Environment, the difference between nine and the running instance count is charged as additional Windows I1v2 instances.
+- **Dedicated host App Service Environment v3**: With a dedicated host deployment, you're charged for two dedicated hosts per our pricing when you create the App Service Environment v3 and then, as you scale, you're charged a small percentage of the Isolated v2 rate per core.
-Reserved Instance pricing for Isolated v2 is available and is described in [How reservation discounts apply to Azure App Service](../../cost-management-billing/reservations/reservation-discount-app-service.md). The pricing, along with reserved instance pricing, is available at [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/windows/) under **Isolated v2 plan**.
+Reserved Instance pricing for Isolated v2 is available and is described in [How reservation discounts apply to Azure App Service](../../cost-management-billing/reservations/reservation-discount-app-service.md). The pricing, along with Reserved Instance pricing, is available at [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/windows/) under the Isolated v2 plan.
## Regions
-The App Service Environment v3 is available in the following regions.
+App Service Environment v3 is available in the following regions:
| Normal and dedicated host regions | Availability zone regions | |||
The App Service Environment v3 is available in the following regions.
## App Service Environment v2
-App Service Environment has three versions: App Service Environment v1, App Service Environment v2, and App Service Environment v3. The preceding information was based on App Service Environment v3. To learn more about App Service Environment v2, see [App Service Environment v2 introduction](./intro.md).
+App Service Environment has three versions: App Service Environment v1, App Service Environment v2, and App Service Environment v3. The information in this article is based on App Service Environment v3. To learn more about App Service Environment v2, see [App Service Environment v2 introduction](./intro.md).
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md
Learn [how to configure application routing](./configure-vnet-integration-routin
We recommend that you use the **Route All** configuration setting to enable routing of all traffic. Using the configuration setting allows you to audit the behavior with [a built-in policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F33228571-70a4-4fa1-8ca1-26d0aba8d6ef). The existing WEBSITE_VNET_ROUTE_ALL app setting can still be used, and you can enable all traffic routing with either setting.
+#### Configuration routing
+
+When you're using virtual network integration, you can configure how parts of the configuration traffic are managed. By default, this configuration traffic goes directly to the internet unless you actively configure it to be routed through the virtual network integration.
+
+##### Content storage
+
+Bringing your own storage for content is often used in Functions, where [content storage](./../azure-functions/configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network) is configured as part of the Functions app.
+
+To route content storage traffic through the virtual network integration, add an app setting named `WEBSITE_CONTENTOVERVNET` with the value `1`. In addition to adding the app setting, ensure that any firewall or network security group configured on traffic from the subnet allows traffic to ports 443 and 445.
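+
+For example, a minimal sketch of adding the setting with the Azure CLI (the app and resource group names are placeholders):
+
+```azurecli-interactive
+az functionapp config appsettings set \
+  --resource-group <group-name> \
+  --name <app-name> \
+  --settings WEBSITE_CONTENTOVERVNET=1
+```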
+
+##### Container image pull
+
+When using custom containers for Linux, you can pull the container image over the virtual network integration. To route the container pull traffic through the virtual network integration, add an app setting named `WEBSITE_PULL_IMAGE_OVER_VNET` with the value `true`.
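+
+As a sketch, the setting can be added with the Azure CLI (names are placeholders):
+
+```azurecli-interactive
+az webapp config appsettings set \
+  --resource-group <group-name> \
+  --name <app-name> \
+  --settings WEBSITE_PULL_IMAGE_OVER_VNET=true
+```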
+ #### Network routing You can use route tables to route outbound traffic from your app to wherever you want. Route tables affect your destination traffic. When **Route All** is disabled in [application routing](#application-routing), only private traffic (RFC1918) is affected by your route tables. Common destinations can include firewall devices or gateways. Routes that are set on your integration subnet won't affect replies to inbound app requests.
app-service Quickstart Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-dotnetcore.md
Title: "Quickstart: Deploy an ASP.NET web app"
description: Learn how to run web apps in Azure App Service by deploying your first ASP.NET app. ms.assetid: b1e6bd58-48d1-4007-9d6c-53fd6db061e3 Previously updated : 11/08/2021- Last updated : 02/08/2022+ zone_pivot_groups: app-service-ide adobe-target: true adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021
In this quickstart, you'll learn how to create and deploy your first ASP.NET web
### [.NET Framework 4.8](#tab/netframework48) - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).-- <a href="https://www.visualstudio.com/downloads" target="_blank">Visual Studio 2022</a> with the **ASP.NET and web development** workload (make sure the optional checkbox **.NET Framework project and item templates** is selected).
+- <a href="https://www.visualstudio.com/downloads" target="_blank">Visual Studio 2022</a> with the **ASP.NET and web development** workload (ensure the optional checkbox **.NET Framework project and item templates** is selected).
--
If you've already installed Visual Studio 2022:
</a> > [!NOTE]
-> Visual Studio Code is cross-platform, however; .NET Framework is not. If you're developing .NET Framework apps with Visual Studio Code, consider using a Windows machine to satisfy the build dependencies.
+> Visual Studio Code is a cross-platform code editor; however, .NET Framework is not. If you're developing .NET Framework apps with Visual Studio Code, consider using a Windows machine to satisfy the build dependencies.
If you've already installed Visual Studio 2022:
### [.NET 6.0](#tab/net60) 1. Open Visual Studio and then select **Create a new project**.
-1. In **Create a new project**, find, and choose **ASP.NET Core Web App**, then select **Next**.
+1. In **Create a new project**, find, and select **ASP.NET Core Web App**, then select **Next**.
1. In **Configure your new project**, name the application _MyFirstAzureWebApp_, and then select **Next**. :::image type="content" source="./media/quickstart-dotnet/configure-webapp-net.png" alt-text="Visual Studio - Configure ASP.NET 6.0 web app." lightbox="media/quickstart-dotnet/configure-webapp-net.png" border="true"::: 1. Select **.NET Core 6.0 (Long-term support)**.
-1. Make sure **Authentication Type** is set to **None**. Select **Create**.
+1. Ensure **Authentication Type** is set to **None**. Select **Create**.
:::image type="content" source="media/quickstart-dotnet/vs-additional-info-net60.png" alt-text="Visual Studio - Additional info when selecting .NET Core 6.0." lightbox="media/quickstart-dotnet/vs-additional-info-net60.png" border="true":::
If you've already installed Visual Studio 2022:
### [.NET Framework 4.8](#tab/netframework48) 1. Open Visual Studio and then select **Create a new project**.
-1. In **Create a new project**, find, and choose **ASP.NET Web Application (.NET Framework)**, then select **Next**.
+1. In **Create a new project**, find, and select **ASP.NET Web Application (.NET Framework)**, then select **Next**.
1. In **Configure your new project**, name the application _MyFirstAzureWebApp_, and then select **Create**. :::image type="content" source="media/quickstart-dotnet/configure-webapp-netframework48.png" alt-text="Visual Studio - Configure ASP.NET Framework 4.8 web app." lightbox="media/quickstart-dotnet/configure-webapp-netframework48.png" border="true"::: 1. Select the **MVC** template.
-1. Make sure **Authentication** is set to **No Authentication**. Select **Create**.
+1. Ensure **Authentication** is set to **No Authentication**. Select **Create**.
:::image type="content" source="media/quickstart-dotnet/vs-mvc-no-auth-netframework48.png" alt-text="Visual Studio - Select the MVC template." lightbox="media/quickstart-dotnet/vs-mvc-no-auth-netframework48.png" border="true":::
If you've already installed Visual Studio 2022:
1. In Visual Studio Code, open the <a href="https://code.visualstudio.com/docs/editor/integrated-terminal" target="_blank">Terminal</a> window by typing `Ctrl` + `` ` ``.
-1. In the terminal in Visual Studio Code, create a new .NET web app using the [`dotnet new webapp`](/dotnet/core/tools/dotnet-new#web-options) command.
+1. In the Visual Studio Code terminal, create a new .NET web app using the [`dotnet new webapp`](/dotnet/core/tools/dotnet-new#web-options) command.
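   For example, a minimal sketch (assuming the .NET 6.0 SDK is installed and `MyFirstAzureWebApp` is the project name used in this quickstart):

   ```console
   # Create the web app project and run it locally to verify it builds.
   dotnet new webapp -n MyFirstAzureWebApp
   cd MyFirstAzureWebApp
   dotnet run
   ```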
### [.NET 6.0](#tab/net60)
Follow these steps to create your App Service resources and publish your project
:::image type="content" source="media/quickstart-dotnet/vs-publish-target-Azure.png" alt-text="Visual Studio - Publish the web app and target Azure." lightbox="media/quickstart-dotnet/vs-publish-target-Azure.png" border="true":::
-1. Choose the **Specific target**, either **Azure App Service (Linux)** or **Azure App Service (Windows)**. Then, click **Next**.
+1. Choose the **Specific target**, either **Azure App Service (Linux)** or **Azure App Service (Windows)**. Then, select **Next**.
> [!IMPORTANT] > When targeting ASP.NET Framework 4.8, use **Azure App Service (Windows)**.
Follow these steps to create your App Service resources and publish your project
:::image type="content" source="media/quickstart-dotnet/web-app-name.png" border="true" alt-text="Visual Studio - Create app resources dialog." lightbox="media/quickstart-dotnet/web-app-name.png" :::
- Once the wizard completes, the Azure resources are created for you and you are ready to publish your ASP.NET Core project.
+ Once the wizard completes, the Azure resources are created for you and you're ready to publish your ASP.NET Core project.
-1. In the **Publish** dialog, make sure your new App Service app is selected in **App Service instance**, then select **Finish**. Visual Studio creates a publish profile for you for the selected App Service app.
-1. In the **Publish** page, select **Publish**. If you see a warning message, click **Continue**.
+1. In the **Publish** dialog, ensure your new App Service app is selected in **App Service instance**, then select **Finish**. Visual Studio creates a publish profile for you for the selected App Service app.
+1. In the **Publish** page, select **Publish**. If you see a warning message, select **Continue**.
Visual Studio builds, packages, and publishes the app to Azure, and then launches the app in the default browser.
Follow these steps to create your App Service resources and publish your project
1. Select **Create a new App Service plan**, provide a name, and select the **F1 Free** [pricing tier][app-service-pricing-tier]. 1. Select **Skip for now** for the Application Insights resource.
-1. In the popup **Always deploy the workspace "MyFirstAzureWebApp" to \<app-name>"**, select **Yes**. This way, as long as you're in the same workspace, Visual Studio Code deploys to the same App Service app each time.
+1. In the popup **Always deploy the workspace "MyFirstAzureWebApp" to \<app-name>"**, select **Yes** so that Visual Studio Code deploys to the same App Service app every time you're in that workspace.
1. When publishing completes, select **Browse Website** in the notification and select **Open** when prompted. ### [.NET 6.0](#tab/net60)
Follow these steps to create your App Service resources and publish your project
az webapp up --sku F1 --name <app-name> --os-type <os> ```
- - If the `az` command isn't recognized, be sure you have the Azure CLI installed as described in [Prerequisites](#prerequisites).
+ - If the `az` command isn't recognized, ensure you have the Azure CLI installed as described in [Prerequisites](#prerequisites).
- Replace `<app-name>` with a name that's unique across all of Azure (*valid characters are `a-z`, `0-9`, and `-`*). A good pattern is to use a combination of your company name and an app identifier. - The `--sku F1` argument creates the web app on the **Free** [pricing tier][app-service-pricing-tier]. Omit this argument to use a faster premium tier, which incurs an hourly cost. - Replace `<os>` with either `linux` or `windows`. You must use `windows` when targeting *ASP.NET Framework 4.8*. - You can optionally include the argument `--location <location-name>` where `<location-name>` is an available Azure region. You can retrieve a list of allowable regions for your Azure account by running the [`az account list-locations`](/cli/azure/appservice#az_appservice_list_locations) command.
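   For example, a filled-in sketch with hypothetical values (the app name must still be globally unique):

   ```console
   az webapp up --sku F1 --name contoso-myfirstwebapp --os-type windows --location westeurope
   ```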
- The command may take a few minutes to complete. While running, it provides messages about creating the resource group, the App Service plan, and hosting app, configuring logging, then performing ZIP deployment. It then outputs a message with the app's URL:
+ The command might take a few minutes to complete. While running, it provides messages about creating the resource group, the App Service plan, and the hosting app; configuring logging; and then performing ZIP deployment. It then shows a message with the app's URL:
```azurecli You can launch the app at http://<app-name>.azurewebsites.net
Follow these steps to create your App Service resources and publish your project
New-AzWebApp -Name <app-name> -Location westeurope ```
- - Replace `<app-name>` with a name that's unique across all of Azure (*valid characters are `a-z`, `0-9`, and `-`*). A good pattern is to use a combination of your company name and an app identifier.
+ - Replace `<app-name>` with a name that's unique across all of Azure (*valid characters are `a-z`, `0-9`, and `-`*). A combination of your company name and an app identifier is a good pattern.
- You can optionally include the parameter `-Location <location-name>` where `<location-name>` is an available Azure region. You can retrieve a list of allowable regions for your Azure account by running the [`Get-AzLocation`](/powershell/module/az.resources/get-azlocation) command.
- The command may take a few minutes to complete. While running, it creates a resource group, an App Service plan, and the App Service resource.
+ The command might take a few minutes to complete. While running, it creates a resource group, an App Service plan, and the App Service resource.
<!-- ### [Deploy to Linux](#tab/linux)
Follow these steps to update and redeploy your web app:
Save your changes. 1. In Visual Studio Code, open the [**Command Palette**](https://code.visualstudio.com/docs/getstarted/userinterface#_command-palette), <kbd>Ctrl</kbd>+<kbd>Shift</kbd>+<kbd>P</kbd>.
-1. Search for and select "Azure App Service: Deploy to Web App". Remember that your told Visual Studio Code to remember the app to deploy your workspace to in an earlier step.
+1. Search for and select "Azure App Service: Deploy to Web App".
1. Select **Deploy** when prompted. 1. When publishing completes, select **Browse Website** in the notification and select **Open** when prompted.
application-gateway Application Gateway Autoscaling Zone Redundant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-autoscaling-zone-redundant.md
Application Gateway and WAF can be configured to scale in two modes: -- **Autoscaling** - With autoscaling enabled, the Application Gateway and WAF v2 SKUs scale up or down based on application traffic requirements. This mode offers better elasticity to your application and eliminates the need to guess the application gateway size or instance count. This mode also allows you to save cost by not requiring the gateway to run at peak-provisioned capacity for expected maximum traffic load. You must specify a minimum and optionally maximum instance count. Minimum capacity ensures that Application Gateway and WAF v2 don't fall below the minimum instance count specified, even without traffic. Each instance is roughly equivalent to 10 more reserved Capacity Units. Zero signifies no reserved capacity and is purely autoscaling in nature. You can also optionally specify a maximum instance count, which ensures that the Application Gateway doesn't scale beyond the specified number of instances. You'll only be billed for the amount of traffic served by the Gateway. The instance counts can range from 0 to 125. The default value for maximum instance count is 20 if not specified.
+- **Autoscaling** - With autoscaling enabled, the Application Gateway and WAF v2 SKUs scale up or down based on application traffic requirements. This mode offers better elasticity to your application and eliminates the need to guess the application gateway size or instance count. This mode also allows you to save cost by not requiring the gateway to run at peak-provisioned capacity for expected maximum traffic load. You must specify a minimum and optionally maximum instance count. Minimum capacity ensures that Application Gateway and WAF v2 don't fall below the minimum instance count specified, even without traffic. Each instance is roughly equivalent to 10 more reserved Capacity Units. Zero signifies no reserved capacity and is purely autoscaling in nature. You can also optionally specify a maximum instance count, which ensures that the Application Gateway doesn't scale beyond the specified number of instances. You'll only be billed for the amount of traffic served by the Gateway. The instance counts can range from 0 to 125. The default value for maximum instance count is 20 if not specified.
- **Manual** - You can also choose Manual mode where the gateway won't autoscale. In this mode, if there's more traffic than what Application Gateway or WAF can handle, it could result in traffic loss. With manual mode, specifying instance count is mandatory. Instance count can vary from 1 to 125 instances. ## Autoscaling and High Availability
Even if you configure autoscaling with zero minimum instances the service will s
However, creating a new instance can take some time (around six or seven minutes). If you don't want to have this downtime, you can configure a minimum instance count of two, ideally with Availability Zone support. This way you'll have at least two instances in your Azure Application Gateway under normal circumstances, so if one of them has a problem, the other can handle the traffic while a new instance is being created. An Azure Application Gateway instance can support around 10 Capacity Units, so depending on how much traffic you typically have, you might want to configure your minimum instance autoscaling setting to a value higher than two.
+For scale-in events, Application Gateway will drain existing connections for 5 minutes on the instance that is subject to removal. After 5 minutes, existing connections will be closed and the instance removed. Any new connections during or after the 5-minute scale-in time will be established to other existing instances on the same gateway.
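As an illustrative sketch only (not part of the original article), the minimum and maximum instance counts can be adjusted with a generic Azure CLI update; this assumes the v2 gateway exposes them as the `autoscaleConfiguration.minCapacity` and `autoscaleConfiguration.maxCapacity` properties, and the resource names are placeholders:

```console
# Set a floor of 2 instances and a ceiling of 10 on an existing v2 gateway.
az network application-gateway update \
  --name <gateway-name> \
  --resource-group <resource-group> \
  --set autoscaleConfiguration.minCapacity=2 autoscaleConfiguration.maxCapacity=10
```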
## Next steps
applied-ai-services Concept Accuracy Confidence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-accuracy-confidence.md
Last updated 02/15/2022-+ # Interpret and improve accuracy and confidence for custom models
applied-ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-composed-models.md
Last updated 02/15/2022-+ recommendations: false
applied-ai-services Concept Custom Neural https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-neural.md
Last updated 02/15/2022-+ recommendations: false
applied-ai-services Concept Custom Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-template.md
Last updated 02/15/2022-+ recommendations: false
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
Previously updated : 02/15/2022- Last updated : 02/23/2022+ recommendations: false # Form Recognizer custom models
Custom models can be one of two types, [**custom template**](concept-custom-temp
### Custom neural model
-The custom neural (custom document) model is a deep learning model type that relies on a base model trained on a large collection of documents. This model is then fine-tuned or adapted to your data when you train the model with a labeled dataset. Custom neural models support structured, semi-structured, and unstructured documents to extract fields. Custom neural models currently support English-language documents. When choosing between the two model types, start with a neural model if it meets your functional needs. See [neural models](concept-custom-neural.md) to learn more about custom document models.
+The custom neural (custom document) model is a deep learning model type that relies on a base model trained on a large collection of documents. This model is then fine-tuned or adapted to your data when you train the model with a labeled dataset. Custom neural models support structured, semi-structured, and unstructured documents to extract fields. Custom neural models currently support English-language documents. When you're choosing between the two model types, start with a neural model if it meets your functional needs. See [neural models](concept-custom-neural.md) to learn more about custom document models.
+
+### Build mode
+
+The build custom model operation has added support for the *template* and *neural* custom models. Previous versions of the REST API and SDKs only supported a single build mode that is now known as the *template* mode.
+
+* Template models only accept documents that have the same basic page structure (a uniform visual appearance) or the same relative positioning of elements within the document.
+
+* Neural models support documents that have the same information, but different page structures. Examples of these documents include United States W2 forms, which share the same information but may vary in appearance depending on the company that created the document. Neural models currently only support English text.
+
+This table provides links to the build mode programming language SDK references and code samples on GitHub:
+
+|Programming language | SDK reference | Code sample |
+||||
+| C#/.NET | [DocumentBuildMode Struct](/dotnet/api/azure.ai.formrecognizer.documentanalysis.documentbuildmode?view=azure-dotnet-preview&preserve-view=true#properties) | [Sample_BuildCustomModelAsync.cs](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/tests/samples/Sample_BuildCustomModelAsync.cs)
+|Java| [DocumentBuildMode Class](/java/api/com.azure.ai.formrecognizer.administration.models.documentbuildmode?view=azure-java-preview&preserve-view=true#fields) | [BuildModel.java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/BuildModel.java)|
+|JavaScript | [DocumentBuildMode type](/javascript/api/@azure/ai-form-recognizer/documentbuildmode?view=azure-node-preview&preserve-view=true)| [buildModel.js](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta/javascript/buildModel.js)|
+|Python | [DocumentBuildMode Enum](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.documentbuildmode?view=azure-python-preview&preserve-view=true#fields)| [sample_build_model.py](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b3/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2-beta/sample_build_model.py)|
## Model features
This table compares the supported data extraction areas:
|Custom template| ✔ | ✔ | ✔ |&#10033; | ✔ | |Custom neural| ✔| ✔ |**n/a**| **n/a** | **n/a** |
-**Table symbols**: Γ£ö ΓÇö supported; &#10033; ΓÇö preview; **n/a** ΓÇö currently unavailable
+**Table symbols**: ✔—supported; ✱—preview; **n/a**—currently unavailable
> [!TIP] > When choosing between the two model types, start with a custom neural model if it meets your functional needs. See [custom neural](concept-custom-neural.md ) to learn more about custom neural models.
applied-ai-services Concept General Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-general-document.md
Last updated 02/15/2022-+ recommendations: false <!-- markdownlint-disable MD033 -->
applied-ai-services Build Custom Model V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/build-custom-model-v3.md
Last updated 02/16/2022-+ # Build your training dataset for a custom model
You now have all the documents in your dataset labeled. If you look at the stora
With your dataset labeled, you're now ready to train your model. Select the train button in the upper-right corner.
-1. On the train model dialog, provide a unique model ID and, optionally, a description.
+1. On the train model dialog, provide a unique model ID and, optionally, a description. The model ID accepts a string data type.
1. For the build mode, select the type of model you want to train. Learn more about the [model types and capabilities](../concept-custom.md).
applied-ai-services Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/managed-identities.md
Previously updated : 01/26/2022 Last updated : 02/22/2022
-# Create and use managed identities with Form Recognizer
+# Managed identities for Form Recognizer
-> [!IMPORTANT]
-> Azure RBAC (Azure role-based access control) assignment is currently in preview and not recommended for production workloads. Certain features may not be supported or have constrained capabilities. Azure RBAC assignments are used to grant permissions for managed identity.
+Managed identities for Azure resources are service principals that create an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources:
-## What is managed identity?
+* You can use managed identities to grant access to any resource that supports Azure AD authentication, including your own applications. Unlike security keys and authentication tokens, managed identities eliminate the need for developers to manage credentials.
-Azure managed identity is a service principal. It creates an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources. You can use a managed identity to grant access to any resource that supports Azure AD authentication. To grant access, assign a role to a managed identity using [Azure RBAC](../../role-based-access-control/overview.md) (Azure role-based access control). There's no added cost to use managed identity in Azure.
+* To grant access to an Azure resource, assign an Azure role to a managed identity using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
-Managed identity supports both privately and publicly accessible Azure blob storage accounts. For storage accounts with public access, you can opt to use a shared access signature (SAS) to grant limited access. In this article, you'll learn to enable a system-assigned managed identity for your Form Recognizer instance.
+* There's no added cost to use managed identities in Azure.
-## Private storage account access
-> [!NOTE]
->
-> Form Recognizer only supports system-assigned managed identities today. User-assigned managed identities is on the roadmap and will be enabled in the near future.
+> [!TIP]
+> Managed identities eliminate the need for you to manage credentials, including Shared Access Signature (SAS) tokens. Managed identities are a safer way to grant access to data without having credentials in your code.
+## Private storage account access
Private Azure storage account access and authentication are supported by [managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md). If you have an Azure storage account protected by a Virtual Network (VNet) or firewall, Form Recognizer can't directly access your storage account data. However, once a managed identity is enabled, Form Recognizer can access your storage account using an assigned managed identity credential.
To get started, you'll need:
* A [**Form Recognizer**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) or [**Cognitive Services**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource in the Azure portal. For detailed steps, _see_ [Create a Cognitive Services resource using the Azure portal](../../cognitive-services/cognitive-services-apis-create-account.md?tabs=multiservice%2cwindows).
-* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) in the same region as your Form Recognizer resource. You'll create containers to store and organize your blob data within your storage account.
+* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) in the same region as your Form Recognizer resource. You'll create containers to store and organize your blob data within your storage account.
* If your storage account is behind a firewall, **you must enable the following configuration**: </br></br>
To get started, you'll need:
## Managed identity assignments
-There are two types of managed identity: **system-assigned** and **user-assigned**. Currently, Form Recognizer is supported by system-assigned managed identity. A system-assigned managed identity is **enabled** directly on a service instance. It isn't enabled by default; you have to go to your resource and update the identity setting. The system-assigned managed identity is tied to your resource throughout its lifecycle. If you delete your resource, the managed identity will be deleted as well.
+There are two types of managed identity: **system-assigned** and **user-assigned**. Currently, Form Recognizer supports system-assigned managed identity:
+
+* A system-assigned managed identity is **enabled** directly on a service instance. It isn't enabled by default; you must go to your resource and update the identity setting.
+
+* The system-assigned managed identity is tied to your resource throughout its lifecycle. If you delete your resource, the managed identity will be deleted as well.
In the following steps, we'll enable a system-assigned managed identity and grant Form Recognizer limited access to your Azure blob storage account.
In the following steps, we'll enable a system-assigned managed identity and gran
1. In the main window, toggle the **System assigned Status** tab to **On**.
+## Grant access to your storage account
+
+You need to grant Form Recognizer access to your storage account before it can create, read, or delete blobs. Now that you've enabled Form Recognizer with a system-assigned managed identity, you can use Azure role-based access control (Azure RBAC) to give Form Recognizer access to Azure storage. The **Storage Blob Data Reader** role gives Form Recognizer (represented by the system-assigned managed identity) read and list access to the blob container and data.
+ 1. Under **Permissions** select **Azure role assignments**: :::image type="content" source="media/managed-identities/enable-system-assigned-managed-identity-portal.png" alt-text="Screenshot: enable system-assigned managed identity in Azure portal.":::
In the following steps, we'll enable a system-assigned managed identity and gran
> > If you're unable to assign a role in the Azure portal because the Add > Add role assignment option is disabled or you get the permissions error, "you do not have permissions to add role assignment at this scope", check that you're currently signed in as a user with an assigned role that has Microsoft.Authorization/roleAssignments/write permissions, such as Owner or User Access Administrator, at the Storage scope for the storage resource.
- 7. Next, you're going to assign a **Storage Blob Data Reader** role to your Form Recognizer service resource. In the **Add role assignment** pop-up window complete the fields as follows and select **Save**:
+1. Next, you're going to assign a **Storage Blob Data Reader** role to your Form Recognizer service resource. In the **Add role assignment** pop-up window, complete the fields as follows and select **Save**:
| Field | Value| ||--|
- |**Scope**| ***Storage***|
- |**Subscription**| ***The subscription associated with your storage resource***.|
- |**Resource**| ***The name of your storage resource***|
- |**Role** | ***Storage Blob Data Reader***ΓÇöallows for read access to Azure Storage blob containers and data.|
+ |**Scope**| **_Storage_**|
+ |**Subscription**| **_The subscription associated with your storage resource_**.|
+ |**Resource**| **_The name of your storage resource_**|
+ |**Role** | **_Storage Blob Data Reader_**—allows for read access to Azure Storage blob containers and data.|
:::image type="content" source="media/managed-identities/add-role-assignment-window.png" alt-text="Screenshot: add role assignments page in the Azure portal.":::
In the following steps, we'll enable a system-assigned managed identity and gran
:::image type="content" source="media/managed-identities/assigned-roles-window.png" alt-text="Screenshot: Azure role assignments window.":::
- That's it! You've completed the steps to enable a system-assigned managed identity. With this identity credential, you can grant Form Recognizer-specific access rights to documents and files stored in your BYOS account.
+ That's it! You've completed the steps to enable a system-assigned managed identity. With managed identity and Azure RBAC, you granted Form Recognizer specific access rights to your storage resource without having to manage credentials such as SAS tokens.
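If you prefer scripting, the following is a hedged Azure CLI sketch of the equivalent steps; it assumes the `az cognitiveservices account identity` commands are available in your CLI version, and all resource names are placeholders:

```console
# 1. Enable the system-assigned managed identity on the Form Recognizer resource.
az cognitiveservices account identity assign \
  --name <form-recognizer-resource> \
  --resource-group <resource-group>

# 2. Look up the identity's principal ID and the storage account's resource ID.
principalId=$(az cognitiveservices account show \
  --name <form-recognizer-resource> \
  --resource-group <resource-group> \
  --query identity.principalId --output tsv)
storageId=$(az storage account show \
  --name <storage-account> \
  --resource-group <resource-group> \
  --query id --output tsv)

# 3. Grant the identity read/list access to the blob data.
az role assignment create \
  --assignee "$principalId" \
  --role "Storage Blob Data Reader" \
  --scope "$storageId"
```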
## Learn more about managed identity > [!div class="nextstepaction"]
-> [Managed identities for Azure resources: frequently asked questions - Azure AD](../../active-directory/managed-identities-azure-resources/managed-identities-faq.md)
+> [Access Azure Storage from a web app using managed identities](/azure/app-service/scenario-secure-app-access-storage?toc=/azure/applied-ai-services/form-recognizer/toc.json&bc=/azure/applied-ai-services/form-recognizer/breadcrumb/toc.json)
applied-ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/service-limits.md
Last updated 02/15/2022-+ # Form Recognizer service Quotas and Limits
applied-ai-services V3 Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/v3-migration-guide.md
Last updated 02/15/2022-+ recommendations: false
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookW
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/delete | Deletes a Hybrid Runbook Worker. - ## Next steps
-* To learn how to configure your runbooks to automate processes in your on-premises datacenter or other cloud environment, see [Run runbooks on a Hybrid Runbook Worker](automation-hrw-run-runbooks.md).
+To learn about Azure VM extensions, see:
+
+ - [Azure VM extensions and features for Windows](/azure/virtual-machines/extensions/features-windows).
+ - [Azure VM extensions and features for Linux](/azure/virtual-machines/extensions/features-linux).
-* To learn how to troubleshoot your Hybrid Runbook Workers, see [Troubleshoot Hybrid Runbook Worker issues](troubleshoot/extension-based-hybrid-runbook-worker.md).
+To learn about VM extensions for Arc-enabled servers, see:
+- [VM extension management with Azure Arc-enabled servers](/azure/azure-arc/servers/manage-vm-extensions).
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
This page is updated monthly, so revisit it regularly. If you're looking for ite
To strengthen the overall Azure Automation security posture, the built-in RBAC Reader role will no longer have access to Automation account keys through the API call - `GET /automationAccounts/agentRegistrationInformation`. Read [here](/azure/automation/automation-role-based-access-control#reader) for more information. +
+### Restore deleted Automation Accounts
+
+**Type:** New change
+
+Users can now restore an Automation account that was deleted within the last 30 days. Read [here](/azure/automation/delete-account?tabs=azure-portal#restore-a-deleted-automation-account) for more information.
++ ## December 2021 ### New scripts added for Azure VM management based on Azure Monitor Alert
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc-enabled Kubernetes description: Sample Azure Resource Graph queries for Azure Arc-enabled Kubernetes showing use of resource types and tables to access Azure Arc-enabled Kubernetes related resources and properties. Previously updated : 01/20/2022 Last updated : 02/16/2022
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
description: "This tutorial shows how to use GitOps with Flux v2 to manage confi
keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Arc, AKS, Azure Kubernetes Service, containers, devops" Previously updated : 1/24/2022 Last updated : 2/22/2022 --
GitOps with Flux v2 can be enabled in Azure Kubernetes Service (AKS) managed clu
This tutorial describes how to use GitOps in a Kubernetes cluster. Before you dive in, take a moment to [learn how GitOps with Flux works conceptually](./conceptual-gitops-flux2.md).
-General availability of Azure Arc-enabled Kubernetes includes GitOps with Flux v1. The public preview of GitOps with Flux v2, documented here, is available in both Azure Arc-enabled Kubernetes and AKS. Flux v2 is the way forward, and Flux v1 will eventually be deprecated.
+General availability of Azure Arc-enabled Kubernetes includes GitOps with Flux v1. The public preview of GitOps with Flux v2, documented here, is available in both AKS and Azure Arc-enabled Kubernetes. Flux v2 is the way forward, and Flux v1 will eventually be deprecated.
+
+>[!IMPORTANT]
+>GitOps with Flux v2 is in public preview. In preparation for general availability, features are still being added to the preview. One important feature, multi-tenancy, could be a breaking change for some users. To prepare yourself for the release of multi-tenancy, [please review these details](#multi-tenancy).
## Prerequisites
To manage GitOps through the Azure CLI or the Azure portal, you need the followi
* Read and write permissions on the `Microsoft.ContainerService/managedClusters` resource type. * Registration of your subscription with the `AKS-ExtensionManager` feature flag. Use the following command:
- ```azurecli
+ ```console
az feature register --namespace Microsoft.ContainerService --name AKS-ExtensionManager ```
To manage GitOps through the Azure CLI or the Azure portal, you need the followi
* Azure CLI version 2.15 or later. [Install the Azure CLI](/cli/azure/install-azure-cli) or use the following commands to update to the latest version:
- ```azurecli
+ ```console
az version az upgrade ``` * Registration of the following Azure service providers. (It's OK to re-register an existing provider.)
- ```azurecli
+ ```console
az provider register --namespace Microsoft.Kubernetes az provider register --namespace Microsoft.ContainerService az provider register --namespace Microsoft.KubernetesConfiguration
To manage GitOps through the Azure CLI or the Azure portal, you need the followi
Registration is an asynchronous process and should finish within 10 minutes. Use the following code to monitor the registration process:
- ```azurecli
+ ```console
az provider show -n Microsoft.KubernetesConfiguration -o table
- ```
- ```output
Namespace RegistrationPolicy RegistrationState -- - Microsoft.KubernetesConfiguration RegistrationRequired Registered
The GitOps agents require TCP on port 443 (`https://:443`) to function. The agen
Install the latest `k8s-configuration` and `k8s-extension` CLI extension packages:
-```azurecli
+```console
az extension add -n k8s-configuration az extension add -n k8s-extension ``` To update these packages, use the following commands:
-```azurecli
+```console
az extension update -n k8s-configuration az extension update -n k8s-extension ``` To see the list of az CLI extensions installed and their versions, use the following command:
-```azurecli
+```console
az extension list -o table
-```
-```output
Experimental ExtensionType Name Path Preview Version - -- -- -- -- -- False whl connectedk8s C:\Users\somename\.azure\cliextensions\connectedk8s False 1.2.0
In the following example:
If the `microsoft.flux` extension isn't already installed in the cluster, it will be installed.
-```azurecli
+```console
az k8s-configuration flux create -g flux-demo-rg -c flux-demo-arc -n gitops-demo --namespace gitops-demo -t connectedClusters --scope cluster -u https://github.com/fluxcd/flux2-kustomize-helm-example --branch main --kustomization name=infra path=./infrastructure prune=true --kustomization name=apps path=./apps/staging prune=true dependsOn=["infra"]
-```
-```output
Command group 'k8s-configuration flux' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus Warning! https url is being used without https auth params, ensure the repository url provided is not a private repo 'Microsoft.Flux' extension not found on the cluster, installing it now. This may take a few minutes...
Creating the flux configuration 'gitops-demo' in the cluster. This may take a fe
Show the configuration after time to finish reconciliations.
-```azurecli
+```console
az k8s-configuration flux show -g flux-demo-rg -c flux-demo-arc -n gitops-demo -t connectedClusters
-```
-```output
Command group 'k8s-configuration flux' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus { "complianceState": "Compliant",
statefulset.apps/redis-master 1/1 95m
You can delete the Flux configuration by using the following command. This action deletes both the `fluxConfigurations` resource in Azure and the Flux configuration objects in the cluster. Because the Flux configuration was originally created with the `prune=true` parameter for the kustomization, all of the objects created in the cluster based on manifests in the Git repository will be removed when the Flux configuration is removed.
-```azurecli
+```console
az k8s-configuration flux delete -g flux-demo-rg -c flux-demo-arc -n gitops-demo -t connectedClusters --yes ```
If the Flux extension was created automatically when the Flux configuration was
For an Azure Arc-enabled Kubernetes cluster, use this command:
-```azurecli
+```console
az k8s-extension delete -g flux-demo-rg -c flux-demo-arc -n flux -t connectedClusters --yes ```
The `source`, `helm`, `kustomize`, and `notification` Flux controllers are insta
Here's an example for including the [Flux image-reflector and image-automation controllers](https://fluxcd.io/docs/components/image/). If the Flux extension was created automatically when a Flux configuration was first created, the extension name will be `flux`.
-```azurecli
+```console
az k8s-extension create -g <cluster_resource_group> -c <cluster_name> -t <connectedClusters or managedClusters> --name flux --extension-type microsoft.flux --config image-automation-controller.enabled=true image-reflector-controller.enabled=true ```
For a description of all parameters that Flux supports, see the [official Flux d
You can see the full list of parameters that the `k8s-configuration flux` CLI command supports by using the `-h` parameter:
-```azurecli
+```console
az k8s-configuration flux -h
-```
-```output
Group az k8s-configuration flux : Commands to manage Flux v2 Kubernetes configurations. This command group is in preview and under development. Reference and support levels:
Commands:
Here are the parameters for the `k8s-configuration flux create` CLI command:
-```azurecli
+```console
az k8s-configuration flux create -h
-```
-```output
This command is from the following extension: k8s-configuration Command
kubectl create secret generic -n flux-config my-custom-secret --from-file=identi
For both cases, when you create the Flux configuration, use `--local-auth-ref my-custom-secret` in place of the other authentication parameters:
-```azurecli
+```console
az k8s-configuration flux create -g <cluster_resource_group> -c <cluster_name> -n <config_name> -t connectedClusters --scope cluster --namespace flux-config -u <git-repo-url> --kustomization name=kustomization1 --local-auth-ref my-custom-secret ``` Learn more about using a local Kubernetes secret with these authentication methods:
Learn more about using a local Kubernetes secret with these authentication metho
* [Bucket static authentication](https://fluxcd.io/docs/components/source/buckets/#static-authentication) >[!NOTE]
->If you need Flux to access the source through your proxy, you'll need to update the Azure Arc agents with the proxy settings. For more information, see [Connect using an outbound proxy server](./quickstart-connect-cluster.md?tabs=azure-cli#connect-using-an-outbound-proxy-server).
+>If you need Flux to access the source through your proxy, you'll need to update the Azure Arc agents with the proxy settings. For more information, see [Connect using an outbound proxy server](./quickstart-connect-cluster.md?tabs=azure-cli#connect-using-an-outbound-proxy-server).
### Git implementation
By using `az k8s-configuration flux create`, you can create one or more kustomiz
You can also use `az k8s-configuration flux kustomization` to create, update, list, show, and delete kustomizations in a Flux configuration:
-```azurecli
+```console
az k8s-configuration flux kustomization -h
-```
-```output
Group az k8s-configuration flux kustomization : Commands to manage Kustomizations associated with Flux v2 Kubernetes configurations.
Commands:
Here are the kustomization creation options:
-```azurecli
+```console
az k8s-configuration flux kustomization create -h
-```
-```output
This command is from the following extension: k8s-configuration Command
spec:
By using this annotation, the HelmRelease that is deployed will be patched with the reference to the configured source. Note that only the GitRepository source is currently supported for this.
+## Multi-tenancy
+
+Flux v2 supports [multi-tenancy](https://github.com/fluxcd/flux2-multi-tenancy). This capability will be integrated into Azure GitOps with Flux v2 prior to general availability.
+
+>[!NOTE]
+>This will be a breaking change if you have any cross-namespace sourceRef for HelmRelease, Kustomization, ImagePolicy, or other objects. To prepare for the release of this multi-tenancy feature, take one of these actions:
+>
+>* (Recommended) Ensure that all `sourceRef` entries point to objects within the same namespace as the GitOps configuration.
+>* If you need time to migrate, you can opt-out of multi-tenancy.
+
+### Update manifests for multi-tenancy
+
+Let's say we deploy a `fluxConfiguration` to one of our Kubernetes clusters in the **cluster-config** namespace with cluster scope. We configure the source to sync the https://github.com/fluxcd/flux2-kustomize-helm-example repo. This is the same sample Git repo used in the tutorial earlier in this doc. After Flux syncs the repo, it will deploy the resources described in the manifests (YAML files). Two of the manifests describe HelmRelease and HelmRepository objects.
+
+```yaml
+apiVersion: helm.toolkit.fluxcd.io/v2beta1
+kind: HelmRelease
+metadata:
+ name: nginx
+ namespace: nginx
+spec:
+ releaseName: nginx-ingress-controller
+ chart:
+ spec:
+ chart: nginx-ingress-controller
+ sourceRef:
+ kind: HelmRepository
+ name: bitnami
+ namespace: flux-system
+ version: "5.6.14"
+ interval: 1h0m0s
+ install:
+ remediation:
+ retries: 3
+ # Default values
+ # https://github.com/bitnami/charts/blob/master/bitnami/nginx-ingress-controller/values.yaml
+ values:
+ service:
+ type: NodePort
+```
+
+```yaml
+apiVersion: source.toolkit.fluxcd.io/v1beta1
+kind: HelmRepository
+metadata:
+ name: bitnami
+ namespace: flux-system
+spec:
+ interval: 30m
+ url: https://charts.bitnami.com/bitnami
+```
+
+By default, the Flux extension will deploy the `fluxConfigurations` by impersonating the **flux-applier** service account that is deployed only in the **cluster-config** namespace. Using the above manifests, the HelmRelease would be blocked when multi-tenancy is enabled, because the HelmRelease is in the **nginx** namespace and references a HelmRepository in the **flux-system** namespace. Also, the Flux helm-controller cannot apply the HelmRelease, because there is no **flux-applier** service account in the **nginx** namespace.
+
+To work with multi-tenancy, the correct approach is to deploy all Flux objects into the same namespace as the `fluxConfigurations`. This avoids the cross-namespace reference issue, and allows the Flux controllers to get the permissions to apply the objects. Thus, for a GitOps configuration created in the **cluster-config** namespace, the above manifests would change to these:
+
+```yaml
+apiVersion: helm.toolkit.fluxcd.io/v2beta1
+kind: HelmRelease
+metadata:
+ name: nginx
+ namespace: cluster-config
+spec:
+ releaseName: nginx-ingress-controller
+ targetNamespace: nginx
+ chart:
+ spec:
+ chart: nginx-ingress-controller
+ sourceRef:
+ kind: HelmRepository
+ name: bitnami
+ namespace: cluster-config
+ version: "5.6.14"
+ interval: 1h0m0s
+ install:
+ remediation:
+ retries: 3
+ # Default values
+ # https://github.com/bitnami/charts/blob/master/bitnami/nginx-ingress-controller/values.yaml
+ values:
+ service:
+ type: NodePort
+```
+
+```yaml
+apiVersion: source.toolkit.fluxcd.io/v1beta1
+kind: HelmRepository
+metadata:
+ name: bitnami
+ namespace: cluster-config
+spec:
+ interval: 30m
+ url: https://charts.bitnami.com/bitnami
+```
+
+### Opt out of multi-tenancy
+
+Multi-tenancy will be enabled by default to keep your clusters secure by default. However, if you need to disable multi-tenancy, you can opt out by creating or updating the `microsoft.flux` extension in your clusters with `--configuration-settings multiTenancy.enforce=false`.
+
+```console
+az k8s-extension create --extension-type microsoft.flux --configuration-settings multiTenancy.enforce=false -c CLUSTER_NAME -g RESOURCE_GROUP -n flux -t <managedClusters or connectedClusters>
+
+or
+
+az k8s-extension update --configuration-settings multiTenancy.enforce=false -c CLUSTER_NAME -g RESOURCE_GROUP -n flux -t <managedClusters or connectedClusters>
+```
+ ## Migrate from Flux v1 If you've been using Flux v1 in Azure Arc-enabled Kubernetes or AKS clusters and want to migrate to using Flux v2 in the same clusters, you first need to delete the Flux v1 `sourceControlConfigurations` from the clusters. The `microsoft.flux` cluster extension won't be installed if there are `sourceControlConfigurations` resources installed in the cluster. Use these az CLI commands to find and then delete existing `sourceControlConfigurations` in a cluster:
-```azurecli
+```console
az k8s-configuration list --cluster-name <Arc or AKS cluster name> --cluster-type <connectedClusters OR managedClusters> --resource-group <resource group name> az k8s-configuration delete --name <configuration name> --cluster-name <Arc or AKS cluster name> --cluster-type <connectedClusters OR managedClusters> --resource-group <resource group name> ```
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc description: Sample Azure Resource Graph queries for Azure Arc showing use of resource types and tables to access Azure Arc related resources and properties. Previously updated : 01/20/2022 Last updated : 02/16/2022
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-overview.md
Title: Overview of the Azure Connected Machine agent description: This article provides a detailed overview of the Azure Arc-enabled servers agent available, which supports monitoring virtual machines hosted in hybrid environments. Previously updated : 01/19/2022 Last updated : 02/23/2022
The following versions of the Windows and Linux operating system are officially
* Oracle Linux 7 (x64) > [!WARNING]
-> The Linux hostname or Windows computer name cannot use one of the reserved words or trademarks in the name, otherwise attempting to register the connected machine with Azure will fail. See [Resolve reserved resource name errors](../../azure-resource-manager/templates/error-reserved-resource-name.md) for a list of the reserved words.
+> The Linux hostname or Windows computer name cannot use one of the reserved words or trademarks in the name, otherwise attempting to register the connected machine with Azure will fail. For a list of reserved words, see [Resolve reserved resource name errors](../../azure-resource-manager/templates/error-reserved-resource-name.md).
> [!NOTE]
-> While Azure Arc-enabled servers supports Amazon Linux, the following do not support this distro:
+> While Azure Arc-enabled servers supports Amazon Linux, the following do not support this distribution:
> > * The Dependency agent used by Azure Monitor VM insights > * Azure Automation Update Management
Azure Arc-enabled servers depend on the following Azure resource providers in yo
* **Microsoft.GuestConfiguration** * **Microsoft.HybridConnectivity**
-If they are not registered, you can register them using the following commands:
+If these resource providers are not already registered, you can register them using the following commands:
Azure PowerShell:
az provider register --namespace 'Microsoft.GuestConfiguration'
az provider register --namespace 'Microsoft.HybridConnectivity' ```
-You can also register the resource providers in the Azure portal by following the steps under [Azure portal](../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal).
+You can also register the resource providers in the [Azure portal](../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal).
### Transport Layer Security 1.2 protocol
URLs:
|`dc.services.visualstudio.com`|Agent telemetry| |`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`|Notification service|
-For a list of IP addresses for each service tag/region, see the JSON file - [Azure IP Ranges and Service Tags ΓÇô Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519). Microsoft publishes weekly updates containing each Azure Service and the IP ranges it uses. This information in the JSON file is the current point-in-time list of the IP ranges that correspond to each service tag. The IP addresses are subject to change. If IP address ranges are required for your firewall configuration, then the **AzureCloud** Service Tag should be used to allow access to all Azure services. Do not disable security monitoring or inspection of these URLs, allow them as you would other Internet traffic.
+For a list of IP addresses for each service tag/region, see the JSON file [Azure IP Ranges and Service Tags – Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519). Microsoft publishes weekly updates containing each Azure Service and the IP ranges it uses. This information in the JSON file is the current point-in-time list of the IP ranges that correspond to each service tag. The IP addresses are subject to change. If IP address ranges are required for your firewall configuration, then the **AzureCloud** Service Tag should be used to allow access to all Azure services. Do not disable security monitoring or inspection of these URLs; instead, allow them as you would other Internet traffic.
-For more information, review [Service tags overview](../../virtual-network/service-tags-overview.md).
+For more information, see [Virtual network service tags](../../virtual-network/service-tags-overview.md).
## Installation and configuration
-Connecting machines in your hybrid environment directly with Azure can be accomplished using different methods depending on your requirements. The following table highlights each method to determine which works best for your organization.
-
-> [!IMPORTANT]
-> The Connected Machine agent cannot be installed on an Azure Windows virtual machine. If you attempt to, the installation detects this and rolls back.
+Connecting machines in your hybrid environment directly with Azure can be accomplished using different methods, depending on your requirements and the tools you prefer to use. The following table highlights each method so that you can determine which works best for your deployment.
| Method | Description | |--|-|
-| Interactively | Manually install the agent on a single or small number of machines following the steps in [Connect machines from Azure portal](onboard-portal.md).<br> From the Azure portal, you can generate a script and execute it on the machine to automate the install and configuration steps of the agent.|
-| At scale | Install and configure the agent for multiple machines following the [Connect machines using a Service Principal](onboard-service-principal.md).<br> This method creates a service principal to connect machines non-interactively.|
-| At scale | Install and configure the agent for multiple machines following the method [Connect hybrid machines to Azure from Automation Update Management](onboard-update-management-machines.md).<br> This method creates a service principal, and installs and configures the agent for multiple machines managed with Azure Automation Update Management to connect machines non-interactively. |
-| At scale | Install and configure the agent for multiple machines following the method [Using Windows PowerShell DSC](onboard-dsc.md).<br> This method uses a service principal to connect machines non-interactively with PowerShell DSC. |
+| Interactively | Manually install the agent on a single or small number of machines by [connecting machines using a deployment script](onboard-portal.md).<br> From the Azure portal, you can generate a script and execute it on the machine to automate the install and configuration steps of the agent.|
+| Interactively | [Connect machines from Windows Admin Center](onboard-windows-admin-center.md) |
+| Interactively or at scale | [Connect machines using PowerShell](onboard-powershell.md) |
+| Interactively or at scale | [Connect machines using Windows PowerShell Desired State Configuration (DSC)](onboard-dsc.md) |
+| At scale | [Connect machines using a service principal](onboard-service-principal.md) to install the agent at scale non-interactively.|
+| At scale | [Connect machines by running PowerShell scripts with Configuration Manager](onboard-configuration-manager-powershell.md)
+| At scale | [Connect machines with a Configuration Manager custom task sequence](onboard-configuration-manager-custom-task.md)
+| At scale | [Connect machines from Automation Update Management](onboard-update-management-machines.md) to create a service principal that installs and configures the agent for multiple machines managed with Azure Automation Update Management to connect machines non-interactively. |
++++
+> [!IMPORTANT]
+> The Connected Machine agent cannot be installed on an Azure Windows virtual machine. If you attempt to, the installation detects this and rolls back.
## Connected Machine agent technical overview
Connecting machines in your hybrid environment directly with Azure can be accomp
The Connected Machine agent for Windows can be installed by using one of the following three methods:
-* Double-click the file `AzureConnectedMachineAgent.msi`.
+* Running the file `AzureConnectedMachineAgent.msi`.
* Manually by running the Windows Installer package `AzureConnectedMachineAgent.msi` from the Command shell. * From a PowerShell session using a scripted method.
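For example, a hedged sketch of a silent install from the Command shell followed by connecting the machine with a service principal (all IDs and names below are placeholders):

```console
msiexec /i AzureConnectedMachineAgent.msi /l*v installationlog.txt /qn

"%ProgramFiles%\AzureConnectedMachineAgent\azcmagent.exe" connect ^
  --service-principal-id <appId> ^
  --service-principal-secret <password> ^
  --tenant-id <tenantId> ^
  --subscription-id <subscriptionId> ^
  --resource-group <resourceGroup> ^
  --location <region>
```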
azure-arc Onboard Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-service-principal.md
Title: Connect hybrid machines to Azure at scale description: In this article, you learn how to connect machines to Azure using Azure Arc-enabled servers using a service principal. Previously updated : 02/16/2022 Last updated : 02/23/2022
If you don't have an Azure subscription, create a [free account](https://azure.m
You can create a service principal in the Azure portal or by using Azure PowerShell. > [!NOTE]
-> To create a service principal and assign roles, your account must be a member of the **Owner** or **User Access Administrator** role in the subscription that you want to use for onboarding.
+> To assign Arc-enabled server roles, your account must be a member of the **Owner** or **User Access Administrator** role in the subscription that you want to use for onboarding.
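As an alternative sketch (this article walks through the portal and Azure PowerShell), the Azure CLI can also create the service principal and assign the onboarding role in one call; the name and scope below are placeholders:

```console
az ad sp create-for-rbac \
  --name "Arc-server-onboarding" \
  --role "Azure Connected Machine Onboarding" \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```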
### Azure portal
azure-arc Plan At Scale Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/plan-at-scale-deployment.md
Title: How to plan and deploy Azure Arc-enabled servers description: Learn how to enable a large number of machines to Azure Arc-enabled servers to simplify configuration of essential security, management, and monitoring capabilities in Azure. Previously updated : 08/27/2021 Last updated : 02/22/2022
Next, we add to the foundation laid in phase 1 by preparing for and deploying th
|Task |Detail |Duration | |--|-||
-| Download the pre-defined installation script | Review and customize the pre-defined installation script for at-scale deployment of the Connected Machine agent to support your automated deployment requirements.<br><br> Sample at-scale onboarding resources:<br><br> <ul><li> [At-scale basic deployment script](onboard-service-principal.md)</ul></li> <ul><li>[At-scale onboarding VMware vSphere Windows Server VMs](https://github.com/microsoft/azure_arc/blob/main/docs/azure_arc_jumpstart/azure_arc_servers/scaled_deployment/vmware_scaled_powercli_win/_index.md)</ul></li> <ul><li>[At-scale onboarding VMware vSphere Linux VMs](https://github.com/microsoft/azure_arc/blob/main/docs/azure_arc_jumpstart/azure_arc_servers/scaled_deployment/vmware_scaled_powercli_linux/_index.md)</ul></li> <ul><li>[At-scale onboarding AWS EC2 instances using Ansible](https://github.com/microsoft/azure_arc/blob/main/docs/azure_arc_jumpstart/azure_arc_servers/scaled_deployment/aws_scaled_ansible/_index.md)</ul></li> <ul><li>[At-scale deployment using PowerShell remoting](./onboard-powershell.md) (Windows only)</ul></li>| One or more days depending on requirements, organizational processes (for example, Change and Release Management), and automation method used. |
+| Download the pre-defined installation script | Review and customize the pre-defined installation script for at-scale deployment of the Connected Machine agent to support your automated deployment requirements.<br><br> Sample at-scale onboarding resources:<br><br> <ul><li> [At-scale basic deployment script](onboard-service-principal.md)</ul></li> <ul><li>[At-scale onboarding VMware vSphere Windows Server VMs](https://github.com/microsoft/azure_arc/blob/main/docs/azure_arc_jumpstart/azure_arc_servers/scaled_deployment/vmware_scaled_powercli_win/_index.md)</ul></li> <ul><li>[At-scale onboarding VMware vSphere Linux VMs](https://github.com/microsoft/azure_arc/blob/main/docs/azure_arc_jumpstart/azure_arc_servers/scaled_deployment/vmware_scaled_powercli_linux/_index.md)</ul></li> <ul><li>[At-scale onboarding AWS EC2 instances using Ansible](https://github.com/microsoft/azure_arc/blob/main/docs/azure_arc_jumpstart/azure_arc_servers/scaled_deployment/aws_scaled_ansible/_index.md)</ul></li> | One or more days depending on requirements, organizational processes (for example, Change and Release Management), and automation method used. |
| [Create service principal](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale) |Create a service principal to connect machines non-interactively using Azure PowerShell or from the portal.| One hour | | Deploy the Connected Machine agent to your target servers and machines |Use your automation tool to deploy the scripts to your servers and connect them to Azure.| One or more days depending on your release plan and if following a phased rollout. |
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc-enabled servers description: Sample Azure Resource Graph queries for Azure Arc-enabled servers showing use of resource types and tables to access Azure Arc-enabled servers related resources and properties. Previously updated : 01/20/2022 Last updated : 02/16/2022
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
Update the Python code file `init.py`, depending on the interface used by your f
# [ASGI](#tab/asgi) ```python
-app=FastAPI("Test")
+app=fastapi.FastAPI()
-@app.route("/api/HandleApproach")
-def test():
- return "Hello!"
+@app.get("/hello/{name}")
+async def get_name(name: str):
+    return {"name": name}
-def main(req: func.HttpRequest, context) -> func.HttpResponse:
- logging.info('Python HTTP trigger function processed a request.')
- return func.AsgiMiddleware(app).handle(req, context)
+def main(req: func.HttpRequest, context: func.Context) -> func.HttpResponse:
+    return func.AsgiMiddleware(app).handle(req, context)
``` # [WSGI](#tab/wsgi)
def main(req: func.HttpRequest, context) -> func.HttpResponse:
```python app=Flask("Test")
-@app.route("/api/WrapperApproach")
-def test():
- return "Hello!"
+@app.route("/hello/<name>", methods=['GET'])
+def hello(name: str):
+    return f"hello {name}"
def main(req: func.HttpRequest, context) -> func.HttpResponse: logging.info('Python HTTP trigger function processed a request.') return func.WsgiMiddleware(app).handle(req, context) ```
+For a full example, see [Using Flask Framework with Azure Functions](/samples/azure-samples/flask-app-on-azure-functions/azure-functions-python-create-flask-app/).
azure-maps Azure Maps Event Grid Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-event-grid-integration.md
Azure Maps integrates with Azure Event Grid, so that users can send event notifications to other services and trigger downstream processes. The purpose of this article is to help you configure your business applications to listen to Azure Maps events. This allows users to react to critical events in a reliable, scalable, and secure manner. For example, users can build an application to update a database, create a ticket, and deliver an email notification, every time a device enters a geofence.
-Azure Event Grid is a fully managed event routing service, which uses a publish-subscribe model. Event Grid has built-in support for Azure services like [Azure Functions](../azure-functions/functions-overview.md) and [Azure Logic Apps](../azure-functions/functions-overview.md). It can deliver event alerts to non-Azure services using webhooks. For a complete list of the event handlers that Event Grid supports, see [An introduction to Azure Event Grid](../event-grid/overview.md).
+> [!NOTE]
+> The Geofence API async event requires the region property of your Azure Maps account be set to ***Global***. When creating an Azure Maps account in the Azure portal, this isn't given as an option. For more information, see [Create an Azure Maps account with a global region](tutorial-geofence.md#create-an-azure-maps-account-with-a-global-region).
+Azure Event Grid is a fully managed event routing service, which uses a publish-subscribe model. Event Grid has built-in support for Azure services like [Azure Functions](../azure-functions/functions-overview.md) and [Azure Logic Apps](../azure-functions/functions-overview.md). It can deliver event alerts to non-Azure services using webhooks. For a complete list of the event handlers that Event Grid supports, see [An introduction to Azure Event Grid](../event-grid/overview.md).
![Azure Event Grid functional model](./media/azure-maps-event-grid-integration/azure-event-grid-functional-model.png) - ## Azure Maps events types
-Event grid uses [event subscriptions](../event-grid/concepts.md#event-subscriptions) to route event messages to subscribers. An Azure Maps account emits the following event types:
+Event grid uses [event subscriptions](../event-grid/concepts.md#event-subscriptions) to route event messages to subscribers. An Azure Maps account emits the following event types:
| Event type | Description | | - | -- |
The following example shows the schema for GeofenceResult:
Applications that handle Azure Maps geofence events should follow a few recommended practices:
+* The Geofence API async event requires the region property of your Azure Maps account be set to ***Global***. When creating an Azure Maps account in the Azure portal, this isn't given as an option. For more information, see [Create an Azure Maps account with a global region](tutorial-geofence.md#create-an-azure-maps-account-with-a-global-region).
* Configure multiple subscriptions to route events to the same event handler. It's important not to assume that events are from a particular source. Always check the message topic to ensure that the message came from the source that you expect. * Use the `X-Correlation-id` field in the response header to understand if your information about objects is up to date. Messages can arrive out of order or after a delay. * When a GET or a POST request in the Geofence API is called with the mode parameter set to `EnterAndExit`, then an Enter or Exit event is generated for each geometry in the geofence for which the status has changed from the previous Geofence API call.
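As a rough illustration of the last two practices, the sketch below calls the Spatial Geofence GET API for a device position with `mode=EnterAndExit` and reads the `X-Correlation-id` response header. The subscription key, `udid`, and coordinates are placeholder assumptions; check the Geofence GET API reference for the current parameter set.

```powershell
# Evaluate a device position against an uploaded geofence and capture the correlation ID
$uri = "https://atlas.microsoft.com/spatial/geofence/json?api-version=1.0" +
       "&subscription-key=<subscription-key>&deviceId=device_1&udid=<udid>" +
       "&lat=47.638237&lon=-122.132483&searchBuffer=5&isAsync=True&mode=EnterAndExit"

$response = Invoke-WebRequest -Uri $uri -Method Get
$response.Headers["X-Correlation-id"]                 # compare with the correlation ID on the Event Grid event
($response.Content | ConvertFrom-Json).geometries     # distance per geofence geometry
```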
Applications that handle Azure Maps geofence events should follow a few recommen
To learn more about how to use geofencing to control operations at a construction site, see: > [!div class="nextstepaction"]
-> [Set up a geofence by using Azure Maps](tutorial-geofence.md)
+> [Set up a geofence by using Azure Maps](tutorial-geofence.md)
azure-maps Tutorial Geofence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-geofence.md
Title: 'Tutorial: Create a geofence and track devices on a Microsoft Azure Map'
description: Tutorial on how to set up a geofence. See how to track devices relative to the geofence by using the Azure Maps Spatial service Previously updated : 10/28/2021 Last updated : 02/28/2021
Azure Maps provides a number of services to support the tracking of equipment en
> [!div class="checklist"] >
+> * Create an Azure Maps account with a global region.
> * Upload [Geofencing GeoJSON data](geofence-geojson.md) that defines the construction site areas you want to monitor. You'll use the [Data Upload API](/rest/api/maps/data-v2/upload-preview) to upload geofences as polygon coordinates to your Azure Maps account. > * Set up two [logic apps](../event-grid/handler-webhooks.md#logic-apps) that, when triggered, send email notifications to the construction site operations manager when equipment enters and exits the geofence area. > * Use [Azure Event Grid](../event-grid/overview.md) to subscribe to enter and exit events for your Azure Maps geofence. You set up two webhook event subscriptions that call the HTTP endpoints defined in your two logic apps. The logic apps then send the appropriate email notifications of equipment moving beyond or entering the geofence.
Azure Maps provides a number of services to support the tracking of equipment en
## Prerequisites
-1. [Create an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account).
-2. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key.
+* This tutorial uses the [Postman](https://www.postman.com/) application, but you can use a different API development environment.
-This tutorial uses the [Postman](https://www.postman.com/) application, but you can use a different API development environment.
+## Create an Azure Maps account with a global region
+
+The Geofence API async event requires that the region property of your Azure Maps account be set to ***Global***. This isn't offered as an option when you create an Azure Maps account in the Azure portal; however, you have several other ways to create a new Azure Maps account with the *global* region setting. This section lists three methods you can use to create an Azure Maps account with the region set to *global*.
+
+> [!NOTE]
+> The `location` property in both the ARM template and the PowerShell `New-AzMapsAccount` command refers to the same property as the `Region` field in the Azure portal.
+
+### Use an ARM template to create an Azure Maps account with a global region
+
+You will need to [Create your Azure Maps account using an ARM template](how-to-create-template.md), making sure to set `location` to `global` in the `resources` section of the ARM template.
+
+### Use PowerShell to create an Azure Maps account with a global region
+
+```powershell
+New-AzMapsAccount -ResourceGroupName your-Resource-Group -Name name-of-maps-account -SkuName g2 -Location global
+```
+
+### Use Azure CLI to create an Azure Maps account with a global region
+
+The Azure CLI command [az maps account create](/cli/azure/maps/account?view=azure-cli-latest#az-maps-account-create) doesn't have a location property, but defaults to *global*, making it useful for creating an Azure Maps account with a global region setting for use with the Geofence API async event.
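Whichever method you use, you can confirm the region setting afterwards. The sketch below assumes the Az.Maps PowerShell module and placeholder resource names; the `Location` property it returns corresponds to the `Region` field in the portal.

```powershell
# Location should read "global" for accounts used with the Geofence API async event
Get-AzMapsAccount -ResourceGroupName "your-Resource-Group" -Name "name-of-maps-account" |
    Select-Object Name, Location
```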
## Upload geofencing GeoJSON data
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
The following tables provide a quick comparison of the Azure Monitor agents for
## Azure Monitor agent
-The [Azure Monitor agent](azure-monitor-agent-overview.md) is meant to replace the Log Analytics agent, Azure Diagnostic extension and Telegraf agent for both Windows and Linux machines. It can send data to both Azure Monitor Logs and Azure Monitor Metrics and uses [Data Collection Rules (DCR)](data-collection-rule-overview.md) which provide a more scalable method of configuring data collection and destinations for each agent.
+The [Azure Monitor agent](azure-monitor-agent-overview.md) is meant to replace the Log Analytics agent, Azure Diagnostic extension and Telegraf agent for both Windows and Linux machines. It can send data to both Azure Monitor Logs and Azure Monitor Metrics and uses [Data Collection Rules (DCR)](../essentials/data-collection-rule-overview.md) which provide a more scalable method of configuring data collection and destinations for each agent.
Use the Azure Monitor agent if you need to: - Collect guest logs and metrics from any machine in Azure, in other clouds, or on-premises. ([Azure Arc-enabled servers](../../azure-arc/servers/overview.md) required for machines outside of Azure.) -- Manage data collection configuration centrally, using [data collection rules](./data-collection-rule-overview.md) and use Azure Resource Manager (ARM) templates or policies for management overall.
+- Manage data collection configuration centrally, using [data collection rules](../essentials/data-collection-rule-overview.md) and use Azure Resource Manager (ARM) templates or policies for management overall.
- Send data to Azure Monitor Logs and Azure Monitor Metrics (preview) for analysis with Azure Monitor. - Use Windows event filtering or multi-homing for logs on Windows and Linux.
azure-monitor Azure Monitor Agent Data Collection Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-data-collection-endpoint.md
+
+ Title: Using data collection endpoints with Azure Monitor agent (preview)
+description: Use data collection endpoints to uniquely configure ingestion settings for your machines.
+++ Last updated : 1/5/2022++++
+# Using data collection endpoints with Azure Monitor agent (preview)
+[Data Collection Endpoints (DCEs)](../essentials/data-collection-endpoint-overview.md) allow you to uniquely configure ingestion settings for your machines, giving you greater control over your networking requirements.
+
+## Create data collection endpoint
+See [Data collection endpoints in Azure Monitor (preview)](../essentials/data-collection-endpoint-overview.md) for details on data collection endpoints and how to create them.
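If you prefer scripting it, the following is a minimal sketch that creates an endpoint with `Invoke-AzRestMethod`. The resource names, region, and API version are assumptions taken from the sample later in this article; verify them against the data collection endpoint REST reference.

```powershell
# Create (or update) a data collection endpoint in the same region as your virtual machines
$body = @{
    location   = "eastus"
    properties = @{ networkAcls = @{ publicNetworkAccess = "Enabled" } }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Method PUT `
    -Path "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Insights/dataCollectionEndpoints/myCollectionEndpoint?api-version=2021-04-01" `
    -Payload $body
```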
+
+## Create endpoint association in Azure portal
+Use **Data collection rules** in the portal to associate endpoints with a resource (for example, a virtual machine) or a set of resources. Create a new rule or open an existing rule. On the **Resources** tab, select the **Data collection endpoint** drop-down to associate an existing endpoint in the same region with your resource (or select multiple resources in the same region to bulk-assign an endpoint to them). Doing this creates an association per resource that links the endpoint to the resource. The Azure Monitor agent running on these resources then uses the endpoint to upload data to Azure Monitor.
+
+[![Data Collection Rule virtual machines](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png)](../agents/media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png#lightbox)
++
+> [!NOTE]
+> The data collection endpoint should be created in the **same region** where your virtual machines exist.
++
+## Create endpoint and association using REST API
+
+> [!NOTE]
+> The data collection endpoint should be created in the **same region** where your virtual machines exist.
+
+1. Create data collection endpoint(s) using the [`az monitor data-collection endpoint` commands](/cli/azure/monitor/data-collection/endpoint) or the equivalent DCE REST API.
+2. Create association(s) to link the endpoint(s) to your target machines or resources, using these [DCRA REST APIs](/rest/api/monitor/datacollectionruleassociations/create#examples).
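As a rough sketch of step 2, the association can also be submitted with `Invoke-AzRestMethod`. The association name `configurationAccessEndpoint` and the `dataCollectionEndpointId` property follow the DCRA examples linked above, but treat the exact request shape and API version as assumptions and verify them against the REST reference.

```powershell
# Link an existing data collection endpoint to a virtual machine (resource IDs are placeholders)
$vmId  = "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM"
$dceId = "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Insights/dataCollectionEndpoints/myCollectionEndpoint"

$body = @{ properties = @{ dataCollectionEndpointId = $dceId } } | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Method PUT `
    -Path "$vmId/providers/Microsoft.Insights/dataCollectionRuleAssociations/configurationAccessEndpoint?api-version=2021-04-01" `
    -Payload $body
```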
++
+## Sample data collection endpoint
+The sample data collection endpoint below is for virtual machines with the Azure Monitor agent, with public network access disabled so that the agent uses only private links to communicate and send data to Azure Monitor/Log Analytics.
+
+```json
+{
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.Insights/dataCollectionEndpoints/myCollectionEndpoint",
+ "name": "myCollectionEndpoint",
+ "type": "Microsoft.Insights/dataCollectionEndpoints",
+ "location": "eastus",
+ "tags": {
+ "tag1": "A",
+ "tag2": "B"
+ },
+ "properties": {
+ "configurationAccess": {
+ "endpoint": "https://mycollectionendpoint-abcd.eastus-1.control.monitor.azure.com"
+ },
+ "logsIngestion": {
+ "endpoint": "https://mycollectionendpoint-abcd.eastus-1.ingest.monitor.azure.com"
+ },
+ "networkAcls": {
+ "publicNetworkAccess": "Disabled"
+ }
+ },
+ "systemData": {
+ "createdBy": "user1",
+ "createdByType": "User",
+ "createdAt": "yyyy-mm-ddThh:mm:ss.sssssssZ",
+ "lastModifiedBy": "user2",
+ "lastModifiedByType": "User",
+ "lastModifiedAt": "yyyy-mm-ddThh:mm:ss.sssssssZ"
+ },
+ "etag": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+}
+```
+
+## Enable network isolation for the Azure Monitor Agent
+You can use data collection endpoints to enable the Azure Monitor agent to communicate with Azure Monitor over private links instead of the public internet. To do so, you must:
+1. Create data collection endpoint(s), at least one per region, as shown above
+2. Add the data collection endpoints to a new or existing [Azure Monitor Private Link Scopes (AMPLS)](../logs/private-link-configure.md#connect-azure-monitor-resources) resource. This adds the DCE endpoints to your private DNS zone (see [how to validate](../logs/private-link-configure.md#review-and-validate-your-private-link-setup)) and allows communication via private links. You can do this from either the AMPLS resource or from within an existing DCE resource's 'Network Isolation' tab.
+ > [!NOTE]
+ > Other Azure Monitor resources like the Log Analytics workspace(s) configured in your data collection rules that you wish to send data to, must be part of this same AMPLS resource.
+3. For your data collection endpoint(s), ensure the **Accept access from public networks not connected through a Private Link Scope** option is set to **No** under the 'Network Isolation' tab of your endpoint resource in the Azure portal, as shown below. This ensures that public internet access is disabled and network communication happens only via private links.
+4. Associate the data collection endpoints with the target resources, using the data collection rules experience in the Azure portal. This results in the agent using the configured data collection endpoint(s) for network communications. See [Configure data collection for the Azure Monitor agent](../agents/data-collection-rule-azure-monitor-agent.md).
+
+ ![Data collection endpoint network isolation](media/azure-monitor-agent-dce/data-collection-endpoint-network-isolation.png)
+
+## Next steps
+- [Associate endpoint to machines](../agents/data-collection-rule-azure-monitor-agent.md#create-rule-and-association-in-azure-portal)
+- [Add endpoint to AMPLS resource](../logs/private-link-configure.md#connect-azure-monitor-resources)
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-overview.md
The methods for defining data collection for the existing agents are distinctly
- The Log Analytics agent gets its configuration from a Log Analytics workspace. It's easy to centrally configure but difficult to define independent definitions for different virtual machines. It can only send data to a Log Analytics workspace. - Diagnostic extension has a configuration for each virtual machine. It's easy to define independent definitions for different virtual machines but difficult to centrally manage. It can only send data to Azure Monitor Metrics, Azure Event Hubs, or Azure Storage. For Linux agents, the open-source Telegraf agent is required to send data to Azure Monitor Metrics.
-The Azure Monitor agent uses [data collection rules](data-collection-rule-overview.md) to configure data to collect from each agent. Data collection rules enable manageability of collection settings at scale while still enabling unique, scoped configurations for subsets of machines. They're independent of the workspace and independent of the virtual machine, which allows them to be defined once and reused across machines and environments. See [Configure data collection for the Azure Monitor agent](data-collection-rule-azure-monitor-agent.md).
+The Azure Monitor agent uses [data collection rules](../essentials/data-collection-rule-overview.md) to configure data to collect from each agent. Data collection rules enable manageability of collection settings at scale while still enabling unique, scoped configurations for subsets of machines. They're independent of the workspace and independent of the virtual machine, which allows them to be defined once and reused across machines and environments. See [Configure data collection for the Azure Monitor agent](data-collection-rule-azure-monitor-agent.md).
## Should I switch to the Azure Monitor agent? The Azure Monitor agent replaces the [legacy agents for Azure Monitor](agents-overview.md). To start transitioning your VMs off the current agents to the new agent, consider the following factors:
The following table shows the current support for the Azure Monitor agent with A
| Azure Monitor feature | Current support | More information | |:|:|:| | [VM insights](../vm/vminsights-overview.md) | Private preview | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
-| [Connect using private links](data-collection-endpoint-overview.md#enable-network-isolation-for-the-azure-monitor-agent) | Public preview | No sign-up needed |
+| [Connect using private links](azure-monitor-agent-data-collection-endpoint.md) | Public preview | No sign-up needed |
| [VM insights guest health](../vm/vminsights-health-overview.md) | Public preview | Available only on the new agent | | [SQL insights](../insights/sql-insights-overview.md) | Public preview | Available only on the new agent |
There's no cost for the Azure Monitor agent, but you might incur charges for the
The Azure Monitor agent doesn't require any keys but instead requires a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity). You must have a system-assigned managed identity enabled on each virtual machine before you deploy the agent. ## Networking
-The Azure Monitor agent supports Azure service tags (both AzureMonitor and AzureResourceManager tags are required). It supports connecting via private links, direct proxies and Log Analytics gateway as described below.
+The Azure Monitor agent supports Azure service tags (both AzureMonitor and AzureResourceManager tags are required). It supports connecting via **direct proxies, Log Analytics gateway and private links** as described below.
### Proxy configuration If the machine connects through a proxy server to communicate over the internet, review requirements below to understand the network configuration required.
The Azure Monitor agent extensions for Windows and Linux can communicate either
> [!IMPORTANT] > Proxy configuration is not supported for [Azure Monitor Metrics (preview)](../essentials/metrics-custom-overview.md) as a destination. As such, if you are sending metrics to this destination, it will use the public internet without any proxy.
-1. Use this flowchart to determine the values of the *setting* and *protectedSetting* parameters first.
+1. Use this flowchart to determine the values of the *settings* and *protectedSettings* parameters first.
- ![Flowchart to determine the values of setting and protectedSetting parameters when you enable the extension.](media/azure-monitor-agent-overview/proxy-flowchart.png)
+ ![Flowchart to determine the values of settings and protectedSettings parameters when you enable the extension.](media/azure-monitor-agent-overview/proxy-flowchart.png)
-2. After the values for the *setting* and *protectedSetting* parameters are determined, provide these additional parameters when you deploy the Azure Monitor agent by using PowerShell commands. The following examples are for Azure virtual machines.
+2. After the values for the *settings* and *protectedSettings* parameters are determined, provide these additional parameters when you deploy the Azure Monitor agent by using PowerShell commands. The following examples are for Azure virtual machines.
| Parameter | Value | |:|:|
- | Setting | A JSON object from the preceding flowchart converted to a string. Skip if not applicable. An example is {"proxy":{"mode":"application","address":"http://[address]:[port]","auth": false}}. |
- | ProtectedSetting | A JSON object from the preceding flowchart converted to a string. Skip if not applicable. An example is {"proxy":{"username": "[username]","password": "[password]"}}. |
+ | settingsHashtable | A JSON object from the preceding flowchart converted to a hashtable. Skip if not applicable. An example is {"proxy":{"mode":"application","address":"http://[address]:[port]","auth": false}}. |
+ | protectedSettingsHashtable | A JSON object from the preceding flowchart converted to a hashtable. Skip if not applicable. An example is {"proxy":{"username": "[username]","password": "[password]"}}. |
# [Windows VM](#tab/PowerShellWindows) ```powershell
-Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.0 -Setting <settingString> -ProtectedSetting <protectedSettingString>
+$settingsHashtable = @{"proxy" = @{"mode" = "application"; "address" = "http://[address]:[port]"; "auth" = $false}}
+$protectedSettingsHashtable = @{"proxy" = @{"username" = "[username]"; "password" = "[password]"}}
+
+Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.0 -Settings $settingsHashtable -ProtectedSettings $protectedSettingsHashtable
``` # [Linux VM](#tab/PowerShellLinux) ```powershell
-Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.5 -Setting <settingString> -ProtectedSetting <protectedSettingString>
+$settingsHashtable = @{"proxy" = @{"mode" = "application"; "address" = "http://[address]:[port]"; "auth" = $false}}
+$protectedSettingsHashtable = @{"proxy" = @{"username" = "[username]"; "password" = "[password]"}}
+
+Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.5 -Settings $settingsHashtable -ProtectedSettings $protectedSettingsHashtable
``` # [Windows Arc enabled server](#tab/PowerShellWindowsArc) ```powershell
-New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting <settingString> -ProtectedSetting <protectedSettingString>
+$settingsHashtable = @{"proxy" = @{"mode" = "application"; "address" = "http://[address]:[port]"; "auth" = $false}}
+$protectedSettingsHashtable = @{"proxy" = @{"username" = "[username]"; "password" = "[password]"}}
+
+New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Settings $settingsHashtable -ProtectedSettings $protectedSettingsHashtable
``` # [Linux Arc enabled server](#tab/PowerShellLinuxArc) ```powershell
-New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting <settingString> -ProtectedSetting <protectedSettingString>
+$settingsHashtable = @{"proxy" = @{"mode" = "application"; "address" = "http://[address]:[port]"; "auth" = $false}}
+$protectedSettingsHashtable = @{"proxy" = @{"username" = "[username]"; "password" = "[password]"}}
+
+New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Settings $settingsHashtable -ProtectedSettings $protectedSettingsHashtable
```
-### Log Analytics gateway configuration
-1. Follow the instructions above to configure proxy settings on the agent and provide the IP address and port number corresponding to the gateway server. If you have deployed multiple gateway servers behind a load balancer, the agent proxy configuration is the virtual IP address of the load balancer instead.
-2. Add the **configuration endpoint URL** to fetch data collection rules to the allow list for the gateway
- `Add-OMSGatewayAllowedHost -Host global.handler.control.monitor.azure.com`
- `Add-OMSGatewayAllowedHost -Host <gateway-server-region-name>.handler.control.monitor.azure.com`
- (If using private links on the agent, you must also add the [dce endpoints](./data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint))
-3. Add the **data ingestion endpoint URL** to the allow list for the gateway
- `Add-OMSGatewayAllowedHost -Host <log-analytics-workspace-id>.ods.opinsights.azure.com`
-3. Restart the **OMS Gateway** service to apply the changes
- `Stop-Service -Name <gateway-name>`
- `Start-Service -Name <gateway-name>`
--
-### Private link configuration
-To configure the agent to use private links for network communications with Azure Monitor, you can use [Azure Monitor Private Links Scopes (AMPLS)](../logs/private-link-security.md) and [data collection endpoints](./data-collection-endpoint-overview.md) to enable required network isolation. [View steps to configure network isolation for the agent](./data-collection-endpoint-overview.md#enable-network-isolation-for-the-azure-monitor-agent)
+## Private link configuration
+To configure the agent to use private links for network communications with Azure Monitor, you can use [Azure Monitor Private Links Scopes (AMPLS)](../logs/private-link-security.md) and [data collection endpoints](azure-monitor-agent-data-collection-endpoint.md) to enable required network isolation.
## Next steps
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
Title: Configure data collection for the Azure Monitor agent description: Describes how to create a data collection rule to collect data from virtual machines using the Azure Monitor agent. -- Last updated 07/16/2021
Last updated 07/16/2021
Data Collection Rules (DCR) define data coming into Azure Monitor and specify where it should be sent. This article describes how to create a data collection rule to collect data from virtual machines using the Azure Monitor agent.
-For a complete description of data collection rules, see [Data collection rules in Azure Monitor](data-collection-rule-overview.md).
+For a complete description of data collection rules, see [Data collection rules in Azure Monitor](../essentials/data-collection-rule-overview.md).
> [!NOTE] > This article describes how to configure data for virtual machines with the Azure Monitor agent only.
For example, consider an environment with a set of virtual machines running a li
![Diagram shows virtual machines hosting line of business application and SQL Server associated with data collection rules named central-i t-default and lob-app for line of business application and central-i t-default and s q l for SQL Server.](media/data-collection-rule-azure-monitor-agent/associations.png)
-## Permissions required to create data collection rules and associations
-When using programmatic methods to create data collection rules and associations (i.e. mehtods other than Azure portal), you require the below permissions:
-
-| Built-in Role | Scope(s) | Reason |
-|:|:|:|
-| [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) | <ul><li>Subscription and/or</li><li>Resource group and/or </li><li>An existing data collection rule</li></ul> | To create or edit data collection rules |
-| <ul><li>[Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor)</li><li>[Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator)</li></ul> | <ul><li>Virtual machines, virtual machine scale sets</li><li>Arc-enabled servers</li></ul> | To deploy associations (i.e. to assign rules to the machine) |
-| Any role that includes the action *Microsoft.Resources/deployments/** | <ul><li>Subscription and/or</li><li>Resource group and/or </li><li>An existing data collection rule</li></ul> | To deploy ARM templates |
## Create rule and association in Azure portal
Additionally, choose the appropriate **Platform Type** which specifies the type
In the **Resources** tab, add the resources (virtual machines, virtual machine scale sets, Arc for servers) that should have the Data Collection Rule applied. The Azure Monitor Agent will be installed on resources that don't already have it installed, and will enable Azure Managed Identity as well. ### Private link configuration using data collection endpoints (preview)
-If you need network isolation using private links for collecting data using agents from your resources, simply select existing endpoints (or create a new endpoint) from the same region for the respective resource(s) as shown below. See [how to create data collection endpoint](./data-collection-endpoint-overview.md).
+If you need network isolation using private links to collect data from your resources via agents, select existing endpoints (or create a new endpoint) from the same region for the respective resource(s), as shown below. See [how to create a data collection endpoint](../essentials/data-collection-endpoint-overview.md).
[![Data Collection Rule virtual machines](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png#lightbox)
On the **Collect and deliver** tab, click **Add data source** to add a data sour
[![Data source basic](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-basic-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-basic-updated.png#lightbox)
-To specify other logs and performance counters from the [currently supported data sources](azure-monitor-agent-overview.md#data-sources-and-destinations) or to filter events using XPath queries, select **Custom**. You can then specify an [XPath ](https://www.w3schools.com/xml/xpath_syntax.asp) for any specific values to collect. See [Sample DCR](data-collection-rule-overview.md#sample-data-collection-rule) for examples.
+To specify other logs and performance counters from the [currently supported data sources](azure-monitor-agent-overview.md#data-sources-and-destinations) or to filter events using XPath queries, select **Custom**. You can then specify an [XPath ](https://www.w3schools.com/xml/xpath_syntax.asp) for any specific values to collect. See [Sample DCR](data-collection-rule-sample-agent.md) for an example.
[![Data source custom](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png#lightbox)
Follow the steps below to create a data collection rule and association
> [!NOTE] > If you wish to send data to Log Analytics, you must create the data collection rule in the **same region** where your Log Analytics workspace resides. The rule can be associated to machines in other supported region(s).
-1. Manually create the DCR file using the JSON format shown in [Sample DCR](data-collection-rule-overview.md#sample-data-collection-rule).
+1. Manually create the DCR file using the JSON format shown in [Sample DCR](data-collection-rule-sample-agent.md).
2. Create the rule using the [REST API](/rest/api/monitor/datacollectionrules/create#examples).
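As a rough illustration of these two steps, the sketch below submits a DCR definition saved locally as `dcr.json` with `Invoke-AzRestMethod`; the file name, resource names, and API version are placeholder assumptions.

```powershell
# Read the DCR JSON you authored and create the rule in the region of your Log Analytics workspace
$dcrJson = Get-Content -Path ".\dcr.json" -Raw

Invoke-AzRestMethod -Method PUT `
    -Path "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.Insights/dataCollectionRules/my-dcr?api-version=2021-04-01" `
    -Payload $dcrJson
```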
This is enabled as part of Azure CLI **monitor-control-service** Extension. [Vie
## Next steps - Learn more about the [Azure Monitor Agent](azure-monitor-agent-overview.md).-- Learn more about [data collection rules](data-collection-rule-overview.md).
+- Learn more about [data collection rules](../essentials/data-collection-rule-overview.md).
azure-monitor Data Collection Rule Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-overview.md
- Title: Data Collection Rules in Azure Monitor
-description: Overview of data collection rules (DCRs) in Azure Monitor including their contents and structure and how you can create and work with them.
--- Previously updated : 02/08/2022----
-# Data collection rules in Azure Monitor
-Data Collection Rules (DCR) define data coming into Azure Monitor and specify where that data should be sent or stored. This article provides an overview of data collection rules including their contents and structure and how you can create and work with them.
-
-## Input sources
-Data collection rules currently support the following input sources:
--- Azure Monitor Agent running on virtual machines, virtual machine scale sets and Azure Arc for servers. See [Configure data collection for the Azure Monitor agent (preview)](../agents/data-collection-rule-azure-monitor-agent.md).---
-## Components of a data collection rule
-A data collection rule includes the following components.
-
-| Component | Description |
-|:|:|
-| Data sources | Unique source of monitoring data with its own format and method of exposing its data. Examples of a data source include Windows event log, performance counters, and syslog. Each data source matches a particular data source type as described below. |
-| Streams | Unique handle that describes a set of data sources that will be transformed and schematized as one type. Each data source requires one or more streams, and one stream may be used by multiple data sources. All data sources in a stream share a common schema. Use multiple streams for example, when you want to send a particular data source to multiple tables in the same Log Analytics workspace. |
-| Destinations | Set of destinations where the data should be sent. Examples include Log Analytics workspace and Azure Monitor Metrics. |
-| Data flows | Definition of which streams should be sent to which destinations. |
-
-The following diagram shows the components of a data collection rule and their relationship
-
-[![Diagram of DCR](media/data-collection-rule-overview/data-collection-rule-components.png)](media/data-collection-rule-overview/data-collection-rule-components.png#lightbox)
-
-### Data source types
-Each data source has a data source type. Each type defines a unique set of properties that must be specified for each data source. The data source types currently available are shown in the following table.
-
-| Data source type | Description |
-|:|:|
-| extension | VM extension-based data source, used exclusively by Log Analytics solutions and Azure services ([View agent supported services and solutions](./azure-monitor-agent-overview.md#supported-services-and-features)) |
-| performanceCounters | Performance counters for both Windows and Linux |
-| syslog | Syslog events on Linux |
-| windowsEventLogs | Windows event log |
-
-## Supported regions
-Data collection rules are stored regionally, and are available in all public regions where Log Analytics is supported, as well as the Azure Government and China clouds. Air-gapped clouds are not yet supported.
-
-## Limits
-For limits that apply to each data collection rule, see [Azure Monitor service limits](../service-limits.md#data-collection-rules).
-
-## Data resiliency and high availability
-Data Collection Rules as a service is deployed regionally. A rule gets created and stored in the region you specify, and is backed up to the [paired-region](../../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) within the same Geo.
-Additionally, the service is deployed to all 3 [availability zones](../../availability-zones/az-overview.md#availability-zones) within the region, making it a **zone-redundant service** which further adds to high availability.
--
-**Single region data residency**: The previewed feature to enable storing customer data in a single region is currently only available in the Southeast Asia Region (Singapore) of the Asia Pacific Geo and Brazil South (Sao Paulo State) Region of Brazil Geo. Single region residency is enabled by default in these regions.
-
-## Create a DCR
-You can currently use any of the following methods to create a DCR:
--- [Use the Azure portal](../agents/data-collection-rule-azure-monitor-agent.md) to create a data collection rule and have it associated with one or more virtual machines.-- Directly edit the data collection rule in JSON and [submit using the REST API](/rest/api/monitor/datacollectionrules).-- Create DCR and associations with [Azure CLI](https://github.com/Azure/azure-cli-extensions/blob/master/src/monitor-control-service/README.md).-- Create DCR and associations with Azure PowerShell.
- - [Get-AzDataCollectionRule](https://github.com/Azure/azure-powershell/blob/master/src/Monitor/Monitor/help/Get-AzDataCollectionRule.md)
- - [New-AzDataCollectionRule](https://github.com/Azure/azure-powershell/blob/master/src/Monitor/Monitor/help/New-AzDataCollectionRule.md)
- - [Set-AzDataCollectionRule](https://github.com/Azure/azure-powershell/blob/master/src/Monitor/Monitor/help/Set-AzDataCollectionRule.md)
- - [Update-AzDataCollectionRule](https://github.com/Azure/azure-powershell/blob/master/src/Monitor/Monitor/help/Update-AzDataCollectionRule.md)
- - [Remove-AzDataCollectionRule](https://github.com/Azure/azure-powershell/blob/master/src/Monitor/Monitor/help/Remove-AzDataCollectionRule.md)
- - [Get-AzDataCollectionRuleAssociation](https://github.com/Azure/azure-powershell/blob/master/src/Monitor/Monitor/help/Get-AzDataCollectionRuleAssociation.md)
- - [New-AzDataCollectionRuleAssociation](https://github.com/Azure/azure-powershell/blob/master/src/Monitor/Monitor/help/New-AzDataCollectionRuleAssociation.md)
- - [Remove-AzDataCollectionRuleAssociation](https://github.com/Azure/azure-powershell/blob/master/src/Monitor/Monitor/help/Remove-AzDataCollectionRuleAssociation.md)
-
-## Sample data collection rule
-The sample data collection rule below is for virtual machines with Azure Monitor agent and has the following details:
--- Performance data
- - Collects specific Processor, Memory, Logical Disk, and Physical Disk counters every 15 seconds and uploads every minute.
- - Collects specific Process counters every 30 seconds and uploads every 5 minutes.
-- Windows events
- - Collects Windows security events and uploads every minute.
- - Collects Windows application and system events and uploads every 5 minutes.
-- Syslog
- - Collects Debug, Critical, and Emergency events from cron facility.
- - Collects Alert, Critical, and Emergency events from syslog facility.
-- Destinations
- - Sends all data to a Log Analytics workspace named centralWorkspace.
-
-> [!NOTE]
-> For an explanation of XPaths that are used to specify event collection in data collection rules, see [Limit data collection with custom XPath queries](data-collection-rule-azure-monitor-agent.md#limit-data-collection-with-custom-xpath-queries)
--
-```json
-{
- "location": "eastus",
- "properties": {
- "dataSources": {
- "performanceCounters": [
- {
- "name": "cloudTeamCoreCounters",
- "streams": [
- "Microsoft-Perf"
- ],
- "scheduledTransferPeriod": "PT1M",
- "samplingFrequencyInSeconds": 15,
- "counterSpecifiers": [
- "\\Processor(_Total)\\% Processor Time",
- "\\Memory\\Committed Bytes",
- "\\LogicalDisk(_Total)\\Free Megabytes",
- "\\PhysicalDisk(_Total)\\Avg. Disk Queue Length"
- ]
- },
- {
- "name": "appTeamExtraCounters",
- "streams": [
- "Microsoft-Perf"
- ],
- "scheduledTransferPeriod": "PT5M",
- "samplingFrequencyInSeconds": 30,
- "counterSpecifiers": [
- "\\Process(_Total)\\Thread Count"
- ]
- }
- ],
- "windowsEventLogs": [
- {
- "name": "cloudSecurityTeamEvents",
- "streams": [
- "Microsoft-Event"
- ],
- "scheduledTransferPeriod": "PT1M",
- "xPathQueries": [
- "Security!*"
- ]
- },
- {
- "name": "appTeam1AppEvents",
- "streams": [
- "Microsoft-Event"
- ],
- "scheduledTransferPeriod": "PT5M",
- "xPathQueries": [
- "System!*[System[(Level = 1 or Level = 2 or Level = 3)]]",
- "Application!*[System[(Level = 1 or Level = 2 or Level = 3)]]"
- ]
- }
- ],
- "syslog": [
- {
- "name": "cronSyslog",
- "streams": [
- "Microsoft-Syslog"
- ],
- "facilityNames": [
- "cron"
- ],
- "logLevels": [
- "Debug",
- "Critical",
- "Emergency"
- ]
- },
- {
- "name": "syslogBase",
- "streams": [
- "Microsoft-Syslog"
- ],
- "facilityNames": [
- "syslog"
- ],
- "logLevels": [
- "Alert",
- "Critical",
- "Emergency"
- ]
- }
- ]
- },
- "destinations": {
- "logAnalytics": [
- {
- "workspaceResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace",
- "name": "centralWorkspace"
- }
- ]
- },
- "dataFlows": [
- {
- "streams": [
- "Microsoft-Perf",
- "Microsoft-Syslog",
- "Microsoft-Event"
- ],
- "destinations": [
- "centralWorkspace"
- ]
- }
- ]
- }
- }
-```
--
-## Next steps
--- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) and an association to it from a virtual machine using the Azure Monitor agent.
azure-monitor Data Collection Rule Sample Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-sample-agent.md
+
+ Title: Sample data collection rule - agent
+description: Sample data collection rule for Azure Monitor agent
+ Last updated : 02/15/2022++++
+# Sample data collection rule - agent
+The sample [data collection rule](../essentials/data-collection-rule-overview.md) below is for virtual machines with Azure Monitor agent and has the following details:
+
+- Performance data
+ - Collects specific Processor, Memory, Logical Disk, and Physical Disk counters every 15 seconds and uploads every minute.
+ - Collects specific Process counters every 30 seconds and uploads every 5 minutes.
+- Windows events
+ - Collects Windows security events and uploads every minute.
+ - Collects Windows application and system events and uploads every 5 minutes.
+- Syslog
+ - Collects Debug, Critical, and Emergency events from cron facility.
+ - Collects Alert, Critical, and Emergency events from syslog facility.
+- Destinations
+ - Sends all data to a Log Analytics workspace named centralWorkspace.
+
+> [!NOTE]
+> For an explanation of XPaths that are used to specify event collection in data collection rules, see [Limit data collection with custom XPath queries](../agents/data-collection-rule-azure-monitor-agent.md#limit-data-collection-with-custom-xpath-queries)
+
+## Sample DCR
+
+```json
+{
+ "location": "eastus",
+ "properties": {
+ "dataSources": {
+ "performanceCounters": [
+ {
+ "name": "cloudTeamCoreCounters",
+ "streams": [
+ "Microsoft-Perf"
+ ],
+ "scheduledTransferPeriod": "PT1M",
+ "samplingFrequencyInSeconds": 15,
+ "counterSpecifiers": [
+ "\\Processor(_Total)\\% Processor Time",
+ "\\Memory\\Committed Bytes",
+ "\\LogicalDisk(_Total)\\Free Megabytes",
+ "\\PhysicalDisk(_Total)\\Avg. Disk Queue Length"
+ ]
+ },
+ {
+ "name": "appTeamExtraCounters",
+ "streams": [
+ "Microsoft-Perf"
+ ],
+ "scheduledTransferPeriod": "PT5M",
+ "samplingFrequencyInSeconds": 30,
+ "counterSpecifiers": [
+ "\\Process(_Total)\\Thread Count"
+ ]
+ }
+ ],
+ "windowsEventLogs": [
+ {
+ "name": "cloudSecurityTeamEvents",
+ "streams": [
+ "Microsoft-Event"
+ ],
+ "scheduledTransferPeriod": "PT1M",
+ "xPathQueries": [
+ "Security!*"
+ ]
+ },
+ {
+ "name": "appTeam1AppEvents",
+ "streams": [
+ "Microsoft-Event"
+ ],
+ "scheduledTransferPeriod": "PT5M",
+ "xPathQueries": [
+ "System!*[System[(Level = 1 or Level = 2 or Level = 3)]]",
+ "Application!*[System[(Level = 1 or Level = 2 or Level = 3)]]"
+ ]
+ }
+ ],
+ "syslog": [
+ {
+ "name": "cronSyslog",
+ "streams": [
+ "Microsoft-Syslog"
+ ],
+ "facilityNames": [
+ "cron"
+ ],
+ "logLevels": [
+ "Debug",
+ "Critical",
+ "Emergency"
+ ]
+ },
+ {
+ "name": "syslogBase",
+ "streams": [
+ "Microsoft-Syslog"
+ ],
+ "facilityNames": [
+ "syslog"
+ ],
+ "logLevels": [
+ "Alert",
+ "Critical",
+ "Emergency"
+ ]
+ }
+ ]
+ },
+ "destinations": {
+ "logAnalytics": [
+ {
+ "workspaceResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace",
+ "name": "centralWorkspace"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Microsoft-Perf",
+ "Microsoft-Syslog",
+ "Microsoft-Event"
+ ],
+ "destinations": [
+ "centralWorkspace"
+ ]
+ }
+ ]
+ }
+ }
+```
++
+## Next steps
+
+- [Create a data collection rule](../agents/data-collection-rule-azure-monitor-agent.md) and an association to it from a virtual machine using the Azure Monitor agent.
azure-monitor Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/gateway.md
To provide high availability for directly connected or Operations Management gro
The computer that runs the Log Analytics gateway requires the agent to identify the service endpoints that the gateway needs to communicate with. The agent also needs to direct the gateway to report to the same workspaces that the agents or Operations Manager management group behind the gateway are configured with. This configuration allows the gateway and the agent to communicate with their assigned workspace.
-A gateway can be multihomed to up to ten workspaces using the Azure Monitor Agent and [data dollection rules](./data-collection-rule-azure-monitor-agent.md). Using the legacy Microsoft Monitor Agent, you can only multihome up to four workspaces as that is the total number of workspaces the legacy Windows agent supports.
+A gateway can be multihomed to up to ten workspaces using the Azure Monitor Agent and [data collection rules](./data-collection-rule-azure-monitor-agent.md). Using the legacy Microsoft Monitor Agent, you can only multihome up to four workspaces as that is the total number of workspaces the legacy Windows agent supports.
Each agent must have network connectivity to the gateway so that agents can automatically transfer data to and from the gateway. Avoid installing the gateway on a domain controller. Linux computers that are behind a gateway server cannot use the [wrapper script installation](../agents/agent-linux.md#install-the-agent-using-wrapper-script) method to install the Log Analytics agent for Linux. The agent must be downloaded manually, copied to the computer, and installed manually because the gateway only supports communicating with the Azure services mentioned earlier.
To configure the Azure Monitor agent (installed on the gateway server) to use th
2. Add the **configuration endpoint URL** to fetch data collection rules to the allow list for the gateway `Add-OMSGatewayAllowedHost -Host global.handler.control.monitor.azure.com` `Add-OMSGatewayAllowedHost -Host <gateway-server-region-name>.handler.control.monitor.azure.com`
- (If using private links on the agent, you must also add the [dce endpoints](./data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint))
+ (If using private links on the agent, you must also add the [dce endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint))
3. Add the **data ingestion endpoint URL** to the allow list for the gateway `Add-OMSGatewayAllowedHost -Host <log-analytics-workspace-id>.ods.opinsights.azure.com` 3. Restart the **OMS Gateway** service to apply the changes
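Run together on the gateway server, the steps above look roughly like the following; the region name, workspace ID, and gateway service name are placeholders, and the DCE hosts apply only if you use private links on the agent.

```powershell
# Allow the configuration endpoints the agent uses to fetch data collection rules
Add-OMSGatewayAllowedHost -Host global.handler.control.monitor.azure.com
Add-OMSGatewayAllowedHost -Host <gateway-server-region-name>.handler.control.monitor.azure.com

# Allow the data ingestion endpoint for your Log Analytics workspace
Add-OMSGatewayAllowedHost -Host <log-analytics-workspace-id>.ods.opinsights.azure.com

# Restart the OMS Gateway service to apply the changes
Stop-Service -Name <gateway-name>
Start-Service -Name <gateway-name>
```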
azure-monitor Resource Manager Data Collection Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/resource-manager-data-collection-rules.md
Last updated 02/07/2022
# Resource Manager template samples for data collection rules in Azure Monitor
-This article includes sample [Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md) to create an association between a [data collection rule](data-collection-rule-overview.md) and the [Azure Monitor agent](./azure-monitor-agent-overview.md). Each sample includes a template file and a parameters file with sample values to provide to the template.
+This article includes sample [Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md) to create an association between a [data collection rule](../essentials/data-collection-rule-overview.md) and the [Azure Monitor agent](./azure-monitor-agent-overview.md). Each sample includes a template file and a parameters file with sample values to provide to the template.
[!INCLUDE [azure-monitor-samples](../../../includes/azure-monitor-resource-manager-samples.md)]
azure-monitor Action Groups Create Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups-create-resource-manager-template.md
Previously updated : 12/14/2021 Last updated : 2/23/2022
azure-monitor Action Groups Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups-logic-app.md
description: Learn how to create a logic app action to process Azure Monitor ale
Previously updated : 02/19/2021 Last updated : 2/23/2022 # How to trigger complex actions with Azure Monitor alerts
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
Title: Create and manage action groups in the Azure portal
description: Learn how to create and manage action groups in the Azure portal. Previously updated : 02/10/2022 Last updated : 2/23/2022
azure-monitor Alerts Action Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-action-rules.md
Title: Alert processing rules for Azure Monitor alerts description: Understanding what alert processing rules in Azure Monitor are and how to configure and manage them. Previously updated : 02/02/2022 Last updated : 2/23/2022
azure-monitor Alerts Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-activity-log.md
Title: Create, view, and manage activity log alerts in Azure Monitor
description: Create activity log alerts by using the Azure portal, an Azure Resource Manager template, and Azure PowerShell. Previously updated : 11/08/2021 Last updated : 2/23/2022
azure-monitor Alerts Automatic Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-automatic-migration.md
Title: Understand how the automatic migration process for your Azure Monitor classic alerts works description: Learn how the automatic migration process works. Previously updated : 02/14/2021 Last updated : 2/23/2022 # Understand the automatic migration process for your classic alert rules
azure-monitor Alerts Classic Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-classic-portal.md
description: Learn how to use Azure portal, CLI or PowerShell to create, view an
Previously updated : 09/06/2021 Last updated : 2/23/2022 # Create, view, and manage classic metric alerts using Azure Monitor
azure-monitor Alerts Classic.Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-classic.overview.md
Title: Overview of classic alerts in Azure Monitor description: Classic alerts are being deprecated. Alerts enable you to monitor Azure resource metrics, events, or logs and be notified when a condition you specify is met. Previously updated : 02/14/2021 Last updated : 2/23/2022 # What are classic alerts in Microsoft Azure?
azure-monitor Alerts Dynamic Thresholds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-dynamic-thresholds.md
description: Create Alerts with machine learning based Dynamic Thresholds
Previously updated : 01/12/2021 Last updated : 2/23/2022 # Metric Alerts with Dynamic Thresholds in Azure Monitor
azure-monitor Alerts Log Api Switch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-api-switch.md
description: Learn how to switch to the log alerts management to ScheduledQueryR
Previously updated : 02/22/2022 Last updated : 2/23/2022 # Upgrade legacy rules management to the current Log Alerts API from legacy Log Analytics Alert API
azure-monitor Alerts Log Create Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-create-templates.md
description: Learn how to use a Resource Manager template to create a log alert
Previously updated : 07/12/2021 Last updated : 2/23/2022 # Create a log alert with a Resource Manager template
azure-monitor Alerts Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-query.md
description: Recommendations for writing efficient alert queries
Previously updated : 09/22/2020 Last updated : 2/23/2022 # Optimizing log alert queries This article describes how to write and convert [Log Alert](./alerts-unified-log.md) queries to achieve optimal performance. Optimized queries reduce latency and load of alerts, which run frequently.
azure-monitor Alerts Log Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-webhook.md
Previously updated : 09/22/2020 Last updated : 2/23/2022 # Webhook actions for log alert rules
azure-monitor Alerts Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log.md
description: Use Azure Monitor to create, view, and manage log alert rules
Previously updated : 01/25/2022 Last updated : 2/23/2022 # Create, view, and manage log alerts using Azure Monitor
azure-monitor Alerts Manage Alerts Previous Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alerts-previous-version.md
Title: View and manage log alert rules created in previous versions| Microsoft D
description: Use the Azure Monitor portal to manage log alert rules created in earlier versions Previously updated : 12/14/2021 Last updated : 2/23/2022 # Manage alert rules created in previous versions
azure-monitor Alerts Managing Alert Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-managing-alert-instances.md
Title: Manage alert instances in Azure Monitor description: Managing alert instances across Azure Previously updated : 09/24/2018 Last updated : 2/23/2022
azure-monitor Alerts Managing Alert States https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-managing-alert-states.md
Title: Manage alert and smart group states
description: Managing the states of the alert and smart group instances Previously updated : 09/24/2018 Last updated : 2/23/2022
azure-monitor Alerts Managing Smart Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-managing-smart-groups.md
Title: Manage smart groups (preview) description: Managing Smart Groups created over your alert instances Previously updated : 09/24/2018 Last updated : 2/23/2022
azure-monitor Alerts Metric Create Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-create-templates.md
Previously updated : 8/02/2021 Last updated : 2/23/2022 # Create a metric alert with a Resource Manager template
azure-monitor Alerts Metric Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-logs.md
description: Tutorial on creating near-real time metric alerts on popular log an
Previously updated : 06/15/2021 Last updated : 2/23/2022
azure-monitor Alerts Metric Multiple Time Series Single Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-multiple-time-series-single-rule.md
description: Alert at scale using a single alert rule for multiple time series
Previously updated : 01/11/2021 Last updated : 2/23/2022 # Monitor multiple time-series in a single metric alert rule
azure-monitor Alerts Metric Near Real Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-near-real-time.md
Previously updated : 10/14/2021 Last updated : 2/23/2022 # Supported resources for metric alerts in Azure Monitor
azure-monitor Alerts Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric.md
description: Learn how to use Azure portal or CLI to create, view, and manage me
Previously updated : 11/07/2021 Last updated : 2/23/2022 # Create, view, and manage metric alerts using Azure Monitor
azure-monitor Alerts Prepare Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-prepare-migration.md
description: Learn how to modify your webhooks, logic apps, and runbooks to prep
Previously updated : 02/14/2021 Last updated : 2/23/2022 # Prepare your logic apps and runbooks for migration of classic alert rules
azure-monitor Alerts Rate Limiting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-rate-limiting.md
description: Understand how Azure limits the number of possible SMS, email, Azur
Previously updated : 3/12/2018 Last updated : 2/23/2022 # Rate limiting for Voice, SMS, emails, Azure App push notifications and webhook posts
azure-monitor Alerts Resource Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-resource-move.md
Previously updated : 02/14/2021 Last updated : 2/23/2022 # How to update alert rules or alert processing rules when their target resource moves to a different Azure region
azure-monitor Alerts Smart Detections Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-smart-detections-migration.md
Title: Upgrade Azure Monitor Application Insights smart detection to alerts (Preview) | Microsoft Docs description: Learn about the steps required to upgrade your Azure Monitor Application Insights smart detection to alert rules Previously updated : 05/30/2021 Last updated : 2/23/2022 # Migrate Azure Monitor Application Insights smart detection to alerts (Preview)
azure-monitor Alerts Smartgroups Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-smartgroups-overview.md
Title: Smart groups (preview) description: Smart Groups are aggregations of alerts that help you reduce alert noise Previously updated : 05/15/2018 Last updated : 2/23/2022 # Smart groups (preview)
azure-monitor Alerts Sms Behavior https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-sms-behavior.md
Previously updated : 02/16/2018 Last updated : 2/23/2022 # SMS Alert Behavior in Action Groups
azure-monitor Alerts Troubleshoot Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-log.md
description: Common issues, errors, and resolutions for log alert rules in Azure
Previously updated : 01/25/2022 Last updated : 2/23/2022
azure-monitor Alerts Troubleshoot Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-metric.md
description: Common issues with Azure Monitor metric alerts and possible solutio
Previously updated : 2/15/2022 Last updated : 2/23/2022 # Troubleshooting problems in Azure Monitor metric alerts
azure-monitor Alerts Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot.md
description: Common issues with Azure Monitor alerts and possible solutions.
Previously updated : 03/16/2020 Last updated : 2/23/2022 # Troubleshooting problems in Azure Monitor alerts
azure-monitor Alerts Understand Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-understand-migration.md
Title: Understand migration for Azure Monitor alerts description: Understand how the alerts migration works and troubleshoot problems. Previously updated : 09/06/2021 Last updated : 2/23/2022
azure-monitor Alerts Unified Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-unified-log.md
description: Trigger emails, notifications, call websites URLs (webhooks), or au
Previously updated : 01/25/2022 Last updated : 2/23/2022 # Log alerts in Azure Monitor
azure-monitor Alerts Using Migration Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-using-migration-tool.md
description: Learn how to use the voluntary migration tool to migrate your class
Previously updated : 02/14/2020 Last updated : 2/23/2022 # Use the voluntary migration tool to migrate your classic alert rules
azure-monitor Alerts Webhooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-webhooks.md
description: Learn how to reroute Azure metric alerts to other, non-Azure system
Previously updated : 09/06/2021 Last updated : 2/23/2022 # Call a webhook with a classic metric alert in Azure Monitor
azure-monitor Api Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/api-alerts.md
Title: Using Log Analytics Alert REST API description: The Log Analytics Alert REST API allows you to create and manage alerts in Log Analytics, which is part of Azure Monitor. This article provides details of the API and several examples for performing different operations. Previously updated : 09/22/2020 Last updated : 2/23/2022
azure-monitor It Service Management Connector Secure Webhook Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/it-service-management-connector-secure-webhook-connections.md
Title: IT Service Management Connector - Secure Export in Azure Monitor description: This article shows you how to connect your ITSM products/services with Secure Export in Azure Monitor to centrally monitor and manage ITSM work items. Previously updated : 09/08/2020 Last updated : 2/23/2022
azure-monitor Itsm Connector Secure Webhook Connections Azure Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsm-connector-secure-webhook-connections-azure-configuration.md
Title: IT Service Management Connector - Secure Export in Azure Monitor - Azure Configurations description: This article shows you how to configure Azure in order to connect your ITSM products/services with Secure Export in Azure Monitor to centrally monitor and manage ITSM work items. Previously updated : 01/03/2021 Last updated : 2/23/2022
azure-monitor Itsmc Connections Cherwell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections-cherwell.md
Title: Connect Cherwell with IT Service Management Connector description: This article provides information about how to connect Cherwell with the IT Service Management Connector (ITSMC) in Azure Monitor to centrally monitor and manage the ITSM work items. Previously updated : 12/21/2020 Last updated : 2/23/2022
azure-monitor Itsmc Connections Provance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections-provance.md
Title: Connect Provance with IT Service Management Connector description: This article provides information about how to connect Provance with the IT Service Management Connector (ITSMC) in Azure Monitor to centrally monitor and manage the ITSM work items. Previously updated : 12/21/2020 Last updated : 2/23/2022
azure-monitor Itsmc Connections Scsm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections-scsm.md
Title: Connect SCSM with IT Service Management Connector description: This article provides information about how to connect SCSM with the IT Service Management Connector (ITSMC) in Azure Monitor to centrally monitor and manage the ITSM work items. Previously updated : 12/21/2020 Last updated : 2/23/2022
azure-monitor Itsmc Connections Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections-servicenow.md
Title: Connect ServiceNow with IT Service Management Connector description: Learn how to connect ServiceNow with the IT Service Management Connector (ITSMC) in Azure Monitor to centrally monitor and manage ITSM work items. Previously updated : 12/21/2020 Last updated : 2/23/2022
azure-monitor Itsmc Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections.md
Title: IT Service Management Connector in Azure Monitor description: This article provides information about how to connect your ITSM products/services with the IT Service Management Connector (ITSMC) in Azure Monitor to centrally monitor and manage the ITSM work items. Previously updated : 05/12/2020 Last updated : 2/23/2022
azure-monitor Itsmc Connector Deletion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connector-deletion.md
Title: Delete unused ITSM connectors description: This article explains how to delete ITSM connectors and the action groups that are associated with them. Previously updated : 12/29/2020 Last updated : 2/23/2022
azure-monitor Itsmc Dashboard Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-dashboard-errors.md
description: Learn about common errors that exist in the IT Service Management C
Previously updated : 01/18/2021 Last updated : 2/23/2022
azure-monitor Itsmc Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-dashboard.md
description: Learn how to use the IT Service Management Connector dashboard to i
Previously updated : 01/15/2021 Last updated : 2/23/2022
azure-monitor Itsmc Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-definition.md
Title: IT Service Management Connector in Log Analytics description: This article provides an overview of IT Service Management Connector (ITSMC) and information about using it to monitor and manage ITSM work items in Log Analytics and resolve problems quickly. Previously updated : 05/24/2018 Last updated : 2/23/2022
azure-monitor Itsmc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-overview.md
Title: IT Service Management Connector overview description: This article provides an overview of IT Service Management Connector (ITSMC). Previously updated : 12/16/2020 Last updated : 2/23/2022
azure-monitor Itsmc Resync Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-resync-servicenow.md
description: Reset the connection to ServiceNow so alerts in Microsoft Azure can
Previously updated : 01/17/2021 Last updated : 2/23/2022 # How to manually fix sync problems
azure-monitor Itsmc Secure Webhook Connections Bmc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-secure-webhook-connections-bmc.md
Title: IT Service Management Connector - Secure Export in Azure Monitor - Configuration with BMC description: This article shows you how to connect your ITSM products/services with BMC on Secure Export in Azure Monitor. Previously updated : 12/31/2020 Last updated : 2/23/2022
azure-monitor Itsmc Secure Webhook Connections Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-secure-webhook-connections-servicenow.md
Title: IT Service Management Connector - Secure Export in Azure Monitor - Configuration with ServiceNow description: This article shows you how to connect your ITSM products/services with ServiceNow on Secure Export in Azure Monitor. Previously updated : 12/31/2020 Last updated : 2/23/2022
azure-monitor Itsmc Service Manager Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-service-manager-script.md
Title: Create web app for Service Management Connector description: Create a Service Manager Web app using an automated script to connect with IT Service Management Connector in Azure, and centrally monitor and manage the ITSM work items. Previously updated : 12/06/2021 Last updated : 2/23/2022
azure-monitor Itsmc Synced Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-synced-data.md
Title: Data synced from your ITSM product to LA Workspace description: This article provides an overview of the data synced from your ITSM product to your Log Analytics workspace. Previously updated : 12/29/2020 Last updated : 2/23/2022
azure-monitor Itsmc Troubleshoot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-troubleshoot-overview.md
description: Learn how to resolve common problems in IT Service Management Conne
Previously updated : 04/12/2020 Last updated : 2/23/2022 # Troubleshoot problems in IT Service Management Connector
azure-monitor Monitoring Classic Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/monitoring-classic-retirement.md
description: Description of the retirement of classic monitoring services and fu
Previously updated : 02/14/2021 Last updated : 2/23/2022 # Unified alerting & monitoring in Azure Monitor replaces classic alerting & monitoring
azure-monitor Resource Manager Alerts Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/resource-manager-alerts-log.md
description: Sample Azure Resource Manager templates to deploy Azure Monitor log
Previously updated : 07/12/2021 Last updated : 2/23/2022
azure-monitor Resource Manager Alerts Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/resource-manager-alerts-metric.md
Previously updated : 02/10/2022 Last updated : 2/23/2022 # Resource Manager template samples for metric alert rules in Azure Monitor
azure-monitor Annotations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/annotations.md
Title: Release annotations for Application Insights | Microsoft Docs
description: Learn how to create annotations to track deployment or other significant events with Application Insights. Last updated 07/20/2021- # Release annotations for Application Insights
azure-monitor Apm Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/apm-tables.md
Title: Azure Monitor Application Insights workspace-based resource schema
description: Learn about the new table structure and schema for Azure Monitor Application Insights workspace-based resources. Last updated 05/09/2020- # Workspace-based resource changes
-Prior to the introduction of [workspace-based Application Insights resources](create-workspace-resource.md), Application Insights data was stored separate from other log data in Azure Monitor. Both are based on Azure Data Explorer and use the same Kusto Query Language (KQL). This is described in [Logs in Azure Monitor](../logs/data-platform-logs.md).
+Prior to the introduction of [workspace-based Application Insights resources](create-workspace-resource.md), Application Insights data was stored separately from other log data in Azure Monitor. Both are based on Azure Data Explorer and use the same Kusto Query Language (KQL). With workspace-based Application Insights resources, data is stored in a Log Analytics workspace alongside other monitoring data and application data. This simplifies your configuration by letting you analyze data across multiple solutions more easily and take advantage of workspace capabilities.
+
+## Classic data structure
+The structure of a Log Analytics workspace is described in [Log Analytics workspace overview](../logs/log-analytics-workspace-overview.md). For a classic application, the data is not stored in a Log Analytics workspace. It uses the same query language, and you create and run queries by using the same Log Analytics tool in the Azure portal. Data items for classic applications are stored separately from each other. The general structure is the same as for workspace-based applications, although the table and column names are different.
+
+> [!NOTE]
+> The classic Application Insights experience includes backward compatibility for your resource queries, workbooks, and log-based alerts. To query or view against the [new workspace-based table structure or schema](../app/apm-tables.md), you must first go to your Log Analytics workspace. During the preview, selecting **Logs** from within the Application Insights panes will give you access to the classic Application Insights query experience. For more information, see [Query scope](../logs/scope.md).
-With workspace-based Application Insights resources data is stored in a Log Analytics workspace with other monitoring data and application data. This simplifies your configuration by allowing you to more easily analyze data across multiple solutions and to leverage the capabilities of workspaces.
+[![Diagram that shows the Azure Monitor Logs structure for Application Insights.](../logs/media/data-platform-logs/logs-structure-ai.png)](../logs/media/data-platform-logs/logs-structure-ai.png#lightbox)
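To illustrate the note above about querying the new workspace-based tables from the Log Analytics workspace, the following is a minimal sketch, assuming the Az.OperationalInsights PowerShell module and an existing workspace; the resource group and workspace names are placeholders, not values from this article.

```powershell
# Minimal sketch: run a KQL query against the workspace-based AppRequests table.
# Assumes the Az.OperationalInsights module is installed and you are signed in (Connect-AzAccount).
# "my-rg" and "my-workspace" are placeholder names.
$workspace = Get-AzOperationalInsightsWorkspace -ResourceGroupName "my-rg" -Name "my-workspace"

# Workspace-based Application Insights resources write request telemetry to the AppRequests table;
# the classic resource exposed the same data through the 'requests' table instead.
$query = "AppRequests | where TimeGenerated > ago(1h) | summarize count() by ResultCode"

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspace.CustomerId -Query $query
$result.Results | Format-Table
```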
## Table structure
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
Title: Application Insights overview description: Learn how Application Insights in Azure Monitor provides performance management and usage tracking of your live web application.-- Last updated 01/10/2022
azure-monitor App Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md
Last updated 03/15/2019 ms.devlang: csharp, java, javascript, python -
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
ms.devlang: csharp Last updated 10/12/2021- # Application Insights for ASP.NET Core applications
azure-monitor Asp Net Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-exceptions.md
ms.devlang: csharp Last updated 05/19/2021- # Diagnose exceptions in web apps with Application Insights
azure-monitor Asp Net Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md
ms.devlang: csharp Last updated 05/08/2019- # Explore .NET/.NET Core and Python trace logs in Application Insights
azure-monitor Asp Net Troubleshoot No Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-troubleshoot-no-data.md
ms.devlang: csharp Last updated 05/21/2020- # Troubleshooting no data - Application Insights for .NET/.NET Core
azure-monitor Asp Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md
Last updated 10/12/2021 ms.devlang: csharp - # Configure Application Insights for your ASP.NET website
azure-monitor Auto Collect Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/auto-collect-dependencies.md
ms.devlang: csharp, java, javascript Last updated 05/06/2020- # Dependency auto-collection
azure-monitor Automate Custom Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/automate-custom-reports.md
Title: Automate custom reports with Application Insights data
description: Automate custom daily/weekly/monthly reports with Azure Monitor Application Insights data Last updated 05/20/2019-
azure-monitor Automate With Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/automate-with-logic-apps.md
Title: Automate Azure Application Insights processes by using Logic Apps
description: Learn how you can quickly automate repeatable processes by adding the Application Insights connector to your logic app. Last updated 03/11/2019- # Automate Application Insights processes by using Logic Apps
azure-monitor Availability Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-alerts.md
Title: Set up availability alerts with Azure Application Insights | Microsoft Do
description: Learn how to set up web tests in Application Insights. Get alerts if a website becomes unavailable or responds slowly. Last updated 06/19/2019-
azure-monitor Availability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-overview.md
Title: Application Insights availability tests
description: Set up recurring web tests to monitor availability and responsiveness of your app or website. Last updated 07/13/2021- # Application Insights availability tests
azure-monitor Availability Private Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-private-test.md
Title: Private availability testing - Azure Monitor Application Insights
description: Learn how to use availability tests on internal servers that run behind a firewall with private testing. Last updated 05/14/2021- # Private testing
azure-monitor Azure Functions Supported Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-functions-supported-features.md
Title: Azure Application Insights - Azure Functions Supported Features description: Application Insights Supported Features for Azure Functions -- Last updated 4/23/2019- ms.devlang: csharp
azure-monitor Azure Vm Vmss Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-vm-vmss-apps.md
Last updated 08/26/2019 ms.devlang: csharp, java, javascript, python - # Deploy the Azure Monitor Application Insights Agent on Azure virtual machines and Azure virtual machine scale sets
$publicCfgHashtable =
@{ "appFilter"= ".*"; "machineFilter"= ".*";
- "virtualPathFilter": ".*",
- "instrumentationSettings" : {
- "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/" # Application Insights connection string, create new Application Insights resource if you don't have one. https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/microsoft.insights%2Fcomponents
+ "virtualPathFilter"= ".*";
+ "instrumentationSettings" = @{
+ "connectionString"= "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/" # Application Insights connection string, create new Application Insights resource if you don't have one. https://ms.portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/microsoft.insights%2Fcomponents
} } )
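The change above replaces JSON punctuation (`:`, `,`, and `{ }` nesting) with PowerShell hashtable syntax (`=`, `;`, and nested `@{ }`), which the `$publicCfgHashtable` variable requires. As a minimal sketch of the corrected syntax in isolation (the variable name and connection string below are placeholders, not values from the article):

```powershell
# Minimal sketch: PowerShell hashtable syntax for the fragment changed above.
# The connection string is a placeholder; use the one from your own Application Insights resource.
$settingsFragment = @{
    "virtualPathFilter"       = ".*"
    "instrumentationSettings" = @{
        "connectionString" = "InstrumentationKey=00000000-0000-0000-0000-000000000000"
    }
}

# Serializing the hashtable shows the equivalent JSON; -Depth keeps nested @{ } blocks from being truncated.
$settingsFragment | ConvertTo-Json -Depth 5
```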
azure-monitor Change Analysis Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/change-analysis-troubleshoot.md
Title: Troubleshoot Application Change Analysis - Azure Monitor description: Learn how to troubleshoot problems in Application Change Analysis. -- Last updated 02/17/2022 - # Troubleshoot Application Change Analysis (preview)
azure-monitor Change Analysis Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/change-analysis-visualizations.md
Title: Visualizations for Application Change Analysis - Azure Monitor description: Learn how to use visualizations in Application Change Analysis in Azure Monitor. -- Last updated 01/11/2022- # Visualizations for Application Change Analysis (preview)
azure-monitor Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/change-analysis.md
Title: Use Application Change Analysis in Azure Monitor to find web-app issues | Microsoft Docs description: Use Application Change Analysis in Azure Monitor to troubleshoot application issues on live sites on Azure App Service. -- Last updated 01/11/2022 - # Use Application Change Analysis in Azure Monitor (preview)
azure-monitor Cloudservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/cloudservices.md
ms.devlang: csharp Last updated 09/05/2018- # Application Insights for Azure cloud services
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md
Title: Monitor your apps without code changes - auto-instrumentation for Azure M
description: Overview of auto-instrumentation for Azure Monitor Application Insights - codeless application performance management Last updated 08/31/2021- # What is auto-instrumentation for Azure Monitor application insights?
azure-monitor Configuration With Applicationinsights Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/configuration-with-applicationinsights-config.md
Last updated 05/22/2019 ms.devlang: csharp -
azure-monitor Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/console.md
Last updated 05/21/2020 ms.devlang: csharp -
azure-monitor Continuous Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/continuous-monitoring.md
Title: Continuous monitoring of your DevOps release pipeline with Azure Pipeline
description: Provides instructions to quickly set up continuous monitoring with Application Insights Last updated 05/01/2020- # Add continuous monitoring to your release pipeline
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
description: Learn about the steps required to upgrade your Azure Monitor Applic
Last updated 09/23/2020 - # Migrate to workspace-based Application Insights resources
azure-monitor Create New Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-new-resource.md
description: Manually set up Application Insights monitoring for a new live appl
Last updated 02/10/2021 - # Create an Application Insights resource
azure-monitor Custom Data Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-data-correlation.md
Title: Azure Application Insights | Microsoft Docs description: Correlate data from Application Insights to other datasets, such as data enrichment or lookup tables, non-Application Insights data sources, and custom data. -- Last updated 08/08/2018-
azure-monitor Custom Operations Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-operations-tracking.md
ms.devlang: csharp Last updated 11/26/2019-
azure-monitor Data Model Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-context.md
Title: Azure Application Insights Telemetry Data Model - Telemetry Context | Mic
description: Application Insights telemetry context data model Last updated 05/15/2017-
azure-monitor Data Model Dependency Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-dependency-telemetry.md
Title: Azure Monitor Application Insights Dependency Data Model
description: Application Insights data model for dependency telemetry Last updated 04/17/2017-
azure-monitor Data Model Event Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-event-telemetry.md
Title: Azure Application Insights Telemetry Data Model - Event Telemetry | Micro
description: Application Insights data model for event telemetry Last updated 04/25/2017-
azure-monitor Data Model Exception Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-exception-telemetry.md
Title: Azure Application Insights Exception Telemetry Data model
description: Application Insights data model for exception telemetry Last updated 04/25/2017-
azure-monitor Data Model Metric Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-metric-telemetry.md
Title: Data model for metric telemetry - Azure Application Insights
description: Application Insights data model for metric telemetry Last updated 04/25/2017-
azure-monitor Data Model Request Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-request-telemetry.md
Title: Data model for request telemetry - Azure Application Insights
description: Application Insights data model for request telemetry Last updated 01/07/2019-
azure-monitor Data Model Trace Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-trace-telemetry.md
Title: Azure Application Insights Data Model - Trace Telemetry
description: Application Insights data model for trace telemetry Last updated 04/25/2017-
azure-monitor Data Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model.md
Title: Azure Application Insights Telemetry Data Model | Microsoft Docs
description: Application Insights data model overview documentationcenter: .net- - ibiza Last updated 10/14/2019 - # Application Insights telemetry data model
azure-monitor Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/devops.md
Title: Web app performance monitoring - Azure Application Insights
description: How Application Insights fits into the DevOps cycle Last updated 12/21/2018- # Deep diagnostics for web apps and services with Application Insights
azure-monitor Diagnostic Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/diagnostic-search.md
Title: Using Search in Azure Application Insights | Microsoft Docs
description: Search and filter raw telemetry sent by your web app. Last updated 07/30/2019- # Using Search in Application Insights
azure-monitor Distributed Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/distributed-tracing.md
Title: Distributed Tracing in Azure Application Insights | Microsoft Docs
description: Provides information about Microsoft's support for distributed tracing through our partnership in the OpenCensus project -- Last updated 09/17/2018-
azure-monitor Eventcounters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/eventcounters.md
description: Monitor system and custom .NET/.NET Core EventCounters in Applicati
Last updated 09/20/2019 - # EventCounters introduction
azure-monitor Export Data Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-data-model.md
Title: Azure Application Insights Data Model | Microsoft Docs
description: Describes properties exported from continuous export in JSON, and used as filters. Last updated 01/08/2019- # Application Insights Export Data Model
azure-monitor Export Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-power-bi.md
Title: Export to Power BI from Azure Application Insights | Microsoft Docs
description: Analytics queries can be displayed in Power BI. Last updated 08/10/2018- # Feed Power BI from Application Insights
azure-monitor Get Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/get-metric.md
Title: Get-Metric in Azure Monitor Application Insights description: Learn how to effectively use the GetMetric() call to capture locally pre-aggregated metrics for .NET and .NET Core applications with Azure Monitor Application Insights - Last updated 04/28/2020 ms.devlang: csharp
azure-monitor Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-addresses.md
Title: IP addresses used by Azure Monitor
description: Server firewall exceptions required by Application Insights Last updated 01/27/2020- # IP addresses used by Azure Monitor
azure-monitor Java 2X Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-agent.md
Last updated 01/10/2019 ms.devlang: java -- # Monitor dependencies, caught exceptions, and method execution times in Java web apps
azure-monitor Java 2X Collectd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-collectd.md
Last updated 03/14/2019 ms.devlang: java --- # collectd: Linux performance metrics in Application Insights [Deprecated]
azure-monitor Java 2X Filter Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-filter-telemetry.md
Last updated 3/14/2019 ms.devlang: java -- # Filter telemetry in your Java web app
azure-monitor Java 2X Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-get-started.md
Last updated 11/22/2020 ms.devlang: java -- # Get started with Application Insights in a Java web project
azure-monitor Java 2X Micrometer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-micrometer.md
ms.devlang: java Last updated 11/01/2018-- # How to use Micrometer with Azure Application Insights Java SDK (not recommended)
azure-monitor Java 2X Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-trace-logs.md
Last updated 05/18/2019 ms.devlang: java -- # Explore Java trace logs in Application Insights
azure-monitor Java 2X Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-troubleshoot.md
Last updated 03/14/2019 ms.devlang: java -- # Troubleshooting and Q and A for Application Insights for Java SDK
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
Last updated 06/24/2021 ms.devlang: java -- # Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications
azure-monitor Java Jmx Metrics Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-jmx-metrics-configuration.md
Last updated 03/16/2021 ms.devlang: java -- # Configuring JMX metrics
azure-monitor Java On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-on-premises.md
ms.devlang: java Last updated 04/16/2020--- # Java codeless application monitoring on-premises - Azure Monitor Application Insights
azure-monitor Java Standalone Arguments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-arguments.md
Last updated 04/16/2020 ms.devlang: java -- # Tips for updating your JVM args - Azure Monitor Application Insights for Java
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
Last updated 11/04/2020 ms.devlang: java -- # Configuration options - Azure Monitor Application Insights for Java
azure-monitor Java Standalone Sampling Overrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-sampling-overrides.md
Title: Sampling overrides (preview) - Azure Monitor Application Insights for Jav
description: Learn to configure sampling overrides in Azure Monitor Application Insights for Java. Last updated 03/22/2021- ms.devlang: java - # Sampling overrides (preview) - Azure Monitor Application Insights for Java
azure-monitor Java Standalone Telemetry Processors Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors-examples.md
Title: Telemetry processor examples - Azure Monitor Application Insights for Jav
description: Explore examples that show telemetry processors in Azure Monitor Application Insights for Java. Last updated 12/29/2020- ms.devlang: java - # Telemetry processor examples - Azure Monitor Application Insights for Java
azure-monitor Java Standalone Telemetry Processors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors.md
Title: Telemetry processors (preview) - Azure Monitor Application Insights for J
description: Learn to configure telemetry processors in Azure Monitor Application Insights for Java. Last updated 10/29/2020- ms.devlang: java - # Telemetry processors (preview) - Azure Monitor Application Insights for Java
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
Last updated 11/25/2020 ms.devlang: java -- # Upgrading from Application Insights Java 2.x SDK
azure-monitor Javascript Angular Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-angular-plugin.md
ibiza Last updated 10/07/2020-- ms.devlang: javascript
azure-monitor Javascript Click Analytics Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-click-analytics-plugin.md
ibiza Last updated 01/14/2021-- ms.devlang: javascript
azure-monitor Javascript React Native Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-react-native-plugin.md
Title: React Native plugin for Application Insights JavaScript SDK description: How to install and use the React Native plugin for Application Insights JavaScript SDK. - ibiza
azure-monitor Javascript React Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-react-plugin.md
Title: React plugin for Application Insights JavaScript SDK description: How to install and use React plugin for Application Insights JavaScript SDK. - ibiza
azure-monitor Javascript Sdk Load Failure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk-load-failure.md
Title: Troubleshooting SDK load failure for JavaScript web applications - Azure Application Insights description: How to troubleshoot SDK load failure for JavaScript web applications -- Last updated 06/05/2020 ms.devlang: javascript
azure-monitor Kubernetes Codeless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/kubernetes-codeless.md
Title: Monitor applications on Azure Kubernetes Service (AKS) with Application I
description: Azure Monitor seamlessly integrates with your application running on Kubernetes, and allows you to spot the problems with your apps in no time. Last updated 05/13/2020- # Zero instrumentation application monitoring for Kubernetes - Azure Monitor Application Insights
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
Title: Diagnose with Live Metrics Stream - Azure Application Insights
description: Monitor your web app in real time with custom metrics, and diagnose issues with a live feed of failures, traces, and events. Last updated 10/12/2021- ms.devlang: csharp
azure-monitor Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md
Title: Monitor applications running on Azure Functions with Application Insights
description: Azure Monitor seamlessly integrates with your application running on Azure Functions, and allows you to monitor the performance and spot the problems with your apps in no time. Last updated 08/27/2021- # Monitoring Azure Functions with Azure Monitor Application Insights
azure-monitor Monitor Web App Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-web-app-availability.md
Title: Monitor availability with URL ping tests - Azure Monitor
description: Set up ping tests in Application Insights. Get alerts if a website becomes unavailable or responds slowly. Last updated 07/13/2021-
azure-monitor Opencensus Python Dependency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python-dependency.md
Title: Dependency Tracking in Azure Application Insights with OpenCensus Python | Microsoft Docs description: Monitor dependency calls for your Python apps via OpenCensus Python. -- Last updated 10/15/2019 ms.devlang: python - # Track dependencies with OpenCensus Python
azure-monitor Opencensus Python Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python-request.md
Title: Incoming Request Tracking in Azure Application Insights with OpenCensus Python | Microsoft Docs description: Monitor request calls for your Python apps via OpenCensus Python. -- Last updated 10/15/2019 ms.devlang: python - # Track incoming requests with OpenCensus Python
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
Last updated 10/12/2021
ms.devlang: python -- # Set up Azure Monitor for your Python application
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Title: Enable Azure Monitor OpenTelemetry for .NET, Node.js, and Python applicat
description: This article provides guidance on how to enable Azure Monitor on applications by using OpenTelemetry. Last updated 10/11/2021-- ms.devlang: csharp, javascript, python
azure-monitor Opentelemetry Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-overview.md
Title: OpenTelemetry with Azure Monitor overview
description: Provides an overview of how to use OpenTelemetry with Azure Monitor. Last updated 10/11/2021-- # OpenTelemetry overview
azure-monitor Overview Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/overview-dashboard.md
Title: Azure Application Insights Overview Dashboard | Microsoft Docs
description: Monitor applications with Azure Application Insights and Overview Dashboard functionality. Last updated 06/03/2019- # Application Insights Overview dashboard
azure-monitor Performance Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/performance-counters.md
Last updated 12/13/2018 ms.devlang: csharp - # System performance counters in Application Insights
azure-monitor Platforms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/platforms.md
Title: 'Application Insights: languages, platforms, and integrations | Microsoft
description: Languages, platforms, and integrations available for Application Insights Last updated 10/29/2021-
azure-monitor Powershell Azure Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/powershell-azure-diagnostics.md
description: Automate configuring Azure Diagnostics to pipe data to Application
Last updated 08/06/2019 - # Using PowerShell to set up Application Insights for Azure Cloud Services
azure-monitor Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/powershell.md
description: Automate creating and managing resources, alerts, and availability
Last updated 05/02/2020 - # Manage Application Insights resources using PowerShell
azure-monitor Pre Aggregated Metrics Log Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pre-aggregated-metrics-log-metrics.md
Title: Log-based and pre-aggregated metrics in Azure Application Insights | Microsoft Docs description: Why to use log-based versus pre-aggregated metrics in Azure Application Insights -- Last updated 09/18/2018-
azure-monitor Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pricing.md
Title: Manage usage and costs for Azure Application Insights | Microsoft Docs
description: Manage telemetry volumes and monitor costs in Application Insights. -- Last updated 02/17/2021
azure-monitor Proactive Application Security Detection Pack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-application-security-detection-pack.md
Title: Security detection Pack with Azure Application Insights
description: Monitor application with Azure Application Insights and smart detection for potential security issues. Last updated 12/12/2017- # Application security detection pack (preview)
azure-monitor Proactive Arm Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-arm-config.md
Title: Smart detection rule settings - Azure Application Insights description: Automate management and configuration of Azure Application Insights smart detection rules with Azure Resource Manager Templates -- Last updated 02/14/2021- # Manage Application Insights smart detection rules using Azure Resource Manager templates
azure-monitor Proactive Cloud Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-cloud-services.md
Title: Alert on issues in Azure Cloud Services using the Azure Diagnostics integ
description: Monitor for issues like startup failures, crashes, and role recycle loops in Azure Cloud Services with Azure Application Insights Last updated 06/07/2018-
azure-monitor Proactive Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-diagnostics.md
Title: Smart detection in Azure Application Insights | Microsoft Docs
description: Application Insights performs automatic deep analysis of your app telemetry and warns you of potential problems. Last updated 02/07/2019- # Smart detection in Application Insights
azure-monitor Proactive Email Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-email-notification.md
Title: Smart Detection notification change - Azure Application Insights description: Change to the default notification recipients from Smart Detection. Smart Detection lets you monitor application traces with Azure Application Insights for unusual patterns in trace telemetry. -- Last updated 02/14/2021- # Smart Detection e-mail notification change
azure-monitor Proactive Exception Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-exception-volume.md
Title: Abnormal rise in exception volume - Azure Application Insights
description: Monitor application exceptions with smart detection in Azure Application Insights for unusual patterns in exception volume. Last updated 12/08/2017- # Abnormal rise in exception volume (preview)
azure-monitor Proactive Failure Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-failure-diagnostics.md
Title: Smart Detection - failure anomalies, in Application Insights | Microsoft
description: Alerts you to unusual changes in the rate of failed requests to your web app, and provides diagnostic analysis. No configuration is needed. Last updated 12/18/2018-
azure-monitor Proactive Performance Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-performance-diagnostics.md
Title: Smart detection - performance anomalies | Microsoft Docs
description: Smart detection analyzes your app telemetry and warns you of potential problems. This feature needs no setup. Last updated 05/04/2017-
azure-monitor Proactive Potential Memory Leak https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-potential-memory-leak.md
Title: Detect memory leak - Azure Application Insights smart detection
description: Monitor applications with Azure Application Insights for potential memory leaks. Last updated 12/12/2017- # Memory leak detection (preview)
azure-monitor Proactive Trace Severity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-trace-severity.md
Title: Degradation in trace severity ratio - Azure Application Insights
description: Monitor application traces with Azure Application Insights for unusual patterns in trace telemetry with smart detection. Last updated 11/27/2017- # Degradation in trace severity ratio (preview)
azure-monitor Profiler Aspnetcore Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-aspnetcore-linux.md
description: A conceptual overview and step-by-step tutorial on how to use Appli
ms.devlang: csharp -- Last updated 02/23/2018-
azure-monitor Profiler Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-bring-your-own-storage.md
Title: Configure BYOS (Bring Your Own Storage) for Profiler & Snapshot Debugger description: Configure BYOS (Bring Your Own Storage) for Profiler & Snapshot Debugger -- Last updated 01/14/2021-
azure-monitor Profiler Cloudservice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-cloudservice.md
Title: Profile live Azure Cloud Services with Application Insights | Microsoft D
description: Enable Application Insights Profiler for Azure Cloud Services. -- Last updated 08/06/2018-
azure-monitor Profiler Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-overview.md
Title: Profile production apps in Azure with Application Insights Profiler description: Identify the hot path in your web server code with a low-footprint profiler. -- Last updated 08/06/2018-
azure-monitor Profiler Servicefabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-servicefabric.md
Title: Profile live Azure Service Fabric apps with Application Insights
description: Enable Profiler for a Service Fabric application -- Last updated 08/06/2018-
azure-monitor Profiler Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-settings.md
Title: Use the Azure Application Insights Profiler settings pane | Microsoft Docs description: See Profiler status and start profiling sessions -- Last updated 12/08/2021-
azure-monitor Profiler Trackrequests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-trackrequests.md
Title: Write code to track requests with Azure Application Insights | Microsoft Docs description: Write code to track requests with Application Insights so you can get profiles for your requests. -- Last updated 08/06/2018-
azure-monitor Profiler Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-troubleshooting.md
Title: Troubleshoot problems with Azure Application Insights Profiler description: This article presents troubleshooting steps and information to help developers enable and use Application Insights Profiler. -- Last updated 08/06/2018-
azure-monitor Profiler Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-vm.md
Title: Profile web apps on an Azure VM - Application Insights Profiler description: Profile web apps on an Azure VM by using Application Insights Profiler. -- Last updated 11/08/2019-
azure-monitor Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler.md
Title: Profile live Azure App Service apps with Application Insights | Microsoft Docs description: Profile live apps on Azure App Service with Application Insights Profiler. -- Last updated 08/06/2018
azure-monitor Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/release-notes.md
description: The latest updates for Application Insights SDKs.
Last updated 07/27/2020- # Release Notes - Application Insights
azure-monitor Remove Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/remove-application-insights.md
Title: Remove Application Insights in Visual Studio - Azure Monitor
description: How to remove Application Insights SDK for ASP.NET and ASP.NET Core in Visual Studio. Last updated 04/06/2020- # How to remove Application Insights in Visual Studio
azure-monitor Resource Manager Function App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/resource-manager-function-app.md
Title: Resource Manager template samples for Azure Function App + Application In
description: Sample Azure Resource Manager templates to deploy an Azure Function App with an Application Insights resource. Last updated 08/06/2020- # Resource Manager template sample for creating Azure Function apps with Application Insights monitoring
azure-monitor Resource Manager Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/resource-manager-web-app.md
description: Sample Azure Resource Manager templates to deploy an Azure App Serv
Last updated 08/06/2020- # Resource Manager template samples for creating Azure App Services web apps with Application Insights monitoring
azure-monitor Resources Roles Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/resources-roles-access-control.md
description: Owners, contributors and readers of your organization's insights.
Last updated 02/14/2019 - # Resources, roles, and access control in Application Insights
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
Title: Connection strings in Azure Application Insights | Microsoft Docs description: How to use connection strings. -- Last updated 01/17/2020
azure-monitor Separate Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/separate-resources.md
Title: How to design your Application Insights deployment - One vs many resource
description: Direct telemetry to different resources for development, test, and production stamps. Last updated 05/11/2020- # How many Application Insights resources should I deploy
azure-monitor Sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sharepoint.md
Title: Monitor a SharePoint site with Application Insights
description: Start monitoring a new application with a new instrumentation key Last updated 09/08/2020- # Monitor a SharePoint site with Application Insights
azure-monitor Sla Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sla-report.md
Title: Downtime, SLA, and outage workbook - Application Insights
description: Calculate and report SLA for Web Test through a single pane of glass across your Application Insights resources and Azure subscriptions. Last updated 05/4/2021- # Downtime, SLA, and outages workbook
azure-monitor Snapshot Collector Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-collector-release-notes.md
Title: Release Notes for Microsoft.ApplicationInsights.SnapshotCollector NuGet package - Application Insights description: Release notes for the Microsoft.ApplicationInsights.SnapshotCollector NuGet package used by the Application Insights Snapshot Debugger. -- Last updated 11/10/2020
azure-monitor Snapshot Debugger Appservice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger-appservice.md
Title: Enable Snapshot Debugger for .NET apps in Azure App Service | Microsoft Docs description: Enable Snapshot Debugger for .NET apps in Azure App Service -- Last updated 03/26/2019-
azure-monitor Snapshot Debugger Function App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger-function-app.md
Title: Enable Snapshot Debugger for .NET and .NET Core apps in Azure Functions | Microsoft Docs description: Enable Snapshot Debugger for .NET and .NET Core apps in Azure Functions -- Last updated 12/18/2020
azure-monitor Snapshot Debugger Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger-troubleshoot.md
Title: Troubleshoot Azure Application Insights Snapshot Debugger description: This article presents troubleshooting steps and information to help developers enable and use Application Insights Snapshot Debugger. -- Last updated 03/07/2019-
azure-monitor Snapshot Debugger Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger-upgrade.md
Title: Upgrading Azure Application Insights Snapshot Debugger description: How to upgrade Snapshot Debugger for .NET apps to the latest version on Azure App Services, or via Nuget packages -- Last updated 03/28/2019-
azure-monitor Snapshot Debugger Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger-vm.md
Title: Enable Snapshot Debugger for .NET apps in Azure Service Fabric, Cloud Service, and Virtual Machines | Microsoft Docs description: Enable Snapshot Debugger for .NET apps in Azure Service Fabric, Cloud Service, and Virtual Machines -- Last updated 03/07/2019-
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger.md
description: Debug snapshots are automatically collected when exceptions are thr
Last updated 10/12/2021---
azure-monitor Standard Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/standard-metrics.md
description: This article lists Azure Application Insights metrics with supporte
Last updated 07/03/2019- # Application Insights standard metrics
azure-monitor Status Monitor V2 Api Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-api-reference.md
Title: Azure Application Insights .Net Agent API reference description: Application Insights Agent API reference. Monitor website performance without redeploying the website. Works with ASP.NET web apps hosted on-premises, in VMs, or on Azure. -- Last updated 04/23/2019- # Azure Monitor Application Insights Agent API Reference
azure-monitor Status Monitor V2 Detailed Instructions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-detailed-instructions.md
Title: Azure Application Insights Agent detailed instructions | Microsoft Docs description: Detailed instructions for getting started with Application Insights Agent. Monitor website performance without redeploying the website. Works with ASP.NET web apps hosted on-premises, in VMs, or on Azure. -- Last updated 04/23/2019- # Application Insights Agent (formerly named Status Monitor v2): Detailed instructions
azure-monitor Status Monitor V2 Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-get-started.md
Title: Azure Application Insights Agent - getting started | Microsoft Docs description: A quickstart guide for Application Insights Agent. Monitor website performance without redeploying the website. Works with ASP.NET web apps hosted on-premises, in VMs, or on Azure. -- Last updated 01/22/2021
azure-monitor Status Monitor V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-overview.md
Title: Azure Application Insights Agent overview | Microsoft Docs description: An overview of Application Insights Agent. Monitor website performance without redeploying the website. Works with ASP.NET web apps hosted on-premises, in VMs, or on Azure. -- Last updated 09/16/2019- # Deploy Azure Monitor Application Insights Agent for on-premises servers
azure-monitor Status Monitor V2 Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-troubleshoot.md
Title: Azure Application Insights Agent troubleshooting and known issues | Microsoft Docs description: The known issues of Application Insights Agent and troubleshooting examples. Monitor website performance without redeploying the website. Works with ASP.NET web apps hosted on-premises, in VMs, or on Azure. -- Last updated 04/23/2019- # Troubleshooting Application Insights Agent (formerly named Status Monitor v2)
azure-monitor Telemetry Channels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/telemetry-channels.md
Last updated 05/14/2019 ms.devlang: csharp -
azure-monitor Transaction Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/transaction-diagnostics.md
Title: Azure Application Insights Transaction Diagnostics | Microsoft Docs
description: Application Insights end-to-end transaction diagnostics Last updated 01/19/2018-
azure-monitor Tutorial Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-alert.md
Title: Send alerts from Azure Application Insights | Microsoft Docs
description: Tutorial to send alerts in response to errors in your application using Azure Application Insights. Last updated 04/10/2019-
azure-monitor Tutorial App Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-app-dashboards.md
Title: Create custom dashboards in Azure Application Insights | Microsoft Docs
description: Tutorial to create custom KPI dashboards using Azure Application Insights. Last updated 09/30/2020-
azure-monitor Tutorial Runtime Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-runtime-exceptions.md
Title: Diagnose run-time exceptions using Azure Application Insights | Microsoft
description: Tutorial to find and diagnose run-time exceptions in your application using Azure Application Insights. Last updated 09/19/2017-
azure-monitor Usage Cohorts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-cohorts.md
Title: Application Insights usage cohorts | Microsoft Docs
description: Analyze different sets or users, sessions, events, or operations that have something in common Last updated 07/30/2021- # Application Insights cohorts
azure-monitor Usage Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-flows.md
Title: Application Insights User Flows analyzes navigation flows description: Analyze how users navigate between the pages and features of your web app. -- Last updated 07/30/2021- # Analyze user navigation patterns with User Flows in Application Insights
azure-monitor Usage Funnels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-funnels.md
Title: Application Insights Funnels
description: Learn how you can use Funnels to discover how customers are interacting with your application. Last updated 07/30/2021- # Discover how customers are using your application with Application Insights Funnels
azure-monitor Usage Heart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-heart.md
Title: HEART analytics workbook
description: Product teams use the HEART Workbook to measure success across five user-centric dimensions to deliver better software. Last updated 11/11/2021- # Analyzing product usage with HEART
azure-monitor Usage Impact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-impact.md
Title: Application Insights Usage Impact - Azure Monitor description: Analyze how different properties potentially impact conversion rates for parts of your apps. -- Last updated 07/30/2021- # Impact analysis with Application Insights
azure-monitor Usage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-overview.md
Title: Usage analysis with Application Insights | Azure Monitor
description: Understand your users and what they do with your app. Last updated 07/30/2021- # Usage analysis with Application Insights
azure-monitor Usage Retention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-retention.md
Title: Analyze web app user retention with Application Insights
description: How many users return to your app? Last updated 07/30/2021- # User retention analysis for web applications with Application Insights
azure-monitor Usage Segmentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-segmentation.md
Title: User, session, and event analysis in Application Insights description: Demographic analysis of users of your web app. -- Last updated 07/30/2021- # Users, sessions, and events analysis in Application Insights
azure-monitor Usage Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-troubleshoot.md
Title: Troubleshoot user analytics tools - Application Insights
description: Troubleshooting guide - analyzing site and app usage with Application Insights. Last updated 07/30/2021- # Troubleshoot user behavior analytics tools in Application Insights
azure-monitor Web App Extension Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/web-app-extension-release-notes.md
Title: Release Notes for Azure web app extension - Application Insights
description: Releases notes for Azure Web Apps Extension for runtime instrumentation with Application Insights. Last updated 06/26/2020- # Release notes for Azure Web App extension for Application Insights
azure-monitor Work Item Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/work-item-integration.md
Title: Work Item Integration - Application Insights
description: Learn how to create work items in GitHub or Azure DevOps with Application Insights data embedded in them. Last updated 06/27/2021- # Work Item Integration
azure-monitor Worker Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md
ms.devlang: csharp Last updated 05/11/2020- # Application Insights for Worker Service applications (non-HTTP applications)
azure-monitor Azure Monitor Operations Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-operations-manager.md
You enable Application Insights for each of your business applications. It ident
As you gain familiarity with Azure Monitor, you start to create alert rules that can replace some management pack functionality and evolve your business processes to use the new monitoring platform. This allows you to start removing machines and management packs from the Operations Manager management group. You continue to use management packs for critical server software and on-premises infrastructure, while watching for new features in Azure Monitor that will allow you to retire additional functionality. ## Monitor Azure services
-Azure services actually require Azure Monitor to collect telemetry, and it's enabled the moment that you create an Azure subscription. The [Activity log](essentials/activity-log.md) is automatically collected for the subscription, and [platform metrics](essentials/data-platform-metrics.md) are automatically collected from any Azure resources you create. You can immediately start using [metrics explorer](essentials/metrics-getting-started.md), which is similar to performance views in the Operations console, but it provides interactive analysis and [advanced aggregations](essentials/metrics-charts.md) of data. [Create a metric alert](alerts/alerts-metric.md) to be notified when a value crosses a threshold or [add a chart to an Azure dashboard](essentials/metrics-charts.md#pinning-to-dashboards) for visibility.
+Azure services actually require Azure Monitor to collect telemetry, and it's enabled the moment that you create an Azure subscription. The [Activity log](essentials/activity-log.md) is automatically collected for the subscription, and [platform metrics](essentials/data-platform-metrics.md) are automatically collected from any Azure resources you create. You can immediately start using [metrics explorer](essentials/metrics-getting-started.md), which is similar to performance views in the Operations console, but it provides interactive analysis and [advanced aggregations](essentials/metrics-charts.md) of data. [Create a metric alert](alerts/alerts-metric.md) to be notified when a value crosses a threshold or [save a chart to a dashboard or workbook](essentials/metrics-charts.md#saving-to-dashboards-or-workbooks) for visibility.
[![Metrics explorer](media/azure-monitor-operations-manager/metrics-explorer.png)](media/azure-monitor-operations-manager/metrics-explorer.png#lightbox)
azure-monitor Best Practices Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-data-collection.md
Some monitoring of Azure resources is available automatically with no configurat
[![Deploy Azure resource monitoring](media/best-practices-data-collection/best-practices-azure-resources.png)](media/best-practices-data-collection/best-practices-azure-resources.png#lightbox) ### Collect tenant and subscription logs
-While the [Azure Active Directory logs](../active-directory/reports-monitoring/overview-reports.md) for your tenant and the [Activity log](essentials/platform-logs-overview.md) for your subscription are collected automatically, sending them to a Log Analytics workspace enables you to analyze these events with other log data using log queries in Log Analytics. This also allows you to create log query alerts which is the only way to alert on Azure Active Directory logs and provide more complex logic than Activity log alerts.
+While the [Azure Active Directory logs](../active-directory/reports-monitoring/overview-reports.md) for your tenant and the [Activity log](essentials/platform-logs-overview.md) for your subscription are collected automatically, sending them to a Log Analytics workspace enables you to analyze these events with other log data using log queries in Log Analytics. This also allows you to create log query alerts, which are the only way to alert on Azure Active Directory logs and provide more complex logic than Activity log alerts.
There's no cost for sending the Activity log to a workspace, but there is a data ingestion and retention charge for Azure Active Directory logs.
Azure Monitor monitors your custom applications using [Application Insights](app
### Create an application resource Application Insights is the feature of Azure Monitor for monitoring your cloud native and hybrid applications.
-You must create a resource in Application Insights for each application that you're going to monitor. Log data collected by Application Insights is stored in Azure Monitor Logs for a workspace-based application. Log data for classic applications is stored separate from your Log Analytics workspace as described in [Data structure](logs/data-platform-logs.md#data-structure).
+You must create a resource in Application Insights for each application that you're going to monitor. Log data collected by Application Insights is stored in Azure Monitor Logs for a workspace-based application. Log data for classic applications is stored separate from your Log Analytics workspace as described in [Data structure](logs/log-analytics-workspace-overview.md#data-structure).
When you create the application, you must select whether to use classic or workspace-based. See [Create an Application Insights resource](app/create-new-resource.md) to create a classic application. See [Workspace-based Application Insights resources (preview)](app/create-workspace-resource.md) to create a workspace-based application.
To enable monitoring for an application, you must decide whether you will use co
- [Other platforms](app/platforms.md) ### Configure availability testing
-Availability tests in Application Insights are recurring tests that monitor the availability and responsiveness of your application at regular intervals from points around the world. You can create a simple ping test for free or create a sequence of web requests to simulate user transactions which has associated cost.
+Availability tests in Application Insights are recurring tests that monitor the availability and responsiveness of your application at regular intervals from points around the world. You can create a simple ping test for free or create a sequence of web requests to simulate user transactions, which have an associated cost.
See [Monitor the availability of any website](app/monitor-web-app-availability.md) for a summary of the different kinds of tests and details on creating them.
azure-monitor Data Collection Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-endpoint-overview.md
+
+ Title: Data collection endpoints in Azure Monitor (preview)
+description: Overview of data collection endpoints (DCEs) in Azure Monitor including their contents and structure and how you can create and work with them.
+ Last updated : 02/21/2022++++
+# Data collection endpoints in Azure Monitor (preview)
+Data Collection Endpoints (DCEs) allow you to uniquely configure ingestion settings for Azure Monitor. This article provides an overview of data collection endpoints including their contents and structure and how you can create and work with them.
+
+## Workflows that use DCEs
+The following workflows currently use DCEs:
+
+- [Azure Monitor agent](../agents/data-collection-rule-azure-monitor-agent.md)
+- [Custom logs](../logs/custom-logs-overview.md)
+
+## Components of a data collection endpoint
+A data collection endpoint includes the following components.
+
+| Component | Description |
+|:|:|
+| Configuration access endpoint | The endpoint used to access the configuration service to fetch associated data collection rules (DCR). Example: `<unique-dce-identifier>.<regionname>.handler.control` |
+| Logs ingestion endpoint | The endpoint used to ingest logs to Log Analytics workspace(s). Example: `<unique-dce-identifier>.<regionname>.ingest` |
+| Network Access Control Lists (ACLs) | Network access control rules for the endpoints |
++
+## Regionality
+Data collection endpoints are Azure Resource Manager resources created within specific regions. An endpoint in a given region can only be **associated with machines in the same region**, although you can have more than one endpoint within the same region, depending on your needs.
+
+## Limitations
+Data collection endpoints only support Log Analytics as a destination for collected data. [Custom Metrics (preview)](../essentials/metrics-custom-overview.md) collected and uploaded via the Azure Monitor Agent are not currently controlled by DCEs nor can they be configured over private links.
+
+## Create endpoint in Azure portal
+
+1. In the **Azure Monitor** menu in the Azure portal, select **Data Collection Endpoint** from the **Settings** section. Click **Create** to create a new data collection endpoint.
+
+ [![Data Collection Endpoints](media/data-collection-endpoint-overview/data-collection-endpoint-overview.png)](media/data-collection-endpoint-overview/data-collection-endpoint-overview.png#lightbox)
+
+2. Click **Create** to create a new endpoint. Provide a **Rule name** and specify a **Subscription**, **Resource Group** and **Region**. This specifies where the DCE will be created.
+
+ [![Data Collection Rule Basics](media/data-collection-endpoint-overview/data-collection-endpoint-basics.png)](media/data-collection-endpoint-overview/data-collection-endpoint-basics.png#lightbox)
+
+3. Click **Review + create** to review the details of the data collection endpoint. Click **Create** to create it.
+
+## Create endpoint and association using REST API
+
+> [!NOTE]
+> The data collection endpoint should be created in the **same region** where your virtual machines exist.
+
+1. Create data collection endpoint(s) using these [DCE REST APIs](/cli/azure/monitor/data-collection/endpoint).
+2. Create association(s) to link the endpoint(s) to your target machines or resources, using these [DCRA REST APIs](/rest/api/monitor/datacollectionruleassociations/create#examples).
++
+## Sample data collection endpoint
+The sample data collection endpoint below is for virtual machines with the Azure Monitor agent, with public network access disabled so that the agent only uses private links to communicate and send data to Azure Monitor/Log Analytics.
+
+```json
+{
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.Insights/dataCollectionEndpoints/myCollectionEndpoint",
+ "name": "myCollectionEndpoint",
+ "type": "Microsoft.Insights/dataCollectionEndpoints",
+ "location": "eastus",
+ "tags": {
+ "tag1": "A",
+ "tag2": "B"
+ },
+ "properties": {
+ "configurationAccess": {
+ "endpoint": "https://mycollectionendpoint-abcd.eastus-1.control.monitor.azure.com"
+ },
+ "logsIngestion": {
+ "endpoint": "https://mycollectionendpoint-abcd.eastus-1.ingest.monitor.azure.com"
+ },
+ "networkAcls": {
+ "publicNetworkAccess": "Disabled"
+ }
+ },
+ "systemData": {
+ "createdBy": "user1",
+ "createdByType": "User",
+ "createdAt": "yyyy-mm-ddThh:mm:ss.sssssssZ",
+ "lastModifiedBy": "user2",
+ "lastModifiedByType": "User",
+ "lastModifiedAt": "yyyy-mm-ddThh:mm:ss.sssssssZ"
+ },
+ "etag": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+}
+```
+
+## Next steps
+- [Associate endpoint to machines](../agents/data-collection-rule-azure-monitor-agent.md#create-rule-and-association-in-azure-portal)
+- [Add endpoint to AMPLS resource](../logs/private-link-configure.md#connect-azure-monitor-resources)
azure-monitor Data Collection Rule Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-overview.md
+
+ Title: Data Collection Rules in Azure Monitor
+description: Overview of data collection rules (DCRs) in Azure Monitor including their contents and structure and how you can create and work with them.
+ Last updated : 02/21/2022+++
+# Data collection rules in Azure Monitor
+[Data Collection Rules (DCRs)](../essentials/data-collection-rule-overview.md) provide an [ETL](/azure/architecture/data-guide/relational-data/etl)-like pipeline in Azure Monitor, allowing you to define the way that data coming into Azure Monitor should be handled. Depending on the type of workflow, DCRs may specify where data should be sent and may filter or transform data before it's stored in Azure Monitor Logs. Some data collection rules will be created and managed by Azure Monitor, while you may create others to customize data collection for your particular requirements. This article describes DCRs including their contents and structure and how you can create and work with them.
+
+## Types of data collection rules
+There are currently two types of data collection rule in Azure Monitor:
+
+- **Standard DCR**. Used with different workflows that send data to Azure Monitor. Workflows currently supported are [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) and [custom logs](../logs/custom-logs-overview.md).
+
+- **Workspace transformation DCR**. Used with a Log Analytics workspace to apply transformations to workflows that don't currently support DCRs.
+
+## Structure of a data collection rule
+Data collection rules are formatted in JSON. While you may not need to interact with them directly, there are scenarios where you may need to directly edit a data collection rule. See [Data collection rule structure](data-collection-rule-structure.md) for a description of this structure and different elements.
+
+## Permissions
+When using programmatic methods to create data collection rules and associations, you require the following permissions:
+
+| Built-in Role | Scope(s) | Reason |
+|:|:|:|
+| [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) | <ul><li>Subscription and/or</li><li>Resource group and/or </li><li>An existing data collection rule</li></ul> | Create or edit data collection rules |
+| [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor)<br>[Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator) | <ul><li>Virtual machines, virtual machine scale sets</li><li>Arc-enabled servers</li></ul> | Deploy associations (that is, to assign rules to the machine) |
+| Any role that includes the action *Microsoft.Resources/deployments/** | <ul><li>Subscription and/or</li><li>Resource group and/or </li><li>An existing data collection rule</li></ul> | Deploy ARM templates |
+
+## Limits
+For limits that apply to each data collection rule, see [Azure Monitor service limits](../service-limits.md#data-collection-rules).
+
+## Creating a data collection rule
+The following articles describe different scenarios for creating data collection rules. In some cases, the data collection rule may be created for you, while in others you may need to create and edit it yourself.
+
+| Workflow | Resources |
+|:|:|
+| Azure Monitor agent | [Configure data collection for the Azure Monitor agent](../agents/data-collection-rule-azure-monitor-agent.md)<br>[Use Azure Policy to install Azure Monitor agent and associate with DCR](../agents/azure-monitor-agent-manage.md#using-azure-policy) |
+| Custom logs | [Configure custom logs using the Azure portal](../logs/tutorial-custom-logs.md)<br>[Configure custom logs using Resource Manager templates and REST API](../logs/tutorial-custom-logs-api.md) |
+| Workspace transformation | [Configure ingestion-time transformations using the Azure portal](../logs/tutorial-ingestion-time-transformations.md)<br>[Configure ingestion-time transformations using Resource Manager templates and REST API](../logs/tutorial-ingestion-time-transformations-api.md) |
++
+## Programmatically work with DCRs
+See the following resources for programmatically working with DCRs.
+
+- Directly edit the data collection rule in JSON and [submit using the REST API](/rest/api/monitor/datacollectionrules).
+- Create DCR and associations with [Azure CLI](https://github.com/Azure/azure-cli-extensions/blob/master/src/monitor-control-service/README.md).
+- Create DCR and associations with Azure PowerShell.
+ - [Get-AzDataCollectionRule](https://github.com/Azure/azure-powershell/blob/master/src/Monitor/Monitor/help/Get-AzDataCollectionRule.md)
+ - [New-AzDataCollectionRule](https://github.com/Azure/azure-powershell/blob/master/src/Monitor/Monitor/help/New-AzDataCollectionRule.md)
+ - [Set-AzDataCollectionRule](https://github.com/Azure/azure-powershell/blob/master/src/Monitor/Monitor/help/Set-AzDataCollectionRule.md)
+ - [Update-AzDataCollectionRule](https://github.com/Azure/azure-powershell/blob/master/src/Monitor/Monitor/help/Update-AzDataCollectionRule.md)
+ - [Remove-AzDataCollectionRule](https://github.com/Azure/azure-powershell/blob/master/src/Monitor/Monitor/help/Remove-AzDataCollectionRule.md)
+ - [Get-AzDataCollectionRuleAssociation](https://github.com/Azure/azure-powershell/blob/master/src/Monitor/Monitor/help/Get-AzDataCollectionRuleAssociation.md)
+ - [New-AzDataCollectionRuleAssociation](https://github.com/Azure/azure-powershell/blob/master/src/Monitor/Monitor/help/New-AzDataCollectionRuleAssociation.md)
+ - [Remove-AzDataCollectionRuleAssociation](https://github.com/Azure/azure-powershell/blob/master/src/Monitor/Monitor/help/Remove-AzDataCollectionRuleAssociation.md)
+++
+## Data resiliency and high availability
+Data collection rules are stored regionally, and are available in all public regions where Log Analytics is supported. Government regions and clouds are not currently supported. A rule gets created and stored in the region you specify, and is backed up to the [paired-region](../../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) within the same geography. The service is deployed to all three [availability zones](../../availability-zones/az-overview.md#availability-zones) within the region, making it a **zone-redundant service** which further adds to high availability.
+
+### Single region data residency
+This preview feature enables storing customer data in a single region and is currently only available in the Southeast Asia Region (Singapore) of the Asia Pacific Geo and the Brazil South (Sao Paulo State) Region of the Brazil Geo. Single-region data residency is enabled by default in these regions.
++
+## Next steps
+
+- [Read about the detailed structure of a data collection rule.](data-collection-rule-structure.md)
+- [Get details on transformations in a data collection rule.](data-collection-rule-transformations.md)
azure-monitor Data Collection Rule Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-structure.md
+
+ Title: Structure of a data collection rule in Azure Monitor (preview)
+description: Details on the structure of different kinds of data collection rule in Azure Monitor.
+++ Last updated : 02/22/2022+++++
+# Structure of a data collection rule in Azure Monitor (preview)
+[Data Collection Rules (DCRs)](data-collection-rule-overview.md) in Azure Monitor define the way that data coming into Azure Monitor should be handled. Some data collection rules will be created and managed by Azure Monitor, while you may create others to customize data collection for your particular requirements. This article describes the structure of DCRs for creating and editing data collection rules in those cases where you need to work with them directly.
++
+## Custom logs
+A DCR for [custom logs](../logs/custom-logs-overview.md) contains the following sections:
+### streamDeclarations
+This section contains the declaration of all the different types of data that will be sent via the HTTP endpoint directly into Log Analytics. Each stream is an object whose key represents the stream name (which must begin with *Custom-*) and whose value is the full list of top-level properties contained in the JSON data that will be sent. Note that the shape of the data you send to the endpoint doesn't need to match that of the destination table. Rather, the output of the transformation that is applied on top of the input data needs to match the destination shape. The possible data types that can be assigned to the properties are `string`, `int`, `long`, `real`, `boolean`, `dynamic`, and `datetime`.
+
+### destinations
+This section contains a declaration of all the destinations where the data will be sent. Only Log Analytics is currently supported as a destination. Each Log Analytics destination will require the full Workspace Resource ID, as well as a friendly name that will be used elsewhere in the DCR to refer to this workspace.
+
+### dataFlows
+This section ties the other sections together. It defines the following for each stream declared in the `streamDeclarations` section (see the sketch after this list):
+
+- `destination` from the `destinations` section where the data will be sent.
+- `transformKql`, which is the [transformation](data-collection-rule-transformations.md) applied to convert the data from the input shape described in the `streamDeclarations` section to the shape of the target table.
+- `outputStream`, which describes the table in the workspace specified under the `destination` property that the data will be ingested into. The value of `outputStream` has the form `Microsoft-[tableName]` when data is being ingested into a standard Log Analytics table, or `Custom-[tableName]` when ingesting data into a custom-created table. Only one destination is allowed per stream.
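+
+Putting these sections together, the following fragment is a minimal sketch of what the custom logs portion of a DCR might look like. The stream name, workspace resource ID, table name, and transformation here are assumptions for illustration only, not values defined elsewhere in this article.
+
+```json
+{
+  "properties": {
+    "streamDeclarations": {
+      "Custom-MyAppLogs": {
+        "columns": [
+          { "name": "TimeGenerated", "type": "datetime" },
+          { "name": "Message", "type": "string" }
+        ]
+      }
+    },
+    "destinations": {
+      "logAnalytics": [
+        {
+          "workspaceResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.OperationalInsights/workspaces/myWorkspace",
+          "name": "myWorkspace"
+        }
+      ]
+    },
+    "dataFlows": [
+      {
+        "streams": [ "Custom-MyAppLogs" ],
+        "destinations": [ "myWorkspace" ],
+        "transformKql": "source | where isnotempty(Message)",
+        "outputStream": "Custom-MyTable_CL"
+      }
+    ]
+  }
+}
+```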
+
+## Azure Monitor agent
+ A DCR for [Azure Monitor agent](../agents/data-collection-rule-azure-monitor-agent.md) contains the following sections:
+
+### Data sources
+Unique source of monitoring data with its own format and method of exposing its data. Examples of a data source include Windows event log, performance counters, and syslog. Each data source matches a particular data source type as described below.
+
+Each data source has a data source type. Each type defines a unique set of properties that must be specified for each data source. The data source types currently available are shown in the following table.
+
+| Data source type | Description |
+|:|:|
+| extension | VM extension-based data source |
+| performanceCounters | Performance counters for both Windows and Linux |
+| syslog | Syslog events on Linux |
+| windowsEventLogs | Windows event log |
++
+### Streams
+Unique handle that describes a set of data sources that will be transformed and schematized as one type. Each data source requires one or more streams, and one stream may be used by multiple data sources. All data sources in a stream share a common schema. Use multiple streams, for example, when you want to send a particular data source to multiple tables in the same Log Analytics workspace.
+
+### Destinations
+Set of destinations where the data should be sent. Examples include a Log Analytics workspace and Azure Monitor Metrics. Multiple destinations are allowed for multi-homing scenarios.
+
+### Data flows
+Definition of which streams should be sent to which destinations.
+
+### Endpoint
+The HTTPS endpoint for the DCR, used by the custom logs API. The DCR is applied to any data sent to that endpoint.
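+
+As a rough illustration of how the sections above relate, the following is a minimal sketch of an agent DCR fragment that collects a single performance counter into a Log Analytics workspace. The data source name, counter, stream, and workspace values are assumptions for illustration, not a definitive agent configuration.
+
+```json
+{
+  "properties": {
+    "dataSources": {
+      "performanceCounters": [
+        {
+          "name": "basicPerfCounters",
+          "streams": [ "Microsoft-Perf" ],
+          "samplingFrequencyInSeconds": 60,
+          "counterSpecifiers": [ "\\Processor(_Total)\\% Processor Time" ]
+        }
+      ]
+    },
+    "destinations": {
+      "logAnalytics": [
+        {
+          "workspaceResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.OperationalInsights/workspaces/myWorkspace",
+          "name": "myWorkspace"
+        }
+      ]
+    },
+    "dataFlows": [
+      {
+        "streams": [ "Microsoft-Perf" ],
+        "destinations": [ "myWorkspace" ]
+      }
+    ]
+  }
+}
+```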
+++++
+## Next steps
+
+- [Overview of data collection rules including methods for creating them.](data-collection-rule-overview.md)
azure-monitor Data Collection Rule Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-transformations.md
+
+ Title: Data collection rule transformations
+description: Use transformations in a data collection rule in Azure Monitor to filter and modify incoming data.
+ Last updated : 02/21/2022+++
+# Data collection rule transformations in Azure Monitor (preview)
+Transformations in a [data collection rule (DCR)](data-collection-rule-overview.md) allow you to filter or modify incoming data before it's stored in a Log Analytics workspace. This article describes how to build transformations in a DCR, including details and limitations of the Kusto Query Language (KQL) used for the transform statement.
+
+## Basic concepts
+Data transformations are defined using a Kusto Query Language (KQL) statement that is applied individually to each entry in the data source. The statement must account for the format of the incoming data and create output in the structure of the target table.
+
+## Transformation structure
+The input stream is represented by a virtual table named `source` with columns matching the input data stream definition. Following is a typical example of a transformation. This example includes the following functionality:
+
+- Filters the incoming data with a [where](/azure/data-explorer/kusto/query/whereoperator) statement
+- Adds a new column using the [extend](/azure/data-explorer/kusto/query/extendoperator) operator
+- Formats the output to match the columns of the target table using the [project](/azure/data-explorer/kusto/query/projectoperator) operator
+
+```kusto
+source
+| where severity == "Critical"
+| extend Properties = parse_json(properties)
+| project
+ TimeGenerated = todatetime(["time"]),
+ Category = category,
+ StatusDescription = StatusDescription,
+ EventName = name,
+ EventId = tostring(Properties.EventId)
+```
++
+## KQL limitations
+Since the transformation is applied to each record individually, it can't use any KQL operators that act on multiple records. Only operators that take a single row as input and return no more than one row are supported. For example, [summarize](/azure/data-explorer/kusto/query/summarizeoperator) isn't supported since it summarizes multiple records. See [Supported KQL features](#supported-kql-features) for a complete list of supported features.
+
+### Inline reference table
+The [datatable](/azure/data-explorer/kusto/query/datatableoperator?pivots=azuremonitor) operator isn't supported in the subset of KQL available to use in transformations. This would normally be used in KQL to define an inline query-time table. Use dynamic literals instead to work around this limitation.
+
+For example, the following isn't supported in a transformation:
+
+```kusto
+let galaxy = datatable (country:string,entity:string)['ES','Spain','US','United States'];
+source
+| join kind=inner (galaxy) on $left.Location == $right.country
+| extend Galaxy_CF = ['entity']
+```
+You can instead use the following statement, which is supported and provides the same functionality:
+
+```kusto
+let galaxyDictionary = parse_json('{"ES": "Spain","US": "United States"}');
+source
+| extend Galaxy_CF = galaxyDictionary[Location]
+```
+
+### has operator
+Transformations don't currently support [has](/azure/data-explorer/kusto/query/has-operator). Use [contains](/azure/data-explorer/kusto/query/contains-operator) instead, which is supported and provides similar functionality.
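+
+For example, where a query in full KQL might filter with `has`, a transformation can use `contains`. The following is a minimal sketch; `Message` is a hypothetical string column in the input stream:
+
+```kusto
+// In full KQL this filter might have been written as: source | where Message has "error"
+source
+| where Message contains "error"
+```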
++
+### Handling dynamic data
+Since the properties of type [dynamic](/azure/data-explorer/kusto/query/scalar-data-types/dynamic) aren't supported in the input stream schema, you need alternate methods for strings containing JSON.
+
+Consider the following input:
+
+```json
+{
+ "TimeGenerated" : "2021-11-07T09:13:06.570354Z",
+ "Message": "Houston, we have a problem",
+ "AdditionalContext": {
+ "Level": 2,
+ "DeviceID": "apollo13"
+ }
+}
+```
+
+To access the properties in *AdditionalContext*, define it as a string-typed column in the input stream:
+
+```json
+"columns": [
+ {
+ "name": "TimeGenerated",
+ "type": "datetime"
+ },
+ {
+ "name": "Message",
+ "type": "string"
+ },
+ {
+ "name": "AdditionalContext",
+ "type": "string"
+ }
+]
+```
+
+The content of the *AdditionalContext* column can now be parsed and used in the KQL transformation:
+
+```kusto
+source
+| extend parsedAdditionalContext = parse_json(AdditionalContext)
+| extend Level = toint(parsedAdditionalContext.Level)
+| extend DeviceId = tostring(parsedAdditionalContext.DeviceID)
+```
+
+### Dynamic literals
+[Dynamic literals](/azure/data-explorer/kusto/query/scalar-data-types/dynamic#dynamic-literals) aren't supported, but you can use the [parse_json function](/azure/data-explorer/kusto/query/parsejsonfunction) as a workaround.
+
+For example, the following query isn't supported:
+
+```kql
+print d=dynamic({"a":123, "b":"hello", "c":[1,2,3], "d":{}})
+ ```
+
+The following query is supported and provides the same functionality:
+
+```kql
+print d=parse_json('{"a":123, "b":"hello", "c":[1,2,3], "d":{}}')
+```
+
+## Supported KQL features
+
+### Supported statements
+
+#### let statement
+The right-hand side of [let](/azure/data-explorer/kusto/query/letstatement) can be a scalar expression, a tabular expression, or a user-defined function. Only user-defined functions with scalar arguments are supported.
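+
+For example, the following sketch shows supported uses of `let`: a scalar value and a user-defined function that takes a scalar argument. The `Level` and `Name` columns are hypothetical columns assumed to exist in the input stream.
+
+```kusto
+let threshold = 2;                               // scalar expression
+let normalizeName = (s: string) { tolower(s) };  // user-defined function with a scalar argument
+source
+| where toint(Level) >= threshold
+| extend NormalizedName = normalizeName(Name)
+```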
+
+#### tabular expression statements
+The only supported data sources for the KQL statement are as follows:
+
+- **source**, which represents the source data. For example:
+
+```kql
+source
+| where ActivityId == "383112e4-a7a8-4b94-a701-4266dfc18e41"
+| project PreciseTimeStamp, Message
+```
+
+- [print](/azure/data-explorer/kusto/query/printoperator) operator, which always produces a single row. For example:
+
+```kusto
+print x = 2 + 2, y = 5 | extend z = exp2(x) + exp2(y)
+```
++
+### Tabular operators
+- [extend](/azure/data-explorer/kusto/query/extendoperator)
+- [project](/azure/data-explorer/kusto/query/projectoperator)
+- [print](/azure/data-explorer/kusto/query/printoperator)
+- [where](/azure/data-explorer/kusto/query/whereoperator)
+- [parse](/azure/data-explorer/kusto/query/parseoperator)
+- [project-away](/azure/data-explorer/kusto/query/projectawayoperator)
+- [project-rename](/azure/data-explorer/kusto/query/projectrenameoperator)
+- columnifexists (use columnifexists instead of column_ifexists)
+
+### Scalar operators
+
+#### Numerical operators
+All [Numerical operators](/azure/data-explorer/kusto/query/numoperators) are supported.
+
+#### Datetime and Timespan arithmetic operators
+All [Datetime and Timespan arithmetic operators](/azure/data-explorer/kusto/query/datetime-timespan-arithmetic) are supported.
+
+#### String operators
+The following [String operators](/azure/data-explorer/kusto/query/datatypes-string-operators) are supported (a brief example follows the list).
+
+- ==
+- !=
+- =~
+- !~
+- contains
+- !contains
+- contains_cs
+- !contains_cs
+- startswith
+- !startswith
+- startswith_cs
+- !startswith_cs
+- endswith
+- !endswith
+- endswith_cs
+- !endswith_cs
+- matches regex
+- in
+- !in
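+
+As an illustration, the sketch below combines a few of these operators in a transformation filter. The `RequestPath` and `UserAgent` columns are hypothetical string columns assumed to exist in the input stream.
+
+```kusto
+// Keep API requests and drop synthetic health-check traffic.
+source
+| where RequestPath startswith "/api/" and UserAgent !contains "HealthCheck"
+```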
+
+#### Bitwise operators
+
+The following [Bitwise operators](/azure/data-explorer/kusto/query/binoperators) are supported.
+
+- binary_and()
+- binary_or()
+- binary_xor()
+- binary_not()
+- binary_shift_left()
+- binary_shift_right()
+
+### Scalar functions
+
+#### Bitwise functions
+
+- [binary_and](/azure/data-explorer/kusto/query/binary-andfunction)
+- [binary_or](/azure/data-explorer/kusto/query/binary-orfunction)
+- [binary_not](/azure/data-explorer/kusto/query/binary-notfunction)
+- [binary_shift_left](/azure/data-explorer/kusto/query/binary-shift-leftfunction)
+- [binary_shift_right](/azure/data-explorer/kusto/query/binary-shift-rightfunction)
+- [binary_xor](/azure/data-explorer/kusto/query/binary-xorfunction)
+
+#### Conversion functions
+
+- [tobool](/azure/data-explorer/kusto/query/toboolfunction)
+- [todatetime](/azure/data-explorer/kusto/query/todatetimefunction)
+- [todouble/toreal](/azure/data-explorer/kusto/query/todoublefunction)
+- [toguid](/azure/data-explorer/kusto/query/toguid)
+- [toint](/azure/data-explorer/kusto/query/toint)
+- [tolong](/azure/data-explorer/kusto/query/tolong)
+- [tostring](/azure/data-explorer/kusto/query/tostringfunction)
+- [totimespan](/azure/data-explorer/kusto/query/totimespanfunction)
+
+#### DateTime and TimeSpan functions
+
+- [ago](/azure/data-explorer/kusto/query/agofunction)
+- [datetime_add](/azure/data-explorer/kusto/query/datetime-addfunction)
+- [datetime_diff](/azure/data-explorer/kusto/query/datetime-difffunction)
+- [datetime_part](/azure/data-explorer/kusto/query/datetime-partfunction)
+- [dayofmonth](/azure/data-explorer/kusto/query/dayofmonthfunction)
+- [dayofweek](/azure/data-explorer/kusto/query/dayofweekfunction)
+- [dayofyear](/azure/data-explorer/kusto/query/dayofyearfunction)
+- [endofday](/azure/data-explorer/kusto/query/endofdayfunction)
+- [endofmonth](/azure/data-explorer/kusto/query/endofmonthfunction)
+- [endofweek](/azure/data-explorer/kusto/query/endofweekfunction)
+- [endofyear](/azure/data-explorer/kusto/query/endofyearfunction)
+- [getmonth](/azure/data-explorer/kusto/query/getmonthfunction)
+- [getyear](/azure/data-explorer/kusto/query/getyearfunction)
+- [hourofday](/azure/data-explorer/kusto/query/hourofdayfunction)
+- [make_datetime](/azure/data-explorer/kusto/query/make-datetimefunction)
+- [make_timespan](/azure/data-explorer/kusto/query/make-timespanfunction)
+- [now](/azure/data-explorer/kusto/query/nowfunction)
+- [startofday](/azure/data-explorer/kusto/query/startofdayfunction)
+- [startofmonth](/azure/data-explorer/kusto/query/startofmonthfunction)
+- [startofweek](/azure/data-explorer/kusto/query/startofweekfunction)
+- [startofyear](/azure/data-explorer/kusto/query/startofyearfunction)
+- [todatetime](/azure/data-explorer/kusto/query/todatetimefunction)
+- [totimespan](/azure/data-explorer/kusto/query/totimespanfunction)
+- [weekofyear](/azure/data-explorer/kusto/query/weekofyearfunction)
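+
+For example, a transformation might use a few of these functions to derive time-based columns. This is only a sketch; `TimeGenerated` is assumed to be a datetime column in the input stream.
+
+```kusto
+// Derive the calendar day of the event and the number of hours between the event time and now.
+source
+| extend EventDay = startofday(TimeGenerated)
+| extend AgeInHours = datetime_diff('hour', now(), TimeGenerated)
+```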
+
+#### Dynamic and array functions
+
+- [array_concat](/azure/data-explorer/kusto/query/arrayconcatfunction)
+- [array_length](/azure/data-explorer/kusto/query/arraylengthfunction)
+- [pack_array](/azure/data-explorer/kusto/query/packarrayfunction)
+- [pack](/azure/data-explorer/kusto/query/packfunction)
+- [parse_json](/azure/data-explorer/kusto/query/parsejsonfunction)
+- [parse_xml](/azure/data-explorer/kusto/query/parse-xmlfunction)
+- [zip](/azure/data-explorer/kusto/query/zipfunction)
+
+#### Mathematical functions
+
+- [abs](/azure/data-explorer/kusto/query/abs-function)
+- [bin/floor](/azure/data-explorer/kusto/query/binfunction)
+- [ceiling](/azure/data-explorer/kusto/query/ceilingfunction)
+- [exp](/azure/data-explorer/kusto/query/exp-function)
+- [exp10](/azure/data-explorer/kusto/query/exp10-function)
+- [exp2](/azure/data-explorer/kusto/query/exp2-function)
+- [isfinite](/azure/data-explorer/kusto/query/isfinitefunction)
+- [isinf](/azure/data-explorer/kusto/query/isinffunction)
+- [isnan](/azure/data-explorer/kusto/query/isnanfunction)
+- [log](/azure/data-explorer/kusto/query/log-function)
+- [log10](/azure/data-explorer/kusto/query/log10-function)
+- [log2](/azure/data-explorer/kusto/query/log2-function)
+- [pow](/azure/data-explorer/kusto/query/powfunction)
+- [round](/azure/data-explorer/kusto/query/roundfunction)
+- [sign](/azure/data-explorer/kusto/query/signfunction)
+
+#### Conditional functions
+
+- [case](/azure/data-explorer/kusto/query/casefunction)
+- [iif](/azure/data-explorer/kusto/query/iiffunction)
+- [max_of](/azure/data-explorer/kusto/query/max-offunction)
+- [min_of](/azure/data-explorer/kusto/query/min-offunction)
+
+#### String functions
+
+- [base64_encodestring](/azure/data-explorer/kusto/query/base64_encode_tostringfunction) (use base64_encodestring instead of base64_encode_tostring)
+- [base64_decodestring](/azure/data-explorer/kusto/query/base64_decode_tostringfunction) (use base64_decodestring instead of base64_decode_tostring)
+- [countof](/azure/data-explorer/kusto/query/countoffunction)
+- [extract](/azure/data-explorer/kusto/query/extractfunction)
+- [extract_all](/azure/data-explorer/kusto/query/extractallfunction)
+- [indexof](/azure/data-explorer/kusto/query/indexoffunction)
+- [isempty](/azure/data-explorer/kusto/query/isemptyfunction)
+- [isnotempty](/azure/data-explorer/kusto/query/isnotemptyfunction)
+- [parse_json](/azure/data-explorer/kusto/query/parsejsonfunction)
+- [split](/azure/data-explorer/kusto/query/splitfunction)
+- [strcat](/azure/data-explorer/kusto/query/strcatfunction)
+- [strcat_delim](/azure/data-explorer/kusto/query/strcat-delimfunction)
+- [strlen](/azure/data-explorer/kusto/query/strlenfunction)
+- [substring](/azure/data-explorer/kusto/query/substringfunction)
+- [tolower](/azure/data-explorer/kusto/query/tolowerfunction)
+- [toupper](/azure/data-explorer/kusto/query/toupperfunction)
+- [hash_sha256](/azure/data-explorer/kusto/query/sha256hashfunction)
+
+#### Type functions
+
+- [gettype](/azure/data-explorer/kusto/query/gettypefunction)
+- [isnotnull](/azure/data-explorer/kusto/query/isnotnullfunction)
+- [isnull](/azure/data-explorer/kusto/query/isnullfunction)
+
+### Identifier quoting
+Use [Identifier quoting](/azure/data-explorer/kusto/query/schema-entities/entity-names?q=identifier#identifier-quoting) as required.
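+
+For example, bracket quoting lets a transformation reference input columns whose names collide with reserved words or contain spaces. The column names below are assumptions for illustration:
+
+```kusto
+// Quote the input columns 'time' and 'operation name' so they can be referenced safely.
+source
+| project TimeGenerated = todatetime(["time"]), OperationName = ["operation name"]
+```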
++++
+## Next steps
+
+- [Create a data collection rule](../agents/data-collection-rule-azure-monitor-agent.md) and an association to it from a virtual machine using the Azure Monitor agent.
azure-monitor Metrics Charts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-charts.md
Previously updated : 06/30/2020 Last updated : 02/21/2022
To change the color of a chart line, select the colored bar in the legend that c
Your customized colors are preserved when you pin the chart to a dashboard. The following section shows how to pin a chart.
-## Pinning to dashboards
+## Saving to dashboards or workbooks
-After you configure a chart, you might want to add it to a dashboard. By pinning a chart to a dashboard, you can make it accessible to your team. You can also gain insights by viewing it in the context of other monitoring telemetry.
+After you configure a chart, you might want to add it to a dashboard or workbook. By adding a chart to a dashboard or workbook, you can make it accessible to your team. You can also gain insights by viewing it in the context of other monitoring telemetry.
-To pin a configured chart to a dashboard, in the upper-right corner of the chart, select **Pin to dashboard**.
+- To pin a configured chart to a dashboard, in the upper-right corner of the chart, select **Save to dashboard** and then **Pin to dashboard**.
+- To save a configured chart to a workbook, in the upper-right corner of the chart, select **Save to dashboard** and then **Save to workbook**.
-![Screenshot showing how to pin a chart to a dashboard.](./media/metrics-charts/036.png)
## Alert rules
azure-monitor Metrics Dynamic Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-dynamic-scope.md
In this example, we filter by TailspinToysDemo. Here, the filter removes metrics
Multiple-resource charts that visualize metrics across resource groups and subscriptions require the user to have *Monitoring Reader* permission at the subscription level. Ensure that all users of the dashboards to which you pin multiple-resource charts have sufficient permissions. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
-To pin your multiple-resource chart to a dashboard, see [Pinning to dashboards](../essentials/metrics-charts.md#pinning-to-dashboards).
+To pin your multiple-resource chart to a dashboard, see [Saving to dashboards or workbooks](../essentials/metrics-charts.md#saving-to-dashboards-or-workbooks).
## Next steps
azure-monitor Metrics Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-getting-started.md
Previously updated : 02/25/2019 Last updated : 02/21/2022
Azure Monitor metrics explorer is a component of the Microsoft Azure portal that
To create a metric chart, from your resource, resource group, subscription, or Azure Monitor view, open the **Metrics** tab and follow these steps:
-1. Click on the "Select a scope" button to open the resource scope picker. This will allow you to select the resource(s) you want to see metrics for. The resource should already be populated if you opened metrics explorer from the resource's menu. To learn how to view metrics across multiple resources, [read this article](./metrics-dynamic-scope.md).
+1. Select the "Select a scope" button to open the resource scope picker. This allows you to select the resource(s) you want to see metrics for. The resource should already be populated if you opened metrics explorer from the resource's menu. To learn how to view metrics across multiple resources, [read this article](./metrics-dynamic-scope.md).
> ![Select a resource](./media/metrics-getting-started/scope-picker.png)
-2. For some resources, you must pick a namespace. The namespace is just a way to organize metrics so that you can easily find them. For example, storage accounts have separate namespaces for storing Files, Tables, Blobs, and Queues metrics. Many resource types only have one namespace.
+1. For some resources, you must pick a namespace. The namespace is just a way to organize metrics so that you can easily find them. For example, storage accounts have separate namespaces for storing Files, Tables, Blobs, and Queues metrics. Many resource types only have one namespace.
-3. Select a metric from a list of available metrics.
+1. Select a metric from a list of available metrics.
> ![Select a metric](./media/metrics-getting-started/metrics-dropdown.png)
-4. Optionally, you can [change the metric aggregation](../essentials/metrics-charts.md#aggregation). For example, you might want your chart to show minimum, maximum, or average values of the metric.
+1. Optionally, you can [change the metric aggregation](../essentials/metrics-charts.md#aggregation). For example, you might want your chart to show minimum, maximum, or average values of the metric.
> [!TIP] > Use the **Add metric** button and repeat these steps if you want to see multiple metrics plotted in the same chart. For multiple charts in one view, select the **Add chart** button on top.
By default, the chart shows the most recent 24 hours of metrics data. Use the **
See [examples of the charts](../essentials/metric-chart-samples.md) that have filtering and splitting applied. The article shows the steps were used to configure the charts. ## Share your metric chart
-There are currently two ways to share your metric chart. Below are the instructions on how to share information from your metrics charts through Excel and a link.
+There are three ways to share your metric chart. See the instructions below on how to share information from your metric charts using Excel, a link, or a workbook.
### Download to Excel
-Click "Share" and select "Download to Excel". Your download should start immediately.
+Select "Share" and "Download to Excel". Your download should start immediately.
-![screenshot on how to share metric chart via excel](./media/metrics-getting-started/share-excel.png)
### Share a link
-Click "Share" and select "Copy link". You should get a notification that the link was copied successfully.
+Select "Share" and "Copy link". You should get a notification that the link was copied successfully.
-![screenshot on how to share metric chart via link](./media/metrics-getting-started/share-link.png)
+
+### Send to workbook
+Select "Share" and "Send to Workbook". The **Send to Workbook** window opens for you to send the metric chart to a new or existing workbook.
+ ## Advanced chart settings
-You can customize chart style, title, and modify advanced chart settings. When done with customization, pin it to a dashboard to save your work. You can also configure metrics alerts. Follow [product documentation](../essentials/metrics-charts.md) to learn about these and other advanced features of Azure Monitor metrics explorer.
+You can customize the chart style and title, and modify advanced chart settings. When you're done with customization, pin the chart to a dashboard or save it to a workbook to save your work. You can also configure metrics alerts. Follow [product documentation](../essentials/metrics-charts.md) to learn about these and other advanced features of Azure Monitor metrics explorer.
## Next steps
azure-monitor Sql Insights Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/sql-insights-enable.md
To learn how to enable SQL Insights, you can also refer to this Data Exposed epi
> [!VIDEO https://docs.microsoft.com/Shows/Data-Exposed/How-to-Set-up-Azure-Monitor-for-SQL-Insights/player?format=ny] ## Create Log Analytics workspace
-SQL insights stores its data in one or more [Log Analytics workspaces](../logs/data-platform-logs.md#log-analytics-and-workspaces). Before you can enable SQL Insights, you need to either [create a workspace](../logs/quick-create-workspace.md) or select an existing one. A single workspace can be used with multiple monitoring profiles, but the workspace and profiles must be located in the same Azure region. To enable and access the features in SQL insights, you must have the [Log Analytics contributor role](../logs/manage-access.md) in the workspace.
+SQL insights stores its data in one or more [Log Analytics workspaces](../logs/data-platform-logs.md#log-analytics-workspaces). Before you can enable SQL Insights, you need to either [create a workspace](../logs/quick-create-workspace.md) or select an existing one. A single workspace can be used with multiple monitoring profiles, but the workspace and profiles must be located in the same Azure region. To enable and access the features in SQL insights, you must have the [Log Analytics contributor role](../logs/manage-access.md) in the workspace.
## Create monitoring user You need a user (login) on the SQL deployments that you want to monitor. Follow the procedures below for different types of SQL deployments.
The profile will store the information that you want to collect from your SQL sy
For example, you might create one profile named *SQL Production* and another named *SQL Staging* with different settings for frequency of data collection, what data to collect, and which workspace to send the data to.
-The profile is stored as a [data collection rule](../agents/data-collection-rule-overview.md) resource in the subscription and resource group you select. Each profile needs the following:
+The profile is stored as a [data collection rule](../essentials/data-collection-rule-overview.md) resource in the subscription and resource group you select. Each profile needs the following:
- Name. Cannot be edited once created. - Location. This is an Azure region.
azure-monitor Sql Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/sql-insights-overview.md
SQL insights performs all monitoring remotely. No agents are installed on the vi
SQL insights uses dedicated monitoring virtual machines to remotely collect data from your SQL resources. Each monitoring virtual machine has the [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) and the Workload Insights (WLI) extension installed.
-The WLI extension includes the open-source [Telegraf agent](https://www.influxdata.com/time-series-platform/telegraf/). SQL insights uses [data collection rules](../agents/data-collection-rule-overview.md) to specify the data collection settings for Telegraf's [SQL Server plug-in](https://www.influxdata.com/integration/microsoft-sql-server/).
+The WLI extension includes the open-source [Telegraf agent](https://www.influxdata.com/time-series-platform/telegraf/). SQL insights uses [data collection rules](../essentials/data-collection-rule-overview.md) to specify the data collection settings for Telegraf's [SQL Server plug-in](https://www.influxdata.com/integration/microsoft-sql-server/).
Different sets of data are available for Azure SQL Database, Azure SQL Managed Instance, and SQL Server. The following tables describe the available data. You can customize which datasets to collect and the frequency of collection when you [create a monitoring profile](sql-insights-enable.md#create-sql-monitoring-profile).
azure-monitor Azure Cli Log Analytics Workspace Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-cli-log-analytics-workspace-sample.md
az monitor log-analytics workspace table update --resource-group ContosoRG \
The retention time is between 30 and 730 days.
-For more information about tables, see [Data structure](./data-platform-logs.md#data-structure).
+For more information about tables, see [Data structure](./log-analytics-workspace-overview.md#data-structure).
+
+## Delete a table
+
+You can delete [Custom Log](custom-logs-overview.md), [Search Results](search-jobs.md) and [Restored Logs](restore.md) tables.
+
+To delete a table, run the [az monitor log-analytics workspace table delete](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-delete) command:
+
+```azurecli
+az monitor log-analytics workspace table delete --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
+ --name MySearchTable_SRCH
+```
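+
+If you prefer the REST API over the CLI, the same table can be removed with the **Tables - Delete** operation. This is a hedged sketch: the api-version matches the preview version used in the related table articles, and the names are the same sample placeholder values used above.
+
+```http
+DELETE https://management.azure.com/subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace/tables/MySearchTable_SRCH?api-version=2021-12-01-preview
+```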
## Export data from selected tables
az monitor log-analytics workspace data-export delete --resource-group ContosoRG
For more information about data export, see [Log Analytics workspace data export in Azure Monitor](./logs-data-export.md). + ## Manage a linked service Linked services define a relationship between the workspace and another Azure resource. Azure Monitor Logs and Azure resources use this connection in their operations. Example uses of linked services include an automation account and a workspace association to customer-managed keys.
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
+
+ Title: Configure Basic Logs in Azure Monitor (Preview)
+description: Configure a table for Basic Logs in Azure Monitor.
++ Last updated : 01/13/2022+++
+# Configure Basic Logs in Azure Monitor (Preview)
+
+Setting a table's [log data plan](log-analytics-workspace-overview.md#log-data-plans-preview) to *Basic Logs* lets you save on the cost of storing high-volume verbose logs you use for debugging, troubleshooting and auditing, but not for analytics and alerts. This article describes how to configure Basic Logs for a particular table in your Log Analytics workspace.
+
+> [!IMPORTANT]
+> You can switch a table's plan once a week.
+
+## Which tables support Basic Logs?
+All tables in your Log Analytics workspace are Analytics tables by default. You can configure particular tables to use Basic Logs. You can't configure a table for Basic Logs if Azure Monitor relies on that table for specific features.
+
+You can currently configure the following tables for Basic Logs:
+
+- All tables created with the [Data Collection Rule (DCR)-based custom logs API.](custom-logs-overview.md)
+- [ContainerLog](/azure/azure-monitor/reference/tables/containerlog) and [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2), which [Container Insights](../containers/container-insights-overview.md) uses and which include verbose text-based log records.
+- [AppTraces](/azure/azure-monitor/reference/tables/apptraces), which contains freeform log records for application traces in Application Insights.
+
+> [!NOTE]
+> Tables created with the [Data Collector API](data-collector-api.md) do not support Basic Logs.
+
+## Set table configuration
+To configure a table for Basic Logs or Analytics Logs, call the **Tables - Update** API:
+
+```http
+PATCH https://management.azure.com/subscriptions/<subscriptionId>/resourcegroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/tables/<tableName>?api-version=2021-12-01-preview
+```
+> [!IMPORTANT]
+> Use the Bearer token for authentication. Read more about [using Bearer tokens](https://social.technet.microsoft.com/wiki/contents/articles/51140.azure-rest-management-api-the-quickest-way-to-get-your-bearer-token.aspx).
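+
+For example, you can request a token for Azure Resource Manager with the OAuth 2.0 client credentials flow. This is a minimal sketch, assuming an Azure AD application with a client secret; the tenant ID, client ID, and client secret values are placeholders.
+
+```http
+POST https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token
+Content-Type: application/x-www-form-urlencoded
+
+client_id={client-id}&scope=https%3A%2F%2Fmanagement.azure.com%2F.default&client_secret={client-secret}&grant_type=client_credentials
+```
+
+Pass the `access_token` value from the response in the `Authorization: Bearer` header of the **Tables - Update** call.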
+
+### Request body
+|Name | Type | Description |
+| | | |
+|properties.plan | string | The table plan. Possible values are *Analytics* and *Basic*.|
+
+### Example
+This example configures the `ContainerLog` table for Basic Logs.
+#### Sample request
+
+```http
+PATCH https://management.azure.com/subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace/tables/ContainerLog?api-version=2021-12-01-preview
+```
+
+Use this request body to change to Basic Logs:
+
+```http
+{
+ "properties": {
+ "plan": "Basic"
+ }
+}
+```
+
+Use this request body to change to Analytics Logs:
+
+```http
+{
+ "properties": {
+ "plan": "Analytics"
+ }
+}
+```
+
+#### Sample response
+This is the response for a table changed to Basic Logs.
+
+Status code: 200
+
+```http
+{
+ "properties": {
+ "retentionInDays": 8,
+ "totalRetentionInDays": 30,
+ "archiveRetentionInDays": 22,
+ "plan": "Basic",
+ "lastPlanModifiedDate": "2022-01-01T14:34:04.37",
+ "schema": {...}
+ },
+ "id": "subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace",
+ "name": "ContainerLog"
+}
+```
++
+## Check table configuration
+# [Portal](#tab/portal-1)
+
+To check the configuration of a table in the Azure portal:
+
+1. From the **Azure Monitor** menu, select **Logs** and select your workspace for the [scope](scope.md). See [Log Analytics tutorial](log-analytics-tutorial.md#view-table-information) for a walkthrough.
+1. Open the **Tables** tab, which lists all tables in the workspace.
+
+ Basic Logs tables have a unique icon:
+
+ ![Screenshot of the Basic Logs table icon in the table list.](./media/basic-logs-configure/table-icon.png#lightbox)
+
+ You can also hover over a table name for the table information view. This will specify that the table is configured as Basic Logs:
+
+ ![Screenshot of the Basic Logs table indicator in the table details.](./media/basic-logs-configure/table-info.png#lightbox)
+
+# [API](#tab/api-2)
+
+To check the configuration of a table, call the **Tables - Get** API:
+
+```http
+GET https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}?api-version=2021-12-01-preview
+```
+
+**Response Body**
+
+|Name | Type | Description |
+| | | |
+|properties.plan | string | The table plan. Either "Analytics" or "Basic". |
+|properties.retentionInDays | integer | The table's data retention in days. In _Basic Logs_, the value is 8 days, fixed. In _Analytics Logs_, between 7 and 730.|
+|properties.totalRetentionInDays | integer | The table's total data retention, including the archive period.|
+|properties.archiveRetentionInDays|integer|The table's archive period (read-only, calculated).|
+|properties.lastPlanModifiedDate|string|The last time the plan was set for this table. Null if the plan was never changed from the default settings (read-only).|
+
+**Sample Request**
+
+```http
+GET https://management.azure.com/subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace/tables/ContainerLog?api-version=2021-12-01-preview
+```
++
+**Sample Response**
+
+Status code: 200
+```http
+{
+ "properties": {
+ "retentionInDays": 8,
+ "totalRetentionInDays": 8,
+ "archiveRetentionInDays": 0,
+ "plan": "Basic",
+ "lastPlanModifiedDate": "2022-01-01T14:34:04.37",
+ "schema": {...},
+ "provisioningState": "Succeeded"
+ },
+ "id": "subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace",
+ "name": "ContainerLog"
+}
+```
+++
+## Retention and archiving of Basic Logs
+
+Analytics tables retain data based on a [retention and archive policy](data-retention-archive.md) you set.
+
+Basic Logs tables retain data for eight days. When you change an existing table's plan to Basic Logs, Azure archives data that is more than eight days old but still within the table's original retention period.
+
+## Next steps
+
+- [Learn more about the different log plans.](log-analytics-workspace-overview.md#log-data-plans-preview)
+- [Query data in Basic Logs.](basic-logs-query.md)
azure-monitor Basic Logs Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-query.md
+
+ Title: Query data from Basic Logs in Azure Monitor (Preview)
+description: Create a log query using tables configured for Basic logs in Azure Monitor.
++ Last updated : 01/27/2022+++
+# Query Basic Logs in Azure Monitor (Preview)
+Basic Logs reduce the cost of high-volume verbose logs you don't need for analytics and alerts. Basic Logs have reduced charges for ingestion and limitations on log queries and other Azure Monitor features. This article describes how to query data from tables configured for Basic Logs in the Azure portal and by using the Log Analytics REST API.
+
+> [!NOTE]
+> Other tools that use the Azure API for querying - for example, Grafana and Power BI - cannot access Basic Logs.
+
+## Limits
+Queries with Basic Logs are subject to the following limitations:
+### KQL language limits
+Log queries against Basic Logs are optimized for simple data retrieval using a subset of the KQL language, including the following operators:
+
+- [where](/azure/data-explorer/kusto/query/whereoperator)
+- [extend](/azure/data-explorer/kusto/query/extendoperator)
+- [project](/azure/data-explorer/kusto/query/projectoperator)
+- [project-away](/azure/data-explorer/kusto/query/projectawayoperator)
+- [project-keep](/azure/data-explorer/kusto/query/projectkeepoperator)
+- [project-rename](/azure/data-explorer/kusto/query/projectrenameoperator)
+- [project-reorder](/azure/data-explorer/kusto/query/projectreorderoperator)
+- [parse](/azure/data-explorer/kusto/query/parseoperator)
+- [parse-where](/azure/data-explorer/kusto/query/parsewhereoperator)
+
+You can use all functions and binary operators within these operators.
+
+### Time range
+Specify the time range in the query header in Log Analytics or in the API call. You can't specify the time range in the query body using a **where** statement.
+
+### Query context
+Queries with Basic Logs must use a workspace for the scope. You can't run queries using another resource for the scope. For more details, see [Log query scope and time range in Azure Monitor Log Analytics](scope.md).
+
+### Concurrent queries
+You can run two concurrent queries per user.
+
+### Purge
+You cannot [purge personal data](personal-data-mgmt.md#how-to-export-and-delete-private-data) from Basic Logs tables.
++
+## Run a query from the Azure portal
+Creating a query using Basic Logs is the same as any other query in Log Analytics. See [Get started with Azure Monitor Log Analytics](./log-analytics-tutorial.md) if you aren't familiar with this process.
+
+Open Log Analytics in the Azure portal and open the **Tables** tab. When browsing the list of tables, Basic Logs tables are identified with a unique icon:
+
+![Screenshot of the Basic Logs table icon in the table list.](./media/basic-logs-configure/table-icon.png)
+
+You can also hover over a table name for the table information view. This will specify that the table is configured as Basic Logs:
+
+![Screenshot of the Basic Logs table indicator in the table details.](./media/basic-logs-configure/table-info.png)
++
+When you add a table to the query, Log Analytics will identify a Basic Logs table and align the authoring experience accordingly. The following example shows when you attempt to use an operator that isn't supported by Basic Logs.
+
+![Screenshot of Query on Basic Logs limitations.](./media/basic-logs-query/query-validator.png)
+
+## Run a query from REST API
+Use **/search** from the [Log Analytics API](api/overview.md) to run a query with Basic Logs using a REST API. This is similar to the [/query](api/request-format.md) API with the following differences:
+
+- The query is subject to the language limitations described above.
+- The time span must be specified in the header of the request and not in the query statement.
+
+### Sample Request
+```http
+https://api.loganalytics.io/v1/workspaces/testWS/search?timespan=P1D
+```
+
+**Request body**
+
+```json
+{
+    "query": "ContainerLog | where LogEntry has \"some value\"\n"
+}
+```
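+
+As a further illustration, here's a hedged sketch of a request body that combines several of the supported operators. The `ContainerLog` column names (`LogEntry`, `Computer`, `TimeGenerated`) follow that table's standard schema, and the time range is still supplied through the `timespan` parameter rather than inside the query.
+
+```http
+{
+    "query": "ContainerLog | where LogEntry has \"error\" | extend EntryLength = strlen(LogEntry) | project TimeGenerated, Computer, EntryLength"
+}
+```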
+
+## Costs
+The charge for a query on Basic Logs is based on the amount of data the query scans, not the amount of data the query returns. For example, a query that scans three days of data in a table that ingests 100 GB each day would be charged for 300 GB. The calculation is based on chunks of up to one day of data.
+
+For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
+
+> [!NOTE]
+> During the preview period, there is no cost for log queries on Basic Logs.
+
+## Next steps
+
+- [Learn more about Basic Logs and the different log plans.](log-analytics-workspace-overview.md#log-data-plans-preview)
+- [Configure a table for Basic Logs.](basic-logs-configure.md)
+- [Use a search job to retrieve data from Basic Logs into Analytics Logs where it can be queried multiple times.](search-jobs.md)
azure-monitor Custom Logs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/custom-logs-overview.md
+
+ Title: Send custom logs to Azure Monitor Logs with REST API
+description: Sending log data to Azure Monitor using custom logs API.
+ Last updated : 01/06/2022+++
+# Custom logs API in Azure Monitor Logs (Preview)
+With the DCR-based custom logs API in Azure Monitor, you can send data to a Log Analytics workspace from any REST API client. This lets you send data from virtually any source to [supported built-in tables](tables-feature-support.md) or to custom tables that you create. You can even extend the schema of built-in tables with custom columns.
++
+> [!NOTE]
+> The custom logs API should not be confused with the [custom logs](../agents/data-sources-custom-logs.md) data source collected by the legacy Log Analytics agent.
+## Basic operation
+Your application sends data to a [data collection endpoint](../essentials/data-collection-endpoint-overview.md) which is a unique connection point for your subscription. The payload of your API call includes the source data formatted in JSON. The call specifies a [data collection rule](../essentials/data-collection-rule-overview.md) that understands the format of the source data, potentially filters and transforms it for the target table, and then directs it to a specific table in a specific workspace. You can modify the target table and workspace by modifying the data collection rule without any change to the REST API call or source data.
++
+## Authentication
+Authentication for the custom logs API is performed at the data collection endpoint, which uses standard Azure Resource Manager authentication. A common strategy is to use an Application ID and Application Key as described in [Tutorial: Add ingestion-time transformation to Azure Monitor Logs (preview)](tutorial-custom-logs.md).
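+
+A minimal sketch of that token request is shown below. The scope value is an assumption based on the Azure Monitor data collection endpoint audience; confirm it against the linked tutorial before you rely on it.
+
+```http
+POST https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token
+Content-Type: application/x-www-form-urlencoded
+
+client_id={application-id}&scope=https%3A%2F%2Fmonitor.azure.com%2F%2F.default&client_secret={application-key}&grant_type=client_credentials
+```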
+
+## Tables
+Custom logs can send data to any custom table that you create and to certain built-in tables in your Log Analytics workspace. The target table must exist before you can send data to it. The following built-in tables are currently supported:
+
+- [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecurityevent)
+- [SecurityEvents](/azure/azure-monitor/reference/tables/securityevents)
+- [Syslog](/azure/azure-monitor/reference/tables/syslog)
+- [WindowsEvents](/azure/azure-monitor/reference/tables/windowsevent)
+
+## Source data
+The source data sent by your application is formatted in JSON and must match the structure expected by the data collection rule. It doesn't necessarily need to match the structure of the target table since the DCR can include a transformation to convert the data to match the table's structure.
+
+## Data collection rule
+[Data collection rules](../essentials/data-collection-rule-overview.md) define data collected by Azure Monitor and specify how and where that data should be sent or stored. The REST API call must specify a DCR to use. A single data collection endpoint (DCE) can support multiple DCRs, so you can specify a different DCR for different sources and target tables.
+
+The DCR must understand the structure of the input data and the structure of the target table. If the two don't match, it can use a [transformation](../essentials/data-collection-rule-transformations.md) to convert the source data to match the target table. You may also use the transform to filter source data and perform any other calculations or conversions.
+
+## Sending data
+Ingestion is a POST call to the data collection endpoint over HTTP. Details of the call are as follows:
+
+### Endpoint URI
+The endpoint URI uses the following format, where the `Data Collection Endpoint` and `DCR Immutable ID` identify the DCE and DCR. `Stream Name` refers to the [stream](../essentials/data-collection-rule-structure.md#custom-logs) in the DCR that should handle the custom data.
+
+```
+{Data Collection Endpoint URI}/dataCollectionRules/{DCR Immutable ID}/streams/{Stream Name}?api-version=2021-11-01-preview
+```
+
+### Headers
+The call can use the following headers:
+
+| Header | Required? | Value | Description |
+|:|:|:|:|
+| Authorization | Yes | Bearer {Bearer token obtained through the Client Credentials Flow} | |
+| Content-Type | Yes | `application/json` | |
+| Content-Encoding | No | `gzip` | Use the GZip compression scheme for performance optimization. |
+| x-ms-client-request-id | No | String-formatted GUID | Request ID that can be used by Microsoft for any troubleshooting purposes. |
+
+### Body
+The body of the call includes the custom data to be sent to Azure Monitor. The shape of the data must be a JSON object or array with a structure that matches the format expected by the stream in the DCR.
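+
+Putting these pieces together, here's a hedged sketch of a complete call. The endpoint URI, DCR immutable ID, and bearer token are placeholders, and the stream name and columns are taken from the sample DCR in [Sample data collection rule - custom logs](data-collection-rule-sample-custom-logs.md); your own DCR defines the stream and fields it expects.
+
+```http
+POST {Data Collection Endpoint URI}/dataCollectionRules/{DCR Immutable ID}/streams/Custom-MyTableRawData?api-version=2021-11-01-preview
+Authorization: Bearer {bearer-token}
+Content-Type: application/json
+
+[
+  {
+    "Time": "2022-02-20T10:00:00.000Z",
+    "Computer": "Computer1",
+    "AdditionalContext": "{\"CounterName\":\"AppRequests\",\"CounterValue\":102}"
+  }
+]
+```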
+
+## Limits and restrictions
+For limits related to custom logs, see [Azure Monitor service limits](../service-limits.md#custom-logs).
+
+### Table limits
+
+* Custom tables must have the `_CL` suffix.
+* Column names can consist of alphanumeric characters as well as the characters `_` and `-`. They must start with a letter.
+* Columns extended on top of built-in tables must have the suffix `_CF`. Columns in a custom table do not need this suffix.
+
+## Next steps
+
+- [Walk through a tutorial sending custom logs using the Azure portal.](tutorial-custom-logs.md)
+- [Walk through a tutorial sending custom logs using Resource Manager templates and REST API.](tutorial-custom-logs-api.md)
azure-monitor Data Collection Rule Sample Custom Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-collection-rule-sample-custom-logs.md
+
+ Title: Sample data collection rule - custom logs
+description: Sample data collection rule for custom logs.
+ Last updated : 02/15/2022+++
+# Sample data collection rule - custom logs
+The sample [data collection rule](../essentials/data-collection-rule-overview.md) below is for use with [custom logs](../logs/custom-logs-overview.md). It has the following details:
+
+- Sends data to a table called MyTable_CL in a workspace called my-workspace.
+- Applies a [transformation](../essentials/data-collection-rule-transformations.md) to the incoming data.
+
+## Sample DCR
+
+```json
+{
+ "properties": {
+ "dataCollectionEndpointId": "https://my-dcr.westus2-1.ingest.monitor.azure.com",
+ "streamDeclarations": {
+ "Custom-MyTableRawData": {
+ "columns": [
+ {
+ "name": "Time",
+ "type": "datetime"
+ },
+ {
+ "name": "Computer",
+ "type": "string"
+ },
+ {
+ "name": "AdditionalContext",
+ "type": "string"
+ }
+ ]
+ }
+ },
+ "destinations": {
+ "logAnalytics": [
+ {
+ "workspaceResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/cefingestion/providers/microsoft.operationalinsights/workspaces/my-workspace",
+ "name": "LogAnalyticsDest"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Custom-MyTableRawData"
+ ],
+ "destinations": [
+ "LogAnalyticsDest"
+ ],
+ "transformKql": "source | extend jsonContext = parse_json(AdditionalContext) | project TimeGenerated = Time, Computer, AdditionalContext = jsonContext, ExtendedColumn=tostring(jsonContext.CounterName)",
+ "outputStream": "Custom-MyTable_CL"
+ }
+ ]
+ }
+}
+```
++
+## Next steps
+
+- [Walk through a tutorial on configuring custom logs using resource manager templates.](tutorial-custom-logs-api.md)
+- [Get details on the structure of data collection rules.](../essentials/data-collection-rule-structure.md)
+- [Get an overview on custom logs](custom-logs-overview.md).
azure-monitor Data Platform Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-platform-logs.md
description: Learn the basics of Azure Monitor Logs, which is used for advanced
documentationcenter: '' na Previously updated : 10/22/2020 Last updated : 01/27/2022 # Azure Monitor Logs overview
-Azure Monitor Logs is a feature of Azure Monitor that collects and organizes log and performance data from [monitored resources](../monitor-reference.md). Data from multiple sources can be consolidated into a single workspace. These sources include:
+Azure Monitor Logs is a feature of Azure Monitor that collects and organizes log and performance data from [monitored resources](../monitor-reference.md). Several features of Azure Monitor store their data in Logs and present this data in a variety of ways to assist you in monitoring the performance and availability of your cloud and hybrid applications and their supporting components.
-- [Platform logs](../essentials/platform-logs-overview.md) from Azure services.-- Log and performance data from [virtual machine agents](../agents/agents-overview.md).-- Usage and performance data from [applications](../app/app-insights-overview.md). -
-You can then analyze the data by using a sophisticated query language that's capable of quickly analyzing millions of records.
-
-You might perform a simple query that retrieves a specific set of records or perform sophisticated data analysis to identify critical patterns in your monitoring data. Work with log queries and their results interactively by using Log Analytics, use them in alert rules to be proactively notified of issues, or visualize their results in a workbook or dashboard.
+In addition to leveraging existing Azure Monitor features, you can analyze Logs data by using a sophisticated query language that's capable of quickly analyzing millions of records. You might perform a simple query that retrieves a specific set of records or perform sophisticated data analysis to identify critical patterns in your monitoring data. Work with log queries and their results interactively by using Log Analytics, use them in alert rules to be proactively notified of issues, or visualize their results in a workbook or dashboard.
> [!NOTE] > Azure Monitor Logs is one half of the data platform that supports Azure Monitor. The other is [Azure Monitor Metrics](../essentials/data-platform-metrics.md), which stores numeric data in a time-series database. Numeric data is more lightweight than data in Azure Monitor Logs. Azure Monitor Metrics can support near real-time scenarios, so it's useful for alerting and fast detection of issues.
The following table describes some of the ways that you can use Azure Monitor Lo
| **Analyze** | Use [Log Analytics](./log-analytics-tutorial.md) in the Azure portal to write [log queries](./log-query-overview.md) and interactively analyze log data by using a powerful analysis engine. | | **Alert** | Configure a [log alert rule](../alerts/alerts-log.md) that sends a notification or takes [automated action](../alerts/action-groups.md) when the results of the query match a particular result. | | **Visualize** | Pin query results rendered as tables or charts to an [Azure dashboard](../../azure-portal/azure-portal-dashboards.md).<br>Create a [workbook](../visualize/workbooks-overview.md) to combine with multiple sets of data in an interactive report. <br>Export the results of a query to [Power BI](./log-powerbi.md) to use different visualizations and share with users outside Azure.<br>Export the results of a query to [Grafana](../visualize/grafana-plugin.md) to use its dashboarding and combine with other data sources.|
-| **Get insights** | Support [insights](../monitor-reference.md#insights-and-curated-visualizations) that provide a customized monitoring experience for particular applications and services. |
+| **Get insights** | Logs support [insights](../monitor-reference.md#insights-and-curated-visualizations) that provide a customized monitoring experience for particular applications and services. |
| **Retrieve** | Access log query results from a:<ul><li>Command line via the [Azure CLI](/cli/azure/monitor/log-analytics) or [Azure PowerShell cmdlets](/powershell/module/az.operationalinsights).</li><li>Custom app via the [REST API](https://dev.loganalytics.io/) or client library for [.NET](/dotnet/api/overview/azure/Monitor.Query-readme), [Java](/java/api/overview/azure/monitor-query-readme), [JavaScript](/javascript/api/overview/azure/monitor-query-readme), or [Python](/python/api/overview/azure/monitor-query-readme).</li></ul> | | **Export** | Configure [automated export of log data](./logs-data-export.md) to an Azure storage account or Azure Event Hubs.<br>Build a workflow to retrieve log data and copy it to an external location by using [Azure Logic Apps](./logicapp-flow-connector.md). | ![Diagram that shows an overview of Azure Monitor Logs.](media/data-platform-logs/logs-overview.png) ## Data collection
-After you create a Log Analytics workspace, you must configure sources to send their data. No data is collected automatically.
+After you create a [Log Analytics workspace](#log-analytics-workspaces), you must configure sources to send their data. No data is collected automatically.
This configuration will be different depending on the data source. For example:
This configuration will be different depending on the data source. For example:
- [Enable VM insights](../vm/vminsights-enable-overview.md) to collect data from virtual machines. - [Configure data sources on the workspace](../agents/data-sources.md) to collect more events and performance data.
-For a complete list of data sources that you can configure to send data to Azure Monitor Logs, see [What is monitored by Azure Monitor?](../monitor-reference.md).
+> [!IMPORTANT]
+> Most data collection in Logs will incur ingestion and retention costs, so refer to [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) before enabling any data collection.
-## Log Analytics and workspaces
-Log Analytics is a tool in the Azure portal. Use it to edit and run log queries and interactively analyze their results. You can then use those queries to support other features in Azure Monitor, such as log query alerts and workbooks. Access Log Analytics from the **Logs** option on the Azure Monitor menu or from most other services in the Azure portal.
-For a description of Log Analytics, see [Overview of Log Analytics in Azure Monitor](./log-analytics-overview.md). To walk through using Log Analytics features to create a simple log query and analyze its results, see [Log Analytics tutorial](./log-analytics-tutorial.md).
+For a complete list of data sources that you can configure to send data to Azure Monitor Logs, see [What is monitored by Azure Monitor?](../monitor-reference.md).
-Azure Monitor Logs stores the data that it collects in one or more [Log Analytics workspaces](./design-logs-deployment.md). A workspace defines:
+## Log Analytics workspaces
+Azure Monitor Logs stores the data that it collects in one or more [Log Analytics workspaces](./design-logs-deployment.md). You must create at least one workspace to use Azure Monitor Logs. See [Log Analytics workspace overview](log-analytics-workspace-overview.md) for a description of Log Analytics workspaces.
-- The geographic location of the data.-- Access rights that define which users can access data.-- Configuration settings such as the pricing tier and data retention.
+## Log Analytics
+Log Analytics is a tool in the Azure portal. Use it to edit and run log queries and interactively analyze their results. You can then use those queries to support other features in Azure Monitor, such as log query alerts and workbooks. Access Log Analytics from the **Logs** option on the Azure Monitor menu or from most other services in the Azure portal.
-You must create at least one workspace to use Azure Monitor Logs. A single workspace might be sufficient for all of your monitoring data, or you might choose to create multiple workspaces depending on your requirements. For example, you might have one workspace for your production data and another for testing.
+For a description of Log Analytics, see [Overview of Log Analytics in Azure Monitor](./log-analytics-overview.md). To walk through using Log Analytics features to create a simple log query and analyze its results, see [Log Analytics tutorial](./log-analytics-tutorial.md).
-To create a new workspace, see [Create a Log Analytics workspace in the Azure portal](./quick-create-workspace.md). For considerations on creating multiple workspaces, see [Designing your Azure Monitor Logs deployment](design-logs-deployment.md).
## Log queries Data is retrieved from a Log Analytics workspace through a log query, which is a read-only request to process data and return results. Log queries are written in [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/). KQL is the same query language that Azure Data Explorer uses.
For a list of where log queries are used and references to tutorials and other d
![Screenshot that shows queries in Log Analytics.](media/data-platform-logs/log-analytics.png)
-## Data structure
-Log queries retrieve their data from a Log Analytics workspace. Each workspace contains multiple tables that are organized into separate columns with multiple rows of data. Each table is defined by a unique set of columns. Rows of data provided by the data source share those columns.
-
-[![Diagram that shows the Azure Monitor Logs structure.](media/data-platform-logs/logs-structure.png)](media/data-platform-logs/logs-structure.png#lightbox)
-
-Log data from Application Insights is also stored in Azure Monitor Logs, but it's stored differently depending on how your application is configured:
--- For a workspace-based application, data is stored in a Log Analytics workspace in a standard set of tables. The types of data include application requests, exceptions, and page views. Multiple applications can use the same workspace. --- For a classic application, the data is not stored in a Log Analytics workspace. It uses the same query language, and you create and run queries by using the same Log Analytics tool in the Azure portal. Data items for classic applications are stored separately from each other. The general structure is the same as for workspace-based applications, although the table and column names are different. -
-For a detailed comparison of the schema for workspace-based and classic applications, see [Workspace-based resource changes](../app/apm-tables.md).
-
-> [!NOTE]
-> The classic Application Insights experience includes backward compatibility for your resource queries, workbooks, and log-based alerts. To query or view against the [new workspace-based table structure or schema](../app/apm-tables.md), you must first go to your Log Analytics workspace. During the preview, selecting **Logs** from within the Application Insights panes will give you access to the classic Application Insights query experience. For more information, see [Query scope](./scope.md).
-
-[![Diagram that shows the Azure Monitor Logs structure for Application Insights.](media/data-platform-logs/logs-structure-ai.png)](media/data-platform-logs/logs-structure-ai.png#lightbox)
## Relationship to Azure Data Explorer Azure Monitor Logs is based on Azure Data Explorer. A Log Analytics workspace is roughly the equivalent of a database in Azure Data Explorer. Tables are structured the same, and both use KQL.
azure-monitor Data Retention Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-retention-archive.md
+
+ Title: Configure data retention and archive in Azure Monitor Logs (Preview)
+description: Configure archive settings for a table in a Log Analytics workspace in Azure Monitor.
+++++ Last updated : 01/27/2022
+# Customer intent: As an Azure account administrator, I want to set data retention and archive policies to save retention costs.
++
+# Configure data retention and archive policies in Azure Monitor Logs (Preview)
+Retention policies define when to remove or archive data in a [Log Analytics workspace](log-analytics-workspace-overview.md). Archiving lets you keep older, less used data in your workspace at a reduced cost.
+
+This article describes how to configure data retention and archiving.
+
+## How retention and archiving work
+Each workspace has a default retention policy that's applied to all tables. You can set a different retention policy on individual tables.
++
+During the interactive retention period, data is available for monitoring, troubleshooting and analytics. When you no longer use the logs, but still need to keep the data for compliance or occasional investigation, archive the logs to save costs. You can access archived data by [running a search job](search-jobs.md) or [restoring archived logs](restore.md).
+
+> [!NOTE]
+> The archive feature is currently in public preview and can only be set at the table level, not at the workspace level.
+
+## Configure the default workspace retention policy
+You can set the workspace default retention policy in the Azure portal to 30, 31, 60, 90, 120, 180, 270, 365, 550, and 730 days. To set a different policy, use the Resource Manager configuration method described below. If you're on the *free* tier, you need to upgrade to the paid tier to change the data retention period.
+
+To set the default workspace retention policy:
+
+1. From the **Log Analytics workspaces** menu in the Azure portal, select your workspace.
+1. Select **Usage and estimated costs** in the left pane.
+1. Select **Data Retention** at the top of the page.
+
+ :::image type="content" source="media/manage-cost-storage/manage-cost-change-retention-01.png" alt-text="Change workspace data retention setting":::
+
+1. Move the slider to increase or decrease the number of days, and then select **OK**.
+
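+If you need a retention value that isn't available in the portal slider, you can set the workspace default through Azure Resource Manager by updating the workspace resource itself. The sketch below is an assumption based on the **Workspaces - Update** operation and the preview api-version used for tables elsewhere in this article; verify the supported values and api-version against the Workspaces REST reference before use.
+
+```http
+PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}?api-version=2021-12-01-preview
+
+{
+  "properties": {
+    "retentionInDays": 45
+  }
+}
+```
+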
+## Set retention and archive policy by table
+
+You can set retention policies for individual tables, except for workspaces in the legacy Free Trial pricing tier, using Azure Resource Manager APIs. You cannot currently configure data retention for individual tables in the Azure portal.
+
+You can keep data in interactive retention between 4 and 730 days. You can set the archive period for a total retention time of up to 2,555 days (seven years).
+
+Each table is a sub-resource of the workspace it's in. For example, you can address the `SecurityEvent` table in [Azure Resource Manager](../../azure-resource-manager/management/overview.md) as:
+
+```
+/subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent
+```
+
+Note that the table name is case-sensitive.
+
+### Get retention and archive policy by table
+
+To get the retention policy of a particular table (in this example, `SecurityEvent`), call the **Tables - Get** API:
+
+```http
+GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent?api-version=2021-12-01-preview
+```
+
+To get all table-level retention policies in your workspace, don't set a table name; for example:
+
+```http
+GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables?api-version=2021-12-01-preview
+```
+### Set the retention and archive policy for a table
+
+To set the retention and archive duration for a table, call the **Tables - Update** API:
+
+```http
+PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}?api-version=2021-12-01-preview
+```
+
+> [!NOTE]
+> You don't explicitly specify the archive duration in the API call. Instead, you set the total retention, which specifies the retention plus the archive duration.
++
+You can use either PUT or PATCH, with the following difference:
+
+- The **PUT** API sets *retentionInDays* and *totalRetentionInDays* to the default value if you don't set non-null values.
+- The **PATCH** API doesn't change the *retentionInDays* or *totalRetentionInDays* values if you don't specify values.
++
+#### Request body
+The request body includes the values in the following table.
+
+|Name | Type | Description |
+| | | |
+|properties.retentionInDays | integer | The table's data retention in days. This value can be between 4 and 730; or 1095, 1460, 1826, 2191, or 2556. <br/>Setting this property to null will default to the workspace retention. For a Basic Logs table, the value is always 8. |
+|properties.totalRetentionInDays | integer | The table's total data retention including archive period. Set this property to null if you don't want to archive data. |
+
+#### Example
+The following request sets the table's retention to the workspace default of 30 days and the total retention to two years. This means that the archive duration is 23 months.
+###### Request
+
+```http
+PATCH https://management.azure.com/subscriptions/00000000-0000-0000-0000-00000000000/resourcegroups/testRG/providers/Microsoft.OperationalInsights/workspaces/testWS/tables/CustomLog_CL?api-version=2021-12-01-preview
+```
+
+#### Request body
+```http
+{
+ "properties": {
+ "retentionInDays": null,
+ "totalRetentionInDays": 730
+ }
+}
+```
+
+###### Response
+
+Status code: 200
+
+```http
+{
+ "properties": {
+ "retentionInDays": 30,
+ "totalRetentionInDays": 730,
+ "archiveRetentionInDays": 700,
+ ...
+ },
+ ...
+}
+```
+
+## Purge retained data
+When you shorten an existing retention policy, it takes several days for Azure Monitor to remove data that you no longer want to keep.
+
+If you set the data retention policy to 30 days, you can purge older data immediately using the `immediatePurgeDataOn30Days` parameter in Azure Resource Manager. This can be useful when you need to remove personal data immediately. The immediate purge functionality is not available through the Azure portal.
+
+Note that workspaces with a 30-day retention policy might actually keep data for 31 days if you don't set the `immediatePurgeDataOn30Days` parameter.
+
+You can also purge data from a workspace using the [purge feature](personal-data-mgmt.md#how-to-export-and-delete-private-data), which removes personal data. You cannot purge data from archived logs.
+
+The Log Analytics [Purge API](/rest/api/loganalytics/workspacepurge/purge) doesn't affect retention billing. **To lower retention costs, decrease the retention period for the workspace or for specific tables.**
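+
+For reference, a purge call is a POST against the workspace with a table name and a set of filters. This is a sketch only: the api-version and filter shape are assumptions based on the linked Purge API reference, and `Heartbeat` is just an example table.
+
+```http
+POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/purge?api-version=2020-08-01
+
+{
+  "table": "Heartbeat",
+  "filters": [
+    {
+      "column": "TimeGenerated",
+      "operator": ">",
+      "value": "2022-01-01T00:00:00"
+    }
+  ]
+}
+```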
+
+## Tables with unique retention policies
+By default, the tables of two data types - `Usage` and `AzureActivity` - keep data for at least 90 days at no charge. Increasing the workspace retention policy to more than 90 days also increases the retention policy of these tables. These tables are also free from data ingestion charges.
+
+Tables related to Application Insights resources also keep data for 90 days at no charge. You can adjust the retention policy of each of these tables individually.
+
+- `AppAvailabilityResults`
+- `AppBrowserTimings`
+- `AppDependencies`
+- `AppExceptions`
+- `AppEvents`
+- `AppMetrics`
+- `AppPageViews`
+- `AppPerformanceCounters`
+- `AppRequests`
+- `AppSystemEvents`
+- `AppTraces`
+
+## Pricing model
+
+You'll be charged for each day you retain data. The cost of retaining data for part of a day is the same as for a full day.
+
+For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
+
+## Next steps
+- [Learn more about Log Analytics workspaces and data retention and archive.](log-analytics-workspace-overview.md)
+- [Create a search job to retrieve archive data matching particular criteria.](search-jobs.md)
+- [Restore archive data within a particular time range.](restore.md)
azure-monitor Ingestion Time Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/ingestion-time-transformations.md
+
+ Title: Overview of ingestion-time transformations in Azure Monitor Logs
+description: This article describes ingestion-time transformations which allow you to filter and transform data before it's stored in a Log Analytics workspace in Azure Monitor.
++ Last updated : 01/19/2022++
+# Ingestion-time transformations in Azure Monitor Logs (preview)
+[Ingestion-time transformations](ingestion-time-transformations.md) allow you to manipulate incoming data before it's stored in a Log Analytics workspace. You can add data filtering, parsing and extraction, and control the structure of the data that gets ingested.
++
+## Basic operation
+The transformation is a [KQL query](../essentials/data-collection-rule-transformations.md) that runs against the incoming data and modifies it before it's stored in the workspace. Transformations are defined separately for each table in the workspace. This article provides an overview of the feature and links to further details and samples. Configuration for ingestion-time transformation is stored in a workspace transformation DCR. You can either [create this DCR directly](tutorial-ingestion-time-transformations-api.md) or configure the transformation [through the Azure portal](tutorial-ingestion-time-transformations.md).
+
+## When to use ingestion-time transformations
+Use ingestion-time transformation for the following scenarios:
+
+**Reduce data ingestion cost.** You can create a transformation to filter data that you don't require from a particular workflow. You may also remove data that you don't require from specific columns, resulting in a lower amount of data that you need to ingest and store. For example, you might have a diagnostic setting to collect resource logs from a particular resource but not require all of the log entries that it generates. Create a transformation that filters out records that match certain criteria.
+
+**Simplify query requirements.** You may have a table with valuable data buried in a particular column or data that needs some type of conversion each time it's queried. Create a transformation that parses this data into a custom column so that queries don't need to parse it. Remove extra data from the column that isn't required to decrease ingestion and retention costs.
+
+## Supported workflows
+Ingestion-time transformation is applied to any workflow that doesn't currently use a [data collection rule](../essentials/data-collection-rule-overview.md) to send data to a [supported table](tables-feature-support.md). The workflows that currently use data collection rules are listed below. Any transformation on a workspace will be ignored for these workflows.
+
+- [Azure Monitor agent](../agents/data-collection-rule-azure-monitor-agent.md)
+- [Custom logs](../logs/custom-logs-overview.md)
+
+## Supported tables
+See [Supported tables for ingestion-time transformations](tables-feature-support.md) for a complete list of tables that support ingestion-time transformations.
+
+## Configure ingestion-time transformation
+See the following tutorials for a complete walkthrough of configuring ingestion-time transformation.
+
+- [Azure portal](../logs/tutorial-ingestion-time-transformations.md)
+- [Resource Manager templates and REST API](../logs/tutorial-ingestion-time-transformations-api.md)
++
+## Limits
+
+- Transformation queries use a subset of KQL. See [Supported KQL features](../essentials/data-collection-rule-transformations.md#supported-kql-features) for details.
+
+## Next steps
+
+- [Get details on transformation queries](../essentials/data-collection-rule-transformations.md)
+- [Walk through configuration of ingestion-time transformation using the Azure portal](tutorial-ingestion-time-transformations.md)
+- [Walk through configuration of ingestion-time transformation using Resource Manager templates and REST API](tutorial-ingestion-time-transformations-api.md)
azure-monitor Log Analytics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-overview.md
Controls for working with the query in the query window.
Copy button | Copy a link to the query, the query text, or the query results to the clipboard. | | New alert rule button | Create a new tab with an empty query. | | Export button | Export the results of the query to a CSV file or the query to Power Query Formula Language format for use with Power BI. |
-| Pin to dashboard button | Add the results of the query to an Azure dashboard. |
+| Pin to button | Pin the results of the query to an Azure dashboard or add them to an Azure workbook. |
| Format query button | Arrange the selected text for readability. | | Example queries button | Open the example queries dialog box that is displayed when you first open Log Analytics. | | Query Explorer button | Open **Query Explorer** which provides access to saved queries in the workspace. |
azure-monitor Log Analytics Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-overview.md
+
+ Title: Log Analytics workspace overview
+description: Overview of Log Analytics workspace which store data for Azure Monitor Logs.
+
+ na
Last updated : 02/18/2022++
+# Log Analytics workspace overview
+A Log Analytics workspace is a unique environment for log data from Azure Monitor and other Azure services such as Microsoft Sentinel and Microsoft Defender for Cloud. Each workspace has its own data repository and configuration but may combine data from multiple services. This article provides an overview of concepts related to Log Analytics workspaces and provides links to other documentation for more details on each.
+
+> [!IMPORTANT]
+> You may see the term *Microsoft Sentinel workspace* used in [Microsoft Sentinel](../../sentinel/overview.md) documentation. This is the same Log Analytics workspace described in this article but enabled for Microsoft Sentinel. This subjects all data in the workspace to Sentinel pricing as described in [Cost](#cost) below.
+
+You can use a single workspace for all your data collection, or you may create multiple workspaces based on a variety of requirements such as the geographic location of the data, access rights that define which users can access data, and configuration settings such as the pricing tier and data retention.
+
+To create a new workspace, see [Create a Log Analytics workspace in the Azure portal](./quick-create-workspace.md). For considerations on creating multiple workspaces, see [Designing your Azure Monitor Logs deployment](design-logs-deployment.md).
++
+## Data structure
+Each workspace contains multiple tables that are organized into separate columns with multiple rows of data. Each table is defined by a unique set of columns. Rows of data provided by the data source share those columns. Log queries define columns of data to retrieve and provide output to different features of Azure Monitor and other services that use workspaces.
+
+[![Diagram that shows the Azure Monitor Logs structure.](media/data-platform-logs/logs-structure.png)](media/data-platform-logs/logs-structure.png#lightbox)
++
+## Cost
+There is no direct cost for creating or maintaining a workspace. You're charged for the data sent to it (data ingestion) and how long that data is stored (data retention). These costs may vary based on the data plan of each table as described in [Log data plans (preview)](#log-data-plans-preview).
+
+See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for detailed pricing and [Manage usage and costs with Azure Monitor Logs](manage-cost-storage.md) for guidance on reducing your costs. If you are using your Log Analytics workspace with services other than Azure Monitor, then see the documentation for those services for pricing information.
+
+## Log data plans (preview)
+By default, all tables in a workspace are **Analytics** tables, which are available to all features of Azure Monitor and any other services that use the workspace. You can configure certain tables as **Basic Logs (preview)** to reduce the cost of storing high-volume verbose logs you use for debugging, troubleshooting and auditing, but not for analytics and alerts. Tables configured for Basic Logs have a lower ingestion cost in exchange for reduced features.
+
+See [Configure Basic Logs in Azure Monitor (Preview)](basic-logs-configure.md) for more details on Basic Logs and how to configure them.
+
+> [!NOTE]
+> Basic Logs are currently in public preview. You can currently work with Basic Logs tables in the Azure portal and with a limited number of other components.
+
+The following table summarizes the differences between the plans.
+
+| Category | Analytics Logs | Basic Logs |
+|:|:|:|
+| Ingestion | Cost for ingestion. | Reduced cost for ingestion. |
+| Log queries | No additional cost. Full query language. | Additional cost. Subset of query language. |
+| Retention | Configure retention from 30 days to 730 days. | Retention fixed at 8 days. |
+| Alerts | Supported. | Not supported. |
+
+## Ingestion-time transformations
+[Data collection rules (DCRs)](../essentials/data-collection-rule-overview.md) that define data coming into Azure Monitor can include transformations that allow you to filter and transform data before it's ingested into the workspace. Because not all workflows support DCRs yet, each workspace can also define ingestion-time transformations. This lets you filter or transform data before it's stored.
+
+[Ingestion-time transformations](ingestion-time-transformations.md) are defined for each table in a workspace and apply to all data sent to that table, even if it's sent from multiple sources. Ingestion-time transformations, however, only apply to workflows that don't already use a data collection rule. For example, the [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) uses a data collection rule to define the data it collects from virtual machines, so this data isn't subject to any ingestion-time transformations defined in the workspace.
+
+For example, you might have [diagnostic settings](../essentials/diagnostic-settings.md) that send [resource logs](../essentials/resource-logs.md) for different Azure resources to your workspace. You can create a transformation for the table that collects the resource logs that filters this data for only records that you want, saving you the ingestion cost for records you don't need. You may also want to extract important data from certain columns and store it in additional columns in the workspace to support simpler queries.
++
+## Data retention and archive
+Data in each table in a [Log Analytics workspace](log-analytics-workspace-overview.md) is retained for a specified period of time after which it's either removed or archived with a reduced retention fee. Set the retention time to balance your requirement for having data available with reducing your cost for data retention.
+
+> [!NOTE]
+> Archive is currently in public preview.
+
+To access archived data, you must first retrieve the data into an Analytics Logs table by using one of the following methods:
+
+| Method | Description |
+|:|:|
+| [Search Jobs](search-jobs.md) | Retrieve data matching particular criteria. |
+| [Restore](restore.md) | Retrieve data from a particular time range. |
+++
+## Permissions
+Access to data in a Log Analytics workspace is defined by the [access control mode](design-logs-deployment.md#access-control-mode), which is a setting on each workspace. You can either give users explicit access to the workspace by using a [built-in or custom role](../roles-permissions-security.md), or allow access to data collected for Azure resources to users with access to those resources.
+
+See [Manage access to log data and workspaces in Azure Monitor](manage-access.md) for details on the different permission options and on configuring permissions.
+
+## Next steps
+
+- [Create a new Log Analytics workspace](quick-create-workspace.md)
+- See [Designing your Azure Monitor Logs deployment](design-logs-deployment.md) for considerations on creating multiple workspaces.
+- [Learn about log queries to retrieve and analyze data from a Log Analytics workspace.](./log-query-overview.md)
azure-monitor Log Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-powerbi.md
This article focuses on ways to feed data from Log Analytics into Microsoft Powe
## Background
-Azure Monitor Logs is a platform that provides an end-to-end solution for ingesting logs. [Azure Monitor Log Analytics](../platform/data-platform.md#) is the interface to query these logs. For more information on the entire Azure Monitor data platform including Log Analytics, see [Azure Monitor data platform](../data-platform.md).
+Azure Monitor Logs is a platform that provides an end-to-end solution for ingesting logs. [Azure Monitor Log Analytics](../data-platform.md) is the interface to query these logs. For more information on the entire Azure Monitor data platform including Log Analytics, see [Azure Monitor data platform](../data-platform.md).
Microsoft Power BI is Microsoft's data visualization platform. For more information on how to get started, see [Power BI's homepage](https://powerbi.microsoft.com/).
azure-monitor Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-configure.md
In this section, we review the process of setting up a Private Link through the
### Connect Azure Monitor resources
-Connect Azure Monitor resources (Log Analytics workspaces, Application Insights components and [Data Collection endpoints](../agents/data-collection-endpoint-overview.md)) to your AMPLS.
+Connect Azure Monitor resources (Log Analytics workspaces, Application Insights components and [Data Collection endpoints](../essentials/data-collection-endpoint-overview.md)) to your AMPLS.
1. In your Azure Monitor Private Link scope, select **Azure Monitor Resources** in the left-hand menu. Select the **Add** button.
2. Add the workspace or component. Selecting the **Add** button brings up a dialog where you can select Azure Monitor resources. You can browse through your subscriptions and resource groups, or you can type in their name to filter down to them. Select the workspace or component and select **Apply** to add them to your scope.
You've now created a new private endpoint that is connected to this AMPLS.
## Configure access to your resources
-So far we covered the configuration of your network, but you should also consider how you want to configure network access to your monitored resources - Log Analytics workspaces, Application Insights components and [Data Collection endpoints](../agents/data-collection-endpoint-overview.md).
+So far we covered the configuration of your network, but you should also consider how you want to configure network access to your monitored resources - Log Analytics workspaces, Application Insights components and [Data Collection endpoints](../essentials/data-collection-endpoint-overview.md).
Go to the Azure portal. In your resource's menu, there's a menu item called **Network Isolation** on the left-hand side. This page controls both which networks can reach the resource through a Private Link, and whether other networks can reach it or not.
This zone covers the global endpoints used by Azure Monitor, meaning endpoints t
* **profiler** - Application Insights profiler endpoint
* **snapshot** - Application Insights snapshots endpoint
-This zone also covers the resource specific endpoints for [Data Collection Endpoints](../agents/data-collection-endpoint-overview.md):
+This zone also covers the resource specific endpoints for [Data Collection Endpoints](../essentials/data-collection-endpoint-overview.md):
* `<unique-dce-identifier>.<regionname>.handler.control` - Private configuration endpoint, part of a Data Collection Endpoint (DCE) resource
* `<unique-dce-identifier>.<regionname>.ingest` - Private ingestion endpoint, part of a Data Collection Endpoint (DCE) resource
The below screenshot shows endpoints mapped for an AMPLS with two workspaces in
- Learn about [private storage](private-storage.md) for Custom Logs and Customer managed keys (CMK)
- Learn about [Private Link for Automation](../../automation/how-to/private-link-security.md)
-- Learn about the new [Data Collection endpoints](../agents/data-collection-endpoint-overview.md)
+- Learn about the new [Data Collection endpoints](../essentials/data-collection-endpoint-overview.md)
azure-monitor Private Link Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-design.md
As discussed in the [Azure Monitor Private Link overview article](private-link-s
The simplest and most secure approach would be:
1. Create a single Private Link connection, with a single Private Endpoint and a single AMPLS. If your networks are peered, create the Private Link connection on the shared (or hub) VNet.
-2. Add *all* Azure Monitor resources (Application Insights components, Log Analytics workspaces and [Data Collection endpoints](../agents/data-collection-endpoint-overview.md)) to that AMPLS.
+2. Add *all* Azure Monitor resources (Application Insights components, Log Analytics workspaces and [Data Collection endpoints](../essentials/data-collection-endpoint-overview.md)) to that AMPLS.
+3. Block network egress traffic as much as possible.
+
+If you can't add all Azure Monitor resources to your AMPLS, you can still apply your Private Link to some resources, as explained in [Control how Private Links apply to your networks](./private-link-design.md#control-how-private-links-apply-to-your-networks). While useful, this approach is less recommended since it doesn't prevent data exfiltration.
In the following diagram, VNet1 uses the Open mode and VNet2 uses the Private On
The AMPLS object has the following limits:
* A VNet can only connect to **one** AMPLS object. That means the AMPLS object must provide access to all the Azure Monitor resources the VNet should have access to.
* An AMPLS object can connect to 300 Log Analytics workspaces and 1000 Application Insights components at most.
-* An Azure Monitor resource (Workspace or Application Insights component or [Data Collection Endpoint](../agents/data-collection-endpoint-overview.md)) can connect to 5 AMPLSs at most.
+* An Azure Monitor resource (Workspace or Application Insights component or [Data Collection Endpoint](../essentials/data-collection-endpoint-overview.md)) can connect to 5 AMPLSs at most.
* An AMPLS object can connect to 10 Private Endpoints at most.

> [!NOTE]
That granularity allows you to set access according to your needs, per workspace
Blocking queries from public networks means clients (machines, SDKs etc.) outside of the connected AMPLSs can't query data in the resource. That data includes logs, metrics, and the live metrics stream. Blocking queries from public networks affects all experiences that run these queries, such as workbooks, dashboards, Insights in the Azure portal, and queries run from outside the Azure portal.
-Your [Data Collection endpoints](../agents/data-collection-endpoint-overview.md) can be set to:
+Your [Data Collection endpoints](../essentials/data-collection-endpoint-overview.md) can be set to:
* Accept or block access from public networks (networks not connected to the resource AMPLS). See [Set resource access flags](./private-link-configure.md#set-resource-access-flags) for configuration details.
The latest versions of the Windows and Linux agents must be used to support secu
**Azure Monitor Windows agents**
-Azure Monitor Windows agent version 1.1.1.0 or higher (using [Data Collection endpoints](../agents/data-collection-endpoint-overview.md))
+Azure Monitor Windows agent version 1.1.1.0 or higher (using [Data Collection endpoints](../essentials/data-collection-endpoint-overview.md))
**Azure Monitor Linux agents**
-Azure Monitor Windows agent version 1.10.5.0 or higher (using [Data Collection endpoints](../agents/data-collection-endpoint-overview.md))
+Azure Monitor Windows agent version 1.10.5.0 or higher (using [Data Collection endpoints](../essentials/data-collection-endpoint-overview.md))
**Log Analytics Windows agent (on deprecation path)**
$ sudo /opt/microsoft/omsagent/bin/omsadmin.sh -X
$ sudo /opt/microsoft/omsagent/bin/omsadmin.sh -w <workspace id> -s <workspace key>
```

### Azure portal
-To use Azure Monitor portal experiences such as Application Insights, Log Analytics and [Data Collection endpoints](../agents/data-collection-endpoint-overview.md), you need to allow the Azure portal and Azure Monitor extensions to be accessible on the private networks. Add **AzureActiveDirectory**, **AzureResourceManager**, **AzureFrontDoor.FirstParty**, and **AzureFrontdoor.Frontend** [service tags](../../firewall/service-tags.md) to your Network Security Group.
+To use Azure Monitor portal experiences such as Application Insights, Log Analytics and [Data Collection endpoints](../essentials/data-collection-endpoint-overview.md), you need to allow the Azure portal and Azure Monitor extensions to be accessible on the private networks. Add **AzureActiveDirectory**, **AzureResourceManager**, **AzureFrontDoor.FirstParty**, and **AzureFrontdoor.Frontend** [service tags](../../firewall/service-tags.md) to your Network Security Group.
### Programmatic access
To use the REST API, [CLI](/cli/azure/monitor) or PowerShell with Azure Monitor on private networks, add the [service tags](../../virtual-network/service-tags-overview.md) **AzureActiveDirectory** and **AzureResourceManager** to your firewall.
azure-monitor Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-security.md
When configuring Private Link even for a single resource, traffic to the below e
### Resource-specific endpoints
Log Analytics endpoints are workspace-specific, except for the query endpoint discussed earlier. As a result, adding a specific Log Analytics workspace to the AMPLS will send ingestion requests to this workspace over the Private Link, while ingestion to other workspaces will continue to use the public endpoints.
-[Data Collection Endpoints](../agents/data-collection-endpoint-overview.md) are also resource-specific, and allow you to uniquely configure ingestion settings for collecting guest OS telemetry data from your machines (or set of machines) when using the new [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) and [Data Collection Rules](../agents/data-collection-rule-overview.md). Configuring a data collection endpoint for a set of machines does not affect ingestion of guest telemetry from other machines using the new agent.
+[Data Collection Endpoints](../essentials/data-collection-endpoint-overview.md) are also resource-specific, and allow you to uniquely configure ingestion settings for collecting guest OS telemetry data from your machines (or set of machines) when using the new [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) and [Data Collection Rules](../essentials/data-collection-rule-overview.md). Configuring a data collection endpoint for a set of machines does not affect ingestion of guest telemetry from other machines using the new agent.
> [!IMPORTANT]
> Starting December 1, 2021, the Private Endpoints DNS configuration will use the Endpoint Compression mechanism, which allocates a single private IP address for all workspaces in the same region. This improves the supported scale (up to 300 workspaces and 1000 components per AMPLS) and reduces the total number of IPs taken from the network's IP pool.
azure-monitor Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/restore.md
+
+ Title: Restore logs in Azure Monitor (Preview)
+description: Restore a specific time range of data in a Log Analytics workspace for high-performance queries.
+++ Last updated : 01/19/2022+++
+# Restore logs in Azure Monitor (preview)
+The restore operation makes a specific time range of data in a table available in the hot cache for high-performance queries. This article describes how to restore data, query that data, and then dismiss the data when you're done.
+
+## When to restore logs
+Use the restore operation to query data in [Archived Logs](data-retention-archive.md). You can also use the restore operation to run powerful queries within a specific time range on any Analytics table when the log queries you run on the source table cannot complete within the log query timeout of 10 minutes.
+
+> [!NOTE]
+> Restore is one method for accessing archived data. Use restore to run queries against a set of data within a particular time range. Use [Search jobs](search-jobs.md) to access data based on specific criteria.
+
+## What does restore do?
+When you restore data, you specify the source table that contains the data you want to query and the name of the new destination table to be created.
+
+The restore operation creates the restore table and allocates additional compute resources for querying the restored data using high-performance queries that support full KQL.
+
+The destination table provides a view of the underlying source data, but does not affect it in any way. The table has no retention setting, and you must explicitly [dismiss the restored data](#dismiss-restored-data) when you no longer need it.
+
+## Restore data using API
+To restore data from a table, call the **Tables - Create or Update** API. The name of the destination table must end with *_RST*.
+
+```http
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{user defined name}_RST?api-version=2021-12-01-preview
+```
+### Request body
+The body of the request must include the following values:
+
+|Name | Type | Description |
+|:--|:--|:--|
+|properties.restoredLogs.sourceTable | string | Table with the data to restore. |
+|properties.restoredLogs.startRestoreTime | string | Start of the time range to restore. |
+|properties.restoredLogs.endRestoreTime | string | End of the time range to restore. |
+
+### Restore table status
+The **provisioningState** property indicates the current state of the restore table operation. The API returns this property when you start the restore, and you can retrieve this property later using a GET operation on the table. The **provisioningState** property has one of the following values:
+
+| Value | Description |
+|:--|:--|
+| Updating | Restore operation in progress. |
+| Succeeded | Restore operation completed. |
+| Deleting | Deleting the restored table. |
+
+#### Sample request
+This sample restores data from the month of January 2020 from the *Usage* table to a table called *Usage_RST*.
+
+**Request**
+
+```http
+PUT https://management.azure.com/subscriptions/00000000-0000-0000-0000-00000000000/resourcegroups/testRG/providers/Microsoft.OperationalInsights/workspaces/testWS/tables/Usage_RST?api-version=2021-12-01-preview
+```
+
+**Request body:**
+```json
+{
+ "properties": {
+ "restoredLogs": {
+ "startRestoreTime": "2020-01-01T00:00:00Z",
+ "endRestoreTime": "2020-01-31T00:00:00Z",
+ "sourceTable": "Usage"
+ }
+ }
+}
+```
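+
+If you're working from Azure Cloud Shell, the same call can be wrapped in `Invoke-AzRestMethod`, the cmdlet used elsewhere in these articles. The following is a minimal sketch; the subscription, resource group, and workspace values in the path are placeholders.
+
+```PowerShell
+# Sketch: start the same restore operation from PowerShell.
+# Replace the path placeholders with your own values before running.
+$restoreBody = @'
+{
+  "properties": {
+    "restoredLogs": {
+      "startRestoreTime": "2020-01-01T00:00:00Z",
+      "endRestoreTime": "2020-01-31T00:00:00Z",
+      "sourceTable": "Usage"
+    }
+  }
+}
+'@
+
+Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/Microsoft.OperationalInsights/workspaces/{workspace}/tables/Usage_RST?api-version=2021-12-01-preview" -Method PUT -Payload $restoreBody
+```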
+
+## Dismiss restored data
+
+To save costs, dismiss restored data when you no longer need it by deleting the restored table.
+
+To delete a restore table, call the **Tables - Delete** API:
+
+```http
+DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{user defined name}_RST?api-version=2021-12-01-preview
+```
+Deleting the restored table does not delete the data in the source table.
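+
+The same call can be made from PowerShell; a minimal sketch with placeholder path values:
+
+```PowerShell
+# Sketch: dismiss restored data by deleting the *_RST table.
+# Replace the path placeholders with your own values before running.
+Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/Microsoft.OperationalInsights/workspaces/{workspace}/tables/Usage_RST?api-version=2021-12-01-preview" -Method DELETE
+```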
+
+> [!NOTE]
+> Restored data is available as long as the underlying source data is available. When you delete the source table from the workspace or when the source table's retention period ends, the data is dismissed from the restored table. However, the empty table will remain if you do not delete it explicitly.
+
+## Limitations
+Restore is subject to the following limitations.
+
+You can:
+
+- Restore data for a minimum of two days.
+- Restore up to 60 TB.
+- Perform up to four restores per workspace per week.
+- Run up to two restore processes in a workspace concurrently.
+- Run only one active restore on a specific table at a given time. Executing a second restore on a table that already has an active restore will fail.
+
+## Pricing model
+The charge for the restore operation is based on the volume of data you restore and the number of days the data is available. The cost of retaining data for part of a day is the same as for a full day.
+
+For example, if your table holds 500 GB a day and you restore 10 days of data, you'll be charged for 5000 GB a day until you dismiss the restored data.
+
+> [!NOTE]
+> There is no charge for restored data during the preview period.
+
+For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
+
+## Next steps
+
+- [Learn more about data retention and archiving data.](data-retention-archive.md)
+- [Learn about Search jobs, which is another method for retrieving archived data.](search-jobs.md)
azure-monitor Search Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/search-jobs.md
+
+ Title: Search jobs in Azure Monitor (Preview)
+description: Search jobs are asynchronous log queries in Azure Monitor that make results available as a table for further analytics.
+++ Last updated : 01/27/2022+++
+# Search jobs in Azure Monitor (preview)
+
+Search jobs are asynchronous queries that fetch records into a new search table within your workspace for further analytics. The search job uses parallel processing and can run for hours across extremely large datasets. This article describes how to create a search job and how to query its resulting data.
+
+## When to use search jobs
+
+Use a search job when the log query timeout of 10 minutes is not enough time to search through large volumes of data or when you are running a slow query.
+
+Search jobs also let you retrieve records from [Archived Logs](data-retention-archive.md) and [Basic Logs](basic-logs-configure.md) tables into a new log table you can use for queries. In this way, running a search job can be an alternative to:
+
+- [Restoring data from Archived Logs](restore.md) for a specific time range.<br/>
+ Use restore when you have a temporary need to run many queries on a large volume of data.
+
+- Querying Basic Logs directly and paying for each query.<br/>
+ To decide which alternative is more cost-effective, compare the cost of querying Basic Logs with the cost of performing a search job and storing the resulting data based on your needs.
+
+## What does a search job do?
+
+A search job sends its results to a new table in the same workspace as the source data. The results table is available as soon as the search job begins, but it may take time for results to begin to appear.
+
+The search job results table is a [Log Analytics](log-analytics-workspace-overview.md#log-data-plans-preview) table that is available for log queries or any other features of Azure Monitor that use tables in a workspace. The table uses the [retention value](data-retention-archive.md) set for the workspace, but you can modify this retention once the table is created.
+
+The search results table schema is based on the source table schema and the specified query. The following additional columns help you track the source records:
+
+| Column | Value |
+|:--|:--|
+| _OriginalType | *Type* value from source table. |
+| _OriginalItemId | *_ItemID* value from source table. |
+| _OriginalTimeGenerated | *TimeGenerated* value from source table. |
+| TimeGenerated | Time at which the search job retrieved the record from the original table. |
+
+Queries on the results table appear in [log query auditing](query-audit.md) but not the initial search job.
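+
+Once the search job has populated its results table, you can query it like any other table in the workspace. The following is a minimal sketch using the `Invoke-AzOperationalInsightsQuery` cmdlet; the workspace ID is a placeholder, and *Syslog_suspected_SRCH* is the results table created in the example later in this article.
+
+```PowerShell
+# Sketch: query a search job results table like any other workspace table.
+# The workspace ID is a placeholder; Syslog_suspected_SRCH is the example results table.
+$results = Invoke-AzOperationalInsightsQuery `
+    -WorkspaceId "00000000-0000-0000-0000-000000000000" `
+    -Query "Syslog_suspected_SRCH | summarize count() by _OriginalType"
+
+$results.Results
+```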
+
+## Create a search job
+To run a search job, call the **Tables - Create or Update** API. The call includes the name of the results table to be created. The name of the results table must end with *_SRCH*.
+
+```http
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/<TableName>_SRCH?api-version=2021-12-01-preview
+```
+
+### Request body
+Include the following values in the body of the request:
+
+|Name | Type | Description |
+|:--|:--|:--|
+|properties.searchResults.query | string | Log query written in KQL to retrieve data. |
+|properties.searchResults.limit | integer | Maximum number of records in the result set, up to one million records. (Optional)|
+|properties.searchResults.startSearchTime | string |Start of the time range to search. |
+|properties.searchResults.endSearchTime | string | End of the time range to search. |
++
+### Sample request
+This example creates a table called *Syslog_suspected_SRCH* with the results of a query that searches for particular records in the *Syslog* table.
+
+**Request**
+```http
+PUT https://management.azure.com/subscriptions/00000000-0000-0000-0000-00000000000/resourcegroups/testRG/providers/Microsoft.OperationalInsights/workspaces/testWS/tables/Syslog_suspected_SRCH?api-version=2021-12-01-preview
+```
+
+**Request body**
+```json
+{
+ "properties": {
+ "searchResults": {
+ "query": "Syslog | where * has 'suspected.exe'",
+ "limit": 1000,
+ "startSearchTime": "2020-01-01T00:00:00Z",
+ "endSearchTime": "2020-01-31T00:00:00Z"
+ }
+ }
+}
+```
+
+**Response**<br>
+Status code: 202 accepted.
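+
+If you're working from Azure Cloud Shell, the same request can be issued with `Invoke-AzRestMethod`. The following is a minimal sketch; the subscription, resource group, and workspace values in the path are placeholders.
+
+```PowerShell
+# Sketch: submit the same search job from PowerShell.
+# Replace the path placeholders with your own values before running.
+$searchJobBody = @'
+{
+  "properties": {
+    "searchResults": {
+      "query": "Syslog | where * has 'suspected.exe'",
+      "limit": 1000,
+      "startSearchTime": "2020-01-01T00:00:00Z",
+      "endSearchTime": "2020-01-31T00:00:00Z"
+    }
+  }
+}
+'@
+
+Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/Microsoft.OperationalInsights/workspaces/{workspace}/tables/Syslog_suspected_SRCH?api-version=2021-12-01-preview" -Method PUT -Payload $searchJobBody
+```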
++
+## Get search job status and details
+Call the **Tables - Get** API to get the status and details of a search job:
+```http
+GET https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/<TableName>_SRCH?api-version=2021-12-01-preview
+```
+
+### Table status
+Each search job table has a property called *provisioningState*, which can have one of the following values:
+
+| Status | Description |
+|:|:|
+| Updating | Populating the table and its schema. |
+| InProgress | Search job is running, fetching data. |
+| Succeeded | Search job completed. |
+| Deleting | Deleting the search job table. |
++
+#### Sample request
+This example retrieves the table status for the search job in the previous example.
+
+**Request**
+```http
+GET https://management.azure.com/subscriptions/00000000-0000-0000-0000-00000000000/resourcegroups/testRG/providers/Microsoft.OperationalInsights/workspaces/testWS/tables/Syslog_SRCH?api-version=2021-12-01-preview
+```
+
+**Response**<br>
+```json
+{
+ "properties": {
+ "retentionInDays": 30,
+ "totalRetentionInDays": 30,
+ "archiveRetentionInDays": 0,
+ "plan": "Analytics",
+ "lastPlanModifiedDate": "Mon, 01 Nov 2021 16:38:01 GMT",
+ "schema": {
+ "name": "Syslog_SRCH",
+ "tableType": "SearchResults",
+ "description": "This table was created using a Search Job with the following query: 'Syslog | where * has 'suspected.exe'.'",
+ "columns": [...],
+ "standardColumns": [...],
+ "solutions": [
+ "LogManagement"
+ ],
+ "searchResults": {
+ "query": "Syslog | where * has 'suspected.exe'",
+ "limit": 1000,
+ "startSearchTime": "Wed, 01 Jan 2020 00:00:00 GMT",
+ "endSearchTime": "Fri, 31 Jan 2020 00:00:00 GMT",
+ "sourceTable": "Syslog"
+ }
+ },
+ "provisioningState": "Succeeded"
+ },
+ "id": "subscriptions/00000000-0000-0000-0000-00000000000/resourcegroups/testRG/providers/Microsoft.OperationalInsights/workspaces/testWS/tables/Syslog_SRCH",
+ "name": "Syslog_SRCH"
+}
+```
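+
+Because the search job runs asynchronously, you may want to poll the table until *provisioningState* reports *Succeeded*. The following is a minimal polling sketch; the path placeholders are not real values.
+
+```PowerShell
+# Sketch: poll the search job table until the job finishes.
+# Replace the path placeholders with your own values before running.
+$path = "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/Microsoft.OperationalInsights/workspaces/{workspace}/tables/Syslog_suspected_SRCH?api-version=2021-12-01-preview"
+
+do {
+    Start-Sleep -Seconds 60
+    $table = (Invoke-AzRestMethod -Path $path -Method GET).Content | ConvertFrom-Json
+    Write-Host "provisioningState: $($table.properties.provisioningState)"
+} while ($table.properties.provisioningState -in @('Updating', 'InProgress'))
+```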
++
+## Delete search job table
+We recommend deleting the search job table when you're done querying the table. This reduces workspace clutter and additional charges for data retention.
+
+To delete a table, call the **Tables - Delete** API:
+
+```http
+DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/<TableName>_SRCH?api-version=2021-12-01-preview
+```
+
+## Limitations
+Search jobs are subject to the following limitations:
+
+- Optimized to query one table at a time.
+- Search date range is up to one year.
+- Supports long running searches up to a 24-hour time-out.
+- Results are limited to one million records in the record set.
+- Concurrent execution is limited to five search jobs per workspace.
+- Limited to 100 search results tables per workspace.
+- Limited to 100 search job executions per day per workspace.
+
+When you reach the record limit, Azure aborts the job with a status of *partial success*, and the table will contain only records ingested up to that point.
+
+### KQL query limitations
+Log queries in a search job are intended to scan very large sets of data. To support distribution and segmentation, the queries use a subset of KQL, including the operators:
+
+- [where](/azure/data-explorer/kusto/query/whereoperator)
+- [extend](/azure/data-explorer/kusto/query/extendoperator)
+- [project](/azure/data-explorer/kusto/query/projectoperator)
+- [project-away](/azure/data-explorer/kusto/query/projectawayoperator)
+- [project-keep](/azure/data-explorer/kusto/query/projectkeepoperator)
+- [project-rename](/azure/data-explorer/kusto/query/projectrenameoperator)
+- [project-reorder](/azure/data-explorer/kusto/query/projectreorderoperator)
+- [parse](/azure/data-explorer/kusto/query/parseoperator)
+- [parse-where](/azure/data-explorer/kusto/query/parsewhereoperator)
+
+You can use all functions and binary operators within these operators.
+
+## Pricing model
+The charge for a search job is based on:
+
+- The amount of data the search job needs to scan.
+- The amount of data ingested in the results table.
+
+For example, if your table holds 500 GB per day, for a query on three days, you'll be charged for 1500 GB of scanned data. If the job returns 1000 records, you'll be charged for ingesting these 1000 records into the results table.
+
+> [!NOTE]
+> There is no charge for search jobs during the public preview. You'll be charged only for the ingestion of the results set.
+
+For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
+
+## Next steps
+
+- [Learn more about data retention and archiving data.](data-retention-archive.md)
+- [Learn about restoring data, which is another method for retrieving archived data.](restore.md)
+- [Learn about directly querying Basic Logs.](basic-logs-query.md)
azure-monitor Tables Feature Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tables-feature-support.md
+
+ Title: Tables that support ingestion-time transformations in Azure Monitor Logs (preview)
+description: Reference for tables that support ingestion-time transformations in Azure Monitor Logs (preview).
+
+ na
Last updated : 02/22/2022++
+# Tables that support ingestion-time transformations in Azure Monitor Logs (preview)
+
+The following list identifies the tables in a [Log Analytics workspace](log-analytics-workspace-overview.md) that support [ingestion-time transformations](ingestion-time-transformations.md).
++
+| Table | Limitations |
+|:--|:--|
+| [AACAudit](/azure/azure-monitor/reference/tables/aacaudit) | |
+| [AACHttpRequest](/azure/azure-monitor/reference/tables/aachttprequest) | |
+| [AADDomainServicesAccountLogon](/azure/azure-monitor/reference/tables/aaddomainservicesaccountlogon) | |
+| [AADDomainServicesAccountManagement](/azure/azure-monitor/reference/tables/aaddomainservicesaccountmanagement) | |
+| [AADDomainServicesDirectoryServiceAccess](/azure/azure-monitor/reference/tables/aaddomainservicesdirectoryserviceaccess) | |
+| [AADDomainServicesLogonLogoff](/azure/azure-monitor/reference/tables/aaddomainserviceslogonlogoff) | |
+| [AADDomainServicesPolicyChange](/azure/azure-monitor/reference/tables/aaddomainservicespolicychange) | |
+| [AADDomainServicesPrivilegeUse](/azure/azure-monitor/reference/tables/aaddomainservicesprivilegeuse) | |
+| [AADManagedIdentitySignInLogs](/azure/azure-monitor/reference/tables/aadmanagedidentitysigninlogs) | |
+| [AADNonInteractiveUserSignInLogs](/azure/azure-monitor/reference/tables/aadnoninteractiveusersigninlogs) | |
+| [AADProvisioningLogs](/azure/azure-monitor/reference/tables/aadprovisioninglogs) | |
+| [AADRiskyUsers](/azure/azure-monitor/reference/tables/aadriskyusers) | |
+| [AADServicePrincipalSignInLogs](/azure/azure-monitor/reference/tables/aadserviceprincipalsigninlogs) | |
+| [AADUserRiskEvents](/azure/azure-monitor/reference/tables/aaduserriskevents) | |
+| [ABSBotRequests](/azure/azure-monitor/reference/tables/absbotrequests) | |
+| [ACSAuthIncomingOperations](/azure/azure-monitor/reference/tables/acsauthincomingoperations) | |
+| [ACSBillingUsage](/azure/azure-monitor/reference/tables/acsbillingusage) | |
+| [ACRConnectedClientList](/azure/azure-monitor/reference/tables/acrconnectedclientlist) | |
+| [ACSCallDiagnostics](/azure/azure-monitor/reference/tables/acscalldiagnostics) | |
+| [ACSCallSummary](/azure/azure-monitor/reference/tables/acscallsummary) | |
+| [ACSChatIncomingOperations](/azure/azure-monitor/reference/tables/acschatincomingoperations) | |
+| [ACSSMSIncomingOperations](/azure/azure-monitor/reference/tables/acssmsincomingoperations) | |
+| [ADAssessmentRecommendation](/azure/azure-monitor/reference/tables/adassessmentrecommendation) | |
+| [ADFActivityRun](/azure/azure-monitor/reference/tables/adfactivityrun) | |
+| [ADFPipelineRun](/azure/azure-monitor/reference/tables/adfpipelinerun) | |
+| [ADFSSignInLogs](/azure/azure-monitor/reference/tables/adfssigninlogs) | |
+| [ADFTriggerRun](/azure/azure-monitor/reference/tables/adftriggerrun) | |
+| [ADPAudit](/azure/azure-monitor/reference/tables/adpaudit) | |
+| [ADPDiagnostics](/azure/azure-monitor/reference/tables/adpdiagnostics) | |
+| [ADPRequests](/azure/azure-monitor/reference/tables/adprequests) | |
+| [ADReplicationResult](/azure/azure-monitor/reference/tables/adreplicationresult) | |
+| [ADSecurityAssessmentRecommendation](/azure/azure-monitor/reference/tables/adsecurityassessmentrecommendation) | |
+| [ADTDigitalTwinsOperation](/azure/azure-monitor/reference/tables/adtdigitaltwinsoperation) | |
+| [ADTEventRoutesOperation](/azure/azure-monitor/reference/tables/adteventroutesoperation) | |
+| [ADTModelsOperation](/azure/azure-monitor/reference/tables/adtmodelsoperation) | |
+| [ADTQueryOperation](/azure/azure-monitor/reference/tables/adtqueryoperation) | |
+| [ADXCommand](/azure/azure-monitor/reference/tables/adxcommand) | |
+| [ADXQuery](/azure/azure-monitor/reference/tables/adxquery) | |
+| [AegDeliveryFailureLogs](/azure/azure-monitor/reference/tables/aegdeliveryfailurelogs) | |
+| [AegPublishFailureLogs](/azure/azure-monitor/reference/tables/aegpublishfailurelogs) | |
+| [AEWAuditLogs](/azure/azure-monitor/reference/tables/aewauditlogs) | |
+| [AgriFoodApplicationAuditLogs](/azure/azure-monitor/reference/tables/agrifoodapplicationauditlogs) | |
+| [AgriFoodFarmManagementLogs](/azure/azure-monitor/reference/tables/agrifoodfarmmanagementlogs) | |
+| [AgriFoodFarmOperationLogs](/azure/azure-monitor/reference/tables/agrifoodfarmoperationlogs) | |
+| [AgriFoodInsightLogs](/azure/azure-monitor/reference/tables/agrifoodinsightlogs) | |
+| [AgriFoodJobProcessedLogs](/azure/azure-monitor/reference/tables/agrifoodjobprocessedlogs) | |
+| [AgriFoodModelInferenceLogs](/azure/azure-monitor/reference/tables/agrifoodmodelinferencelogs) | |
+| [AgriFoodProviderAuthLogs](/azure/azure-monitor/reference/tables/agrifoodproviderauthlogs) | |
+| [AgriFoodSatelliteLogs](/azure/azure-monitor/reference/tables/agrifoodsatellitelogs) | |
+| [AgriFoodWeatherLogs](/azure/azure-monitor/reference/tables/agrifoodweatherlogs) | |
+| [Alert](/azure/azure-monitor/reference/tables/alert) | |
+| [AlertEvidence](/azure/azure-monitor/reference/tables/alertevidence) | |
+| [AmlOnlineEndpointConsoleLog](/azure/azure-monitor/reference/tables/amlonlineendpointconsolelog) | |
+| [ApiManagementGatewayLogs](/azure/azure-monitor/reference/tables/apimanagementgatewaylogs) | |
+| [AppCenterError](/azure/azure-monitor/reference/tables/appcentererror) | |
+| [AppPlatformSystemLogs](/azure/azure-monitor/reference/tables/appplatformsystemlogs) | |
+| [AppServiceAppLogs](/azure/azure-monitor/reference/tables/appserviceapplogs) | |
+| [AppServiceAuditLogs](/azure/azure-monitor/reference/tables/appserviceauditlogs) | |
+| [AppServiceConsoleLogs](/azure/azure-monitor/reference/tables/appserviceconsolelogs) | |
+| [AppServiceFileAuditLogs](/azure/azure-monitor/reference/tables/appservicefileauditlogs) | |
+| [AppServiceHTTPLogs](/azure/azure-monitor/reference/tables/appservicehttplogs) | |
+| [AppServicePlatformLogs](/azure/azure-monitor/reference/tables/appserviceplatformlogs) | |
+| [ATCExpressRouteCircuitIpfix](/azure/azure-monitor/reference/tables/atcexpressroutecircuitipfix) | |
+| [AuditLogs](/azure/azure-monitor/reference/tables/auditlogs) | |
+| [AutoscaleEvaluationsLog](/azure/azure-monitor/reference/tables/autoscaleevaluationslog) | |
+| [AutoscaleScaleActionsLog](/azure/azure-monitor/reference/tables/autoscalescaleactionslog) | |
+| [AWSCloudTrail](/azure/azure-monitor/reference/tables/awscloudtrail) | |
+| [AWSGuardDuty](/azure/azure-monitor/reference/tables/awsguardduty) | |
+| [AWSVPCFlow](/azure/azure-monitor/reference/tables/awsvpcflow) | |
+| [AzureAssessmentRecommendation](/azure/azure-monitor/reference/tables/azureassessmentrecommendation) | |
+| [AzureDevOpsAuditing](/azure/azure-monitor/reference/tables/azuredevopsauditing) | |
+| [BehaviorAnalytics](/azure/azure-monitor/reference/tables/behavioranalytics) | |
+| [BlockchainApplicationLog](/azure/azure-monitor/reference/tables/blockchainapplicationlog) | |
+| [BlockchainProxyLog](/azure/azure-monitor/reference/tables/blockchainproxylog) | |
+| [CDBCassandraRequests](/azure/azure-monitor/reference/tables/cdbcassandrarequests) | |
+| [CDBControlPlaneRequests](/azure/azure-monitor/reference/tables/cdbcontrolplanerequests) | |
+| [CDBDataPlaneRequests](/azure/azure-monitor/reference/tables/cdbdataplanerequests) | |
+| [CDBGremlinRequests](/azure/azure-monitor/reference/tables/cdbgremlinrequests) | |
+| [CDBMongoRequests](/azure/azure-monitor/reference/tables/cdbmongorequests) | |
+| [CDBPartitionKeyRUConsumption](/azure/azure-monitor/reference/tables/cdbpartitionkeyruconsumption) | |
+| [CDBPartitionKeyStatistics](/azure/azure-monitor/reference/tables/cdbpartitionkeystatistics) | |
+| [CDBQueryRuntimeStatistics](/azure/azure-monitor/reference/tables/cdbqueryruntimestatistics) | |
+| [CloudAppEvents](/azure/azure-monitor/reference/tables/cloudappevents) | |
+| [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) | |
+| [ComputerGroup](/azure/azure-monitor/reference/tables/computergroup) | |
+| [ConfigurationData](/azure/azure-monitor/reference/tables/configurationdata) | Partial support – some of the data is ingested through internal services that aren't supported. |
+| [ContainerImageInventory](/azure/azure-monitor/reference/tables/containerimageinventory) | |
+| [ContainerInventory](/azure/azure-monitor/reference/tables/containerinventory) | |
+| [ContainerLog](/azure/azure-monitor/reference/tables/containerlog) | |
+| [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) | |
+| [ContainerNodeInventory](/azure/azure-monitor/reference/tables/containernodeinventory) | |
+| [ContainerServiceLog](/azure/azure-monitor/reference/tables/containerservicelog) | |
+| [CoreAzureBackup](/azure/azure-monitor/reference/tables/coreazurebackup) | |
+| [DatabricksAccounts](/azure/azure-monitor/reference/tables/databricksaccounts) | |
+| [DatabricksClusters](/azure/azure-monitor/reference/tables/databricksclusters) | |
+| [DatabricksDBFS](/azure/azure-monitor/reference/tables/databricksdbfs) | |
+| [DatabricksInstancePools](/azure/azure-monitor/reference/tables/databricksinstancepools) | |
+| [DatabricksJobs](/azure/azure-monitor/reference/tables/databricksjobs) | |
+| [DatabricksNotebook](/azure/azure-monitor/reference/tables/databricksnotebook) | |
+| [DatabricksSecrets](/azure/azure-monitor/reference/tables/databrickssecrets) | |
+| [DatabricksSQLPermissions](/azure/azure-monitor/reference/tables/databrickssqlpermissions) | |
+| [DatabricksSSH](/azure/azure-monitor/reference/tables/databricksssh) | |
+| [DatabricksWorkspace](/azure/azure-monitor/reference/tables/databricksworkspace) | |
+| [DeviceNetworkInfo](/azure/azure-monitor/reference/tables/devicenetworkinfo) | |
+| [DnsEvents](/azure/azure-monitor/reference/tables/dnsevents) | |
+| [DnsInventory](/azure/azure-monitor/reference/tables/dnsinventory) | |
+| [DummyHydrationFact](/azure/azure-monitor/reference/tables/dummyhydrationfact) | |
+| [Dynamics365Activity](/azure/azure-monitor/reference/tables/dynamics365activity) | |
+| [EmailAttachmentInfo](/azure/azure-monitor/reference/tables/emailattachmentinfo) | |
+| [EmailEvents](/azure/azure-monitor/reference/tables/emailevents) | |
+| [EmailPostDeliveryEvents](/azure/azure-monitor/reference/tables/emailpostdeliveryevents) | |
+| [EmailUrlInfo](/azure/azure-monitor/reference/tables/emailurlinfo) | |
+| [Event](/azure/azure-monitor/reference/tables/event) | Partial support – data arriving from the Log Analytics agent (MMA) or Azure Monitor Agent (AMA) is fully supported. Data arriving via the Diagnostics Extension agent is collected through storage, and this path isn't supported. |
+| [ExchangeAssessmentRecommendation](/azure/azure-monitor/reference/tables/exchangeassessmentrecommendation) | |
+| [FailedIngestion](/azure/azure-monitor/reference/tables/failedingestion) | |
+| [FunctionAppLogs](/azure/azure-monitor/reference/tables/functionapplogs) | |
+| [HDInsightAmbariClusterAlerts](/azure/azure-monitor/reference/tables/hdinsightambariclusteralerts) | |
+| [HDInsightAmbariSystemMetrics](/azure/azure-monitor/reference/tables/hdinsightambarisystemmetrics) | |
+| [HDInsightHadoopAndYarnLogs](/azure/azure-monitor/reference/tables/hdinsighthadoopandyarnlogs) | |
+| [HDInsightHadoopAndYarnMetrics](/azure/azure-monitor/reference/tables/hdinsighthadoopandyarnmetrics) | |
+| [HDInsightHBaseLogs](/azure/azure-monitor/reference/tables/hdinsighthbaselogs) | |
+| [HDInsightHBaseMetrics](/azure/azure-monitor/reference/tables/hdinsighthbasemetrics) | |
+| [HDInsightHiveAndLLAPLogs](/azure/azure-monitor/reference/tables/hdinsighthiveandllaplogs) | |
+| [HDInsightHiveAndLLAPMetrics](/azure/azure-monitor/reference/tables/hdinsighthiveandllapmetrics) | |
+| [HDInsightHiveTezAppStats](/azure/azure-monitor/reference/tables/hdinsighthivetezappstats) | |
+| [HDInsightJupyterNotebookEvents](/azure/azure-monitor/reference/tables/hdinsightjupyternotebookevents) | |
+| [HDInsightKafkaMetrics](/azure/azure-monitor/reference/tables/hdinsightkafkametrics) | |
+| [HDInsightOozieLogs](/azure/azure-monitor/reference/tables/hdinsightoozielogs) | |
+| [HDInsightRangerAuditLogs](/azure/azure-monitor/reference/tables/hdinsightrangerauditlogs) | |
+| [HDInsightSecurityLogs](/azure/azure-monitor/reference/tables/hdinsightsecuritylogs) | |
+| [HDInsightSparkApplicationEvents](/azure/azure-monitor/reference/tables/hdinsightsparkapplicationevents) | |
+| [HDInsightSparkBlockManagerEvents](/azure/azure-monitor/reference/tables/hdinsightsparkblockmanagerevents) | |
+| [HDInsightSparkEnvironmentEvents](/azure/azure-monitor/reference/tables/hdinsightsparkenvironmentevents) | |
+| [HDInsightSparkExecutorEvents](/azure/azure-monitor/reference/tables/hdinsightsparkexecutorevents) | |
+| [HDInsightSparkJobEvents](/azure/azure-monitor/reference/tables/hdinsightsparkjobevents) | |
+| [HDInsightSparkLogs](/azure/azure-monitor/reference/tables/hdinsightsparklogs) | |
+| [HDInsightSparkSQLExecutionEvents](/azure/azure-monitor/reference/tables/hdinsightsparksqlexecutionevents) | |
+| [HDInsightSparkStageEvents](/azure/azure-monitor/reference/tables/hdinsightsparkstageevents) | |
+| [HDInsightSparkStageTaskAccumulables](/azure/azure-monitor/reference/tables/hdinsightsparkstagetaskaccumulables) | |
+| [HDInsightSparkTaskEvents](/azure/azure-monitor/reference/tables/hdinsightsparktaskevents) | |
+| [HuntingBookmark](/azure/azure-monitor/reference/tables/huntingbookmark) | |
+| [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics) | Partial support – some of the data is ingested through internal services that aren't supported. |
+| [IntuneAuditLogs](/azure/azure-monitor/reference/tables/intuneauditlogs) | |
+| [IntuneDevices](/azure/azure-monitor/reference/tables/intunedevices) | |
+| [IntuneOperationalLogs](/azure/azure-monitor/reference/tables/intuneoperationallogs) | |
+| [KubeEvents](/azure/azure-monitor/reference/tables/kubeevents) | |
+| [KubeHealth](/azure/azure-monitor/reference/tables/kubehealth) | |
+| [KubeMonAgentEvents](/azure/azure-monitor/reference/tables/kubemonagentevents) | |
+| [KubeNodeInventory](/azure/azure-monitor/reference/tables/kubenodeinventory) | |
+| [KubePodInventory](/azure/azure-monitor/reference/tables/kubepodinventory) | |
+| [KubeServices](/azure/azure-monitor/reference/tables/kubeservices) | |
+| [LAQueryLogs](/azure/azure-monitor/reference/tables/laquerylogs) | |
+| [McasShadowItReporting](/azure/azure-monitor/reference/tables/mcasshadowitreporting) | |
+| [MCCEventLogs](/azure/azure-monitor/reference/tables/mcceventlogs) | |
+| [MicrosoftAzureBastionAuditLogs](/azure/azure-monitor/reference/tables/microsoftazurebastionauditlogs) | |
+| [MicrosoftDataShareReceivedSnapshotLog](/azure/azure-monitor/reference/tables/microsoftdatasharereceivedsnapshotlog) | |
+| [MicrosoftDataShareSentSnapshotLog](/azure/azure-monitor/reference/tables/microsoftdatasharesentsnapshotlog) | |
+| [MicrosoftDataShareShareLog](/azure/azure-monitor/reference/tables/microsoftdatasharesharelog) | |
+| [MicrosoftHealthcareApisAuditLogs](/azure/azure-monitor/reference/tables/microsofthealthcareapisauditlogs) | |
+| [NWConnectionMonitorPathResult](/azure/azure-monitor/reference/tables/nwconnectionmonitorpathresult) | |
+| [NWConnectionMonitorTestResult](/azure/azure-monitor/reference/tables/nwconnectionmonitortestresult) | |
+| [OfficeActivity](/azure/azure-monitor/reference/tables/officeactivity) | |
+| [Perf](/azure/azure-monitor/reference/tables/perf) | Partial support – only Windows performance data is currently supported. |
+| [PowerBIDatasetsWorkspace](/azure/azure-monitor/reference/tables/powerbidatasetsworkspace) | |
+| [PurviewScanStatusLogs](/azure/azure-monitor/reference/tables/purviewscanstatuslogs) | |
+| [SCCMAssessmentRecommendation](/azure/azure-monitor/reference/tables/sccmassessmentrecommendation) | |
+| [SCOMAssessmentRecommendation](/azure/azure-monitor/reference/tables/scomassessmentrecommendation) | |
+| [SecurityAlert](/azure/azure-monitor/reference/tables/securityalert) | |
+| [SecurityBaseline](/azure/azure-monitor/reference/tables/securitybaseline) | |
+| [SecurityBaselineSummary](/azure/azure-monitor/reference/tables/securitybaselinesummary) | |
+| [SecurityDetection](/azure/azure-monitor/reference/tables/securitydetection) | |
+| [SecurityEvent](/azure/azure-monitor/reference/tables/securityevent) | Partial support – data arriving from the Log Analytics agent (MMA) or Azure Monitor Agent (AMA) is fully supported. Data arriving via the Diagnostics Extension agent is collected through storage, and this path isn't supported. |
+| [SecurityIncident](/azure/azure-monitor/reference/tables/securityincident) | |
+| [SecurityIoTRawEvent](/azure/azure-monitor/reference/tables/securityiotrawevent) | |
+| [SecurityNestedRecommendation](/azure/azure-monitor/reference/tables/securitynestedrecommendation) | |
+| [SecurityRecommendation](/azure/azure-monitor/reference/tables/securityrecommendation) | |
+| [SentinelHealth](/azure/azure-monitor/reference/tables/sentinelhealth) | |
+| [SfBAssessmentRecommendation](/azure/azure-monitor/reference/tables/sfbassessmentrecommendation) | |
+| [SfBOnlineAssessmentRecommendation](/azure/azure-monitor/reference/tables/sfbonlineassessmentrecommendation) | |
+| [SharePointOnlineAssessmentRecommendation](/azure/azure-monitor/reference/tables/sharepointonlineassessmentrecommendation) | |
+| [SignalRServiceDiagnosticLogs](/azure/azure-monitor/reference/tables/signalrservicediagnosticlogs) | |
+| [SigninLogs](/azure/azure-monitor/reference/tables/signinlogs) | |
+| [SPAssessmentRecommendation](/azure/azure-monitor/reference/tables/spassessmentrecommendation) | |
+| [SQLAssessmentRecommendation](/azure/azure-monitor/reference/tables/sqlassessmentrecommendation) | |
+| [SQLSecurityAuditEvents](/azure/azure-monitor/reference/tables/sqlsecurityauditevents) | |
+| [SucceededIngestion](/azure/azure-monitor/reference/tables/succeededingestion) | |
+| [SynapseBigDataPoolApplicationsEnded](/azure/azure-monitor/reference/tables/synapsebigdatapoolapplicationsended) | |
+| [SynapseBuiltinSqlPoolRequestsEnded](/azure/azure-monitor/reference/tables/synapsebuiltinsqlpoolrequestsended) | |
+| [SynapseGatewayApiRequests](/azure/azure-monitor/reference/tables/synapsegatewayapirequests) | |
+| [SynapseIntegrationActivityRuns](/azure/azure-monitor/reference/tables/synapseintegrationactivityruns) | |
+| [SynapseIntegrationPipelineRuns](/azure/azure-monitor/reference/tables/synapseintegrationpipelineruns) | |
+| [SynapseIntegrationTriggerRuns](/azure/azure-monitor/reference/tables/synapseintegrationtriggerruns) | |
+| [SynapseRbacOperations](/azure/azure-monitor/reference/tables/synapserbacoperations) | |
+| [SynapseSqlPoolDmsWorkers](/azure/azure-monitor/reference/tables/synapsesqlpooldmsworkers) | |
+| [SynapseSqlPoolExecRequests](/azure/azure-monitor/reference/tables/synapsesqlpoolexecrequests) | |
+| [SynapseSqlPoolRequestSteps](/azure/azure-monitor/reference/tables/synapsesqlpoolrequeststeps) | |
+| [SynapseSqlPoolSqlRequests](/azure/azure-monitor/reference/tables/synapsesqlpoolsqlrequests) | |
+| [SynapseSqlPoolWaits](/azure/azure-monitor/reference/tables/synapsesqlpoolwaits) | |
+| [Syslog](/azure/azure-monitor/reference/tables/syslog) | Partial support – data arriving from the Log Analytics agent (MMA) or Azure Monitor Agent (AMA) is fully supported. Data arriving via the Diagnostics Extension agent is collected through storage, and this path isn't supported. |
+| [ThreatIntelligenceIndicator](/azure/azure-monitor/reference/tables/threatintelligenceindicator) | |
+| [Update](/azure/azure-monitor/reference/tables/update) | Partial support – some of the data is ingested through internal services that aren't supported. |
+| [UpdateRunProgress](/azure/azure-monitor/reference/tables/updaterunprogress) | |
+| [UpdateSummary](/azure/azure-monitor/reference/tables/updatesummary) | |
+| [UserAccessAnalytics](/azure/azure-monitor/reference/tables/useraccessanalytics) | |
+| [UserPeerAnalytics](/azure/azure-monitor/reference/tables/userpeeranalytics) | |
+| [Watchlist](/azure/azure-monitor/reference/tables/watchlist) | |
+| [WindowsEvent](/azure/azure-monitor/reference/tables/windowsevent) | |
+| [WindowsFirewall](/azure/azure-monitor/reference/tables/windowsfirewall) | |
+| [WireData](/azure/azure-monitor/reference/tables/wiredata) | Partial support – some of the data is ingested through internal services that aren't supported. |
+| [WorkloadDiagnosticLogs](/azure/azure-monitor/reference/tables/workloaddiagnosticlogs) | |
+| [WVDAgentHealthStatus](/azure/azure-monitor/reference/tables/wvdagenthealthstatus) | |
+| [WVDCheckpoints](/azure/azure-monitor/reference/tables/wvdcheckpoints) | |
+| [WVDConnections](/azure/azure-monitor/reference/tables/wvdconnections) | |
+| [WVDErrors](/azure/azure-monitor/reference/tables/wvderrors) | |
+| [WVDFeeds](/azure/azure-monitor/reference/tables/wvdfeeds) | |
+| [WVDManagement](/azure/azure-monitor/reference/tables/wvdmanagement) | |
azure-monitor Tutorial Custom Logs Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-custom-logs-api.md
+
+ Title: Tutorial - Send custom logs to Azure Monitor Logs using resource manager templates
+description: Tutorial on how to send custom logs to a Log Analytics workspace in Azure Monitor using resource manager templates.
++ Last updated : 01/19/2022++
+# Tutorial: Send custom logs to Azure Monitor Logs using resource manager templates (preview)
+[Custom logs](custom-logs-overview.md) in Azure Monitor allow you to send custom data to tables in a Log Analytics workspace with a REST API. This tutorial walks through configuration of a new table and a sample application to send custom logs to Azure Monitor using resource manager templates.
+
+> [!NOTE]
+> This tutorial uses resource manager templates and REST API to configure custom logs. See [Tutorial: Send custom logs to Azure Monitor Logs using the Azure portal (preview)](tutorial-custom-logs.md) for a similar tutorial using the Azure portal.
+
+In this tutorial, you learn to:
+
+> [!div class="checklist"]
+> * Create a custom table in a Log Analytics workspace
+> * Create a data collection endpoint to receive data over HTTP
+> * Create a data collection rule that transforms incoming data to match the schema of the target table
+> * Create a sample application to send custom data to Azure Monitor
++
+> [!NOTE]
+> This tutorial uses PowerShell from Azure Cloud Shell to make REST API calls using the Azure Monitor **Tables** API and the Azure portal to install resource manager templates. You can use any other method to make these calls.
+
+## Prerequisites
+To complete this tutorial, you need the following:
+
+- Log Analytics workspace where you have at least [contributor rights](manage-access.md#manage-access-using-azure-permissions).
+- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
+
+## Collect workspace details
+Start by gathering information that you'll need from your workspace.
+
+1. Navigate to your workspace in the **Log Analytics workspaces** menu in the Azure Portal. From the **Properties** page, copy the **Resource ID** and save it for later use.
+
+ :::image type="content" source="media/tutorial-custom-logs-api/workspace-resource-id.png" lightbox="media/tutorial-custom-logs-api/workspace-resource-id.png" alt-text="Screenshot showing workspace resource ID.":::
+
+## Configure application
+Start by registering an Azure Active Directory application to authenticate against the API. Any ARM authentication scheme is supported, but this tutorial follows the [Client Credential Grant Flow scheme](/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow); a token-request sketch follows the registration steps below.
+
+1. From the **Azure Active Directory** menu in the Azure portal, select **App registrations** and then **New registration**.
+
+ :::image type="content" source="media/tutorial-custom-logs/new-app-registration.png" lightbox="media/tutorial-custom-logs/new-app-registration.png" alt-text="Screenshot showing app registration screen.":::
+
+2. Give the application a name and change the tenancy scope if the default is not appropriate for your environment. A **Redirect URI** isn't required.
+
+ :::image type="content" source="media/tutorial-custom-logs/new-app-name.png" lightbox="media/tutorial-custom-logs/new-app-name.png" alt-text="Screenshot showing app details.":::
+
+3. Once registered, you can view the details of the application. Note the **Application (client) ID** and the **Directory (tenant) ID**. You'll need these values later in the process.
+
+ :::image type="content" source="media/tutorial-custom-logs/new-app-id.png" lightbox="media/tutorial-custom-logs/new-app-id.png" alt-text="Screenshot showing app id.":::
+
+4. You now need to generate an application client secret, which is similar to creating a password to use with a username. Select **Certificates & secrets** and then **New client secret**. Give the secret a name to identify its purpose and select an **Expires** duration. *1 year* is selected here, although for a production implementation you would follow best practices for a secret rotation procedure or use a more secure authentication mode such as a certificate.
+
+ :::image type="content" source="media/tutorial-custom-logs/new-app-secret.png" lightbox="media/tutorial-custom-logs/new-app-secret.png" alt-text="Screenshot showing secret for new app.":::
+
+5. Click **Add** to save the secret and then note the **Value**. Ensure that you record this value since you can't recover it once you navigate away from this page. Use the same security measures as you would for safekeeping a password as it's the functional equivalent.
+
+ :::image type="content" source="media/tutorial-custom-logs/new-app-secret-value.png" lightbox="media/tutorial-custom-logs/new-app-secret-value.png" alt-text="Screenshot show secret value for new app.":::
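+
+With the application (client) ID, directory (tenant) ID, and client secret recorded, the client credential grant is a single POST to the Azure AD token endpoint. The following is a minimal sketch only; the scope value shown (`https://monitor.azure.com//.default`) is an assumption for the logs ingestion endpoint, and the ID and secret values are placeholders for the values you noted above.
+
+```PowerShell
+# Sketch: request a bearer token with the client credentials grant.
+# The scope value is an assumption for the logs ingestion endpoint; the IDs and secret are placeholders.
+$tenantId  = "00000000-0000-0000-0000-000000000000"   # Directory (tenant) ID
+$appId     = "00000000-0000-0000-0000-000000000000"   # Application (client) ID
+$appSecret = "your-client-secret-value"               # Client secret value
+
+$tokenBody = @{
+    client_id     = $appId
+    client_secret = $appSecret
+    scope         = "https://monitor.azure.com//.default"
+    grant_type    = "client_credentials"
+}
+
+$tokenResponse = Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" -Body $tokenBody
+$bearerToken = $tokenResponse.access_token
+```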
+
+## Create new table in Log Analytics workspace
+The custom table must be created before you can send data to it. The table for this tutorial will include three columns, as described in the schema below. The `name`, `type`, and `description` properties are mandatory for each column. The properties `isHidden` and `isDefaultDisplay` both default to `false` if not explicitly specified. Possible data types are `string`, `int`, `long`, `real`, `boolean`, `dateTime`, `guid`, and `dynamic`.
+
+Use the **Tables - Update** API to create the table with the PowerShell code below.
+
+> [!IMPORTANT]
+> Custom tables must use a suffix of *_CL*.
+
+1. Click the **Cloud Shell** button in the Azure portal and ensure the environment is set to **PowerShell**.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations-api/open-cloud-shell.png" lightbox="media/tutorial-ingestion-time-transformations-api/open-cloud-shell.png" alt-text="Screenshot of opening cloud shell":::
+
+2. Copy the following PowerShell code and replace the **Path** parameter with the appropriate values for your workspace in the `Invoke-AzRestMethod` command. Paste it into the cloud shell prompt to run it.
+
+ ```PowerShell
+ $tableParams = @'
+ {
+ "properties": {
+ "schema": {
+ "name": "MyTable_CL",
+ "columns": [
+ {
+ "name": "TimeGenerated",
+ "type": "datetime",
+ "description": "The time at which the data was generated"
+ },
+ {
+ "name": "AdditionalContext",
+ "type": "dynamic",
+ "description": "Additional message properties"
+ },
+ {
+ "name": "ExtendedColumn",
+ "type": "string",
+ "description": "An additional column extended at ingestion time"
+ }
+ ]
+ }
+ }
+ }
+ '@
+
+ Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}/tables/MyTable_CL?api-version=2021-12-01-preview" -Method PUT -payload $tableParams
+ ```
++
+## Create data collection endpoint
+A [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overview.md) is required to accept the data being sent to Azure Monitor. Once you configure the DCE and link it to a data collection rule, you can send data over HTTP from your application. The DCE must be located in the same region as the Log Analytics Workspace where the data will be sent.
+
+1. In the Azure portal's search box, type in *template* and then select **Deploy a custom template**.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations-api/deploy-custom-template.png" lightbox="media/tutorial-ingestion-time-transformations-api/deploy-custom-template.png" alt-text="Screenshot to deploy custom template.":::
+
+2. Click **Build your own template in the editor**.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations-api/build-custom-template.png" lightbox="media/tutorial-ingestion-time-transformations-api/build-custom-template.png" alt-text="Screenshot to build template in the editor.":::
+
+3. Paste the resource manager template below into the editor and then click **Save**. You don't need to modify this template since you will provide values for its parameters.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations-api/edit-template.png" lightbox="media/tutorial-ingestion-time-transformations-api/edit-template.png" alt-text="Screenshot to edit resource manager template.":::
++
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "dataCollectionEndpointName": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the name of the Data Collection Endpoint to create."
+ }
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "westus2",
+ "allowedValues": [
+ "westus2",
+ "eastus2",
+ "eastus2euap"
+ ],
+ "metadata": {
+ "description": "Specifies the location in which to create the Data Collection Endpoint."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/dataCollectionEndpoints",
+ "name": "[parameters('dataCollectionEndpointName')]",
+ "location": "[parameters('location')]",
+ "apiVersion": "2021-04-01",
+ "properties": {
+ "networkAcls": {
+ "publicNetworkAccess": "Enabled"
+ }
+ }
+ }
+ ],
+ "outputs": {
+ "dataCollectionEndpointId": {
+ "type": "string",
+ "value": "[resourceId('Microsoft.Insights/dataCollectionEndpoints', parameters('dataCollectionEndpointName'))]"
+ }
+ }
+ }
+ ```
+
+4. On the **Custom deployment** screen, specify a **Subscription** and **Resource group** to store the data collection endpoint and then provide a **Name** for it. The **Location** should be the same as the workspace's location. The **Region** will already be populated and is used for the location of the data collection endpoint.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations-api/custom-deployment-values.png" lightbox="media/tutorial-ingestion-time-transformations-api/custom-deployment-values.png" alt-text="Screenshot to edit custom deployment values.":::
+
+5. Click **Review + create** and then **Create** when you review the details.
+
+6. Once the DCE is created, select it so you can view its properties. Note the **Logs ingestion** URI since you'll need this in a later step.
+
+ :::image type="content" source="media/tutorial-custom-logs-api/data-collection-endpoint-overview.png" lightbox="media/tutorial-custom-logs-api/data-collection-endpoint-overview.png" alt-text="Screenshot for data collection endpoint uri.":::
+
+7. Click **JSON View** to view other details for the DCE. Copy the **Resource ID** since you'll need this in a later step.
+
+ :::image type="content" source="media/tutorial-custom-logs-api/data-collection-endpoint-json.png" lightbox="media/tutorial-custom-logs-api/data-collection-endpoint-json.png" alt-text="Screenshot for data collection endpoint resource ID.":::
++
+## Create data collection rule
+The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) defines the schema of the data that's being sent to the HTTP endpoint, the transformation that will be applied to it, and the destination workspace and table that the transformed data will be sent to.
+
+1. In the Azure portal's search box, type in *template* and then select **Deploy a custom template**.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations-api/deploy-custom-template.png" lightbox="media/tutorial-ingestion-time-transformations-api/deploy-custom-template.png" alt-text="Screenshot to deploy custom template.":::
+
+2. Click **Build your own template in the editor**.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations-api/build-custom-template.png" lightbox="media/tutorial-ingestion-time-transformations-api/build-custom-template.png" alt-text="Screenshot to build template in the editor.":::
+
+3. Paste the resource manager template below into the editor and then click **Save**.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations-api/edit-template.png" lightbox="media/tutorial-ingestion-time-transformations-api/edit-template.png" alt-text="Screenshot to edit resource manager template.":::
+
+ Notice the following details in the DCR defined in this template:
+
+ - `dataCollectionEndpointId`: Resource ID of the data collection endpoint.
+ - `streamDeclarations`: Defines the columns of the incoming data.
+ - `destinations`: Specifies the destination workspace.
+ - `dataFlows`: Matches the stream with the destination workspace and specifies the transformation query and the destination table.
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "dataCollectionRuleName": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the name of the Data Collection Rule to create."
+ }
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "westus2",
+ "allowedValues": [
+ "westus2",
+ "eastus2",
+ "eastus2euap"
+ ],
+ "metadata": {
+ "description": "Specifies the location in which to create the Data Collection Rule."
+ }
+ },
+ "workspaceResourceId": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the Azure resource ID of the Log Analytics workspace to use."
+ }
+ },
+ "endpointResourceId": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the Azure resource ID of the Data Collection Endpoint to use."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/dataCollectionRules",
+ "name": "[parameters('dataCollectionRuleName')]",
+ "location": "[parameters('location')]",
+ "apiVersion": "2021-09-01-preview",
+ "properties": {
+ "dataCollectionEndpointId": "[parameters('endpointResourceId')]",
+ "streamDeclarations": {
+ "Custom-MyTableRawData": {
+ "columns": [
+ {
+ "name": "Time",
+ "type": "datetime"
+ },
+ {
+ "name": "Computer",
+ "type": "string"
+ },
+ {
+ "name": "AdditionalContext",
+ "type": "string"
+ }
+ ]
+ }
+ },
+ "destinations": {
+ "logAnalytics": [
+ {
+ "workspaceResourceId": "[parameters('workspaceResourceId')]",
+ "name": "clv2ws1"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Custom-MyTableRawData"
+ ],
+ "destinations": [
+ "clv2ws1"
+ ],
+ "transformKql": "source | extend jsonContext = parse_json(AdditionalContext) | project TimeGenerated = Time, Computer, AdditionalContext = jsonContext, ExtendedColumn=tostring(jsonContext.CounterName)",
+ "outputStream": "Custom-MyTable_CL"
+ }
+ ]
+ }
+ }
+ ],
+ "outputs": {
+ "dataCollectionRuleId": {
+ "type": "string",
+ "value": "[resourceId('Microsoft.Insights/dataCollectionRules', parameters('dataCollectionRuleName'))]"
+ }
+ }
+ }
+ ```
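+
+   The `transformKql` value must be written as a single line in the template. For readability only, the following is the same transformation query broken across multiple lines:
+
+   ```kusto
+   source
+   | extend jsonContext = parse_json(AdditionalContext)
+   | project TimeGenerated = Time,
+             Computer,
+             AdditionalContext = jsonContext,
+             ExtendedColumn = tostring(jsonContext.CounterName)
+   ```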
+
+4. On the **Custom deployment** screen, specify a **Subscription** and **Resource group** to store the data collection rule and then provide values defined in the template. This includes a **Name** for the data collection rule and the **Workspace Resource ID** that you collected in a previous step. The **Location** should be the same location as the workspace. The **Region** will already be populated and is used for the location of the data collection rule.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations-api/custom-deployment-values.png" lightbox="media/tutorial-ingestion-time-transformations-api/custom-deployment-values.png" alt-text="Screenshot to edit custom deployment values.":::
+
+5. Click **Review + create** and then **Create** when you review the details.
+
+6. When the deployment is complete, expand the **Deployment details** box and click on your data collection rule to view its details. Click **JSON View**.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations-api/data-collection-rule-details.png" lightbox="media/tutorial-ingestion-time-transformations-api/data-collection-rule-details.png" alt-text="Screenshot for data collection rule details.":::
+
+7. Copy the **Resource ID** for the data collection rule. You'll use this in the next step.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations-api/data-collection-rule-json-view.png" lightbox="media/tutorial-ingestion-time-transformations-api/data-collection-rule-json-view.png" alt-text="Screenshot for data collection rule JSON view.":::
+
+ > [!NOTE]
+ > All of the properties of the DCR, such as the transformation, may not be displayed in the Azure portal even though the DCR was successfully created with those properties.
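+
+   If you want to confirm that the transformation was stored, one option is to retrieve the full DCR definition through the REST API. The following is a sketch that uses the same `Invoke-AzRestMethod` cmdlet as earlier; substitute your own subscription, resource group, and DCR name for the placeholders.
+
+   ```PowerShell
+   # Sketch only: retrieve the full DCR definition, including the transformKql property.
+   (Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.insights/dataCollectionRules/{DCR}?api-version=2021-09-01-preview" -Method GET).Content
+   ```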
++
+## Assign permissions to DCR
+Once the data collection rule has been created, the application needs to be given permission to it. This will allow any application using the correct application ID and application key to send data to the new DCE and DCR.
+
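+The steps below use the Azure portal. As an alternative, the same assignment can be scripted with Azure PowerShell. The following is only a sketch; it assumes the Az.Resources module is available and uses placeholder values that you must replace with your own application ID and DCR resource ID.
+
+```PowerShell
+# Sketch only: look up the service principal for the app registration, then assign the role on the DCR.
+$sp = Get-AzADServicePrincipal -ApplicationId "00000000-0000-0000-0000-000000000000"
+New-AzRoleAssignment -ObjectId $sp.Id -RoleDefinitionName "Monitoring Metrics Publisher" -Scope "/subscriptions/{subscription}/resourceGroups/{resourcegroup}/providers/Microsoft.Insights/dataCollectionRules/{DCR}"
+```
+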
+1. From the DCR in the Azure portal, select **Access Control (IAM)** and then **Add role assignment**.
+
+ :::image type="content" source="media/tutorial-custom-logs/add-role-assignment.png" lightbox="media/tutorial-custom-logs/custom-log-create.png" alt-text="Screenshot for adding custom role assignment to DCR.":::
+
+2. Select **Monitoring Metrics Publisher** and click **Next**. You could instead create a custom action with the `Microsoft.Insights/Telemetry/Write` data action.
+
+ :::image type="content" source="media/tutorial-custom-logs/add-role-assignment-select-role.png" lightbox="media/tutorial-custom-logs/add-role-assignment-select-role.png" alt-text="Screenshot for selecting role for DCR role assignment.":::
+
+3. Select **User, group, or service principal** for **Assign access to** and click **Select members**. Select the application that you created and click **Select**.
+
+ :::image type="content" source="media/tutorial-custom-logs/add-role-assignment-select-member.png" lightbox="media/tutorial-custom-logs/add-role-assignment-select-member.png" alt-text="Screenshot for selecting members for DCR role assignment.":::
++
+4. Click **Review + assign** and verify the details before saving your role assignment.
+
+ :::image type="content" source="media/tutorial-custom-logs/add-role-assignment-save.png" lightbox="media/tutorial-custom-logs/add-role-assignment-save.png" alt-text="Screenshot for saving DCR role assignment.":::
++
+## Send sample data
+The following PowerShell code sends data to the endpoint using HTTP REST fundamentals.
+
+1. Replace the parameters in the *step 0* section with values from the resources that you just created. You may also want to replace the sample data in the *step 2* section with your own.
+
+ ```powershell
+ ##################
+ ### Step 0: set parameters required for the rest of the script
+ ##################
+ #information needed to authenticate to AAD and obtain a bearer token
+ $tenantId = "00000000-0000-0000-0000-000000000000"; #Tenant ID the data collection endpoint resides in
+ $appId = "00000000-0000-0000-0000-000000000000"; #Application ID created and granted permissions
+ $appSecret = "00000000000000000000000"; #Secret created for the application
+
+ #information needed to send data to the DCR endpoint
+ $dcrImmutableId = "dcr-000000000000000"; #the immutableId property of the DCR object
+ $dceEndpoint = "https://my-dcr-name.westus2-1.ingest.monitor.azure.com"; #the endpoint property of the Data Collection Endpoint object
+
+ ##################
+ ### Step 1: obtain a bearer token used later to authenticate against the DCE
+ ##################
+ $scope= [System.Web.HttpUtility]::UrlEncode("https://monitor.azure.com//.default")
+ $body = "client_id=$appId&scope=$scope&client_secret=$appSecret&grant_type=client_credentials";
+ $headers = @{"Content-Type"="application/x-www-form-urlencoded"};
+ $uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"
+
+ $bearerToken = (Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers).access_token
+ ### If the above line throws an 'Unable to find type [System.Web.HttpUtility].' error, execute the line below separately from the rest of the code
+ # Add-Type -AssemblyName System.Web
+
+ ##################
+ ### Step 2: Load up some sample data.
+ ##################
+ $currentTime = Get-Date ([datetime]::UtcNow) -Format O
+ $staticData = @"
+ [
+ {
+ "Time": "$currentTime",
+ "Computer": "Computer1",
+ "AdditionalContext": {
+ "InstanceName": "user1",
+ "TimeZone": "Pacific Time",
+ "Level": 4,
+ "CounterName": "AppMetric1",
+ "CounterValue": 15.3
+ }
+ },
+ {
+ "Time": "$currentTime",
+ "Computer": "Computer2",
+ "AdditionalContext": {
+ "InstanceName": "user2",
+ "TimeZone": "Central Time",
+ "Level": 3,
+ "CounterName": "AppMetric1",
+ "CounterValue": 23.5
+ }
+ }
+ ]
+ "@;
+
+ ##################
+ ### Step 3: send the data to Log Analytics via the DCE.
+ ##################
+ $body = $staticData;
+ $headers = @{"Authorization"="Bearer $bearerToken";"Content-Type"="application/json"};
+ $uri = "$dceEndpoint/dataCollectionRules/$dcrImmutableId/streams/Custom-MyTableRawData?api-version=2021-11-01-preview"
+
+ $uploadResponse = Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers -TransferEncoding "GZip"
+ ```
+
+   > [!NOTE]
+   > If you receive an `Unable to find type [System.Web.HttpUtility].` error, run the last line in section 1 of the script (`Add-Type -AssemblyName System.Web`) by itself, and then run the full script again. Executing it uncommented as part of the script will not resolve the issue - the command must be executed separately.
+
+2. After executing this script, you should see an `HTTP - 200 OK` response, and within a few minutes the data should arrive in your Log Analytics workspace.
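+
+   For example, once ingestion completes you can run a query like the following in Log Analytics against the new table (illustrative only; adjust the time range as needed):
+
+   ```kusto
+   MyTable_CL
+   | where TimeGenerated > ago(1h)
+   | take 10
+   ```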
+
+## Troubleshooting
+This section describes different error conditions you may receive and how to correct them.
+
+### Script returns error code 403
+Ensure that you have the correct permissions for your application to the DCR. You may also need to wait up to 30 minutes for permissions to propagate.
+
+### Script returns error code 413 or warning of `TimeoutExpired` with the message `ReadyBody_ClientConnectionAbort` in the response
+The message is too large. The maximum message size is currently 1 MB per call.
+
+### Script returns error code 429
+API limits have been exceeded. The limits are currently set to 500 MB of data per minute for both compressed and uncompressed data, as well as 300,000 requests per minute. Retry after the duration listed in the `Retry-After` header in the response.
+
+### Script returns error code 503
+Ensure that you have the correct permissions for your application to the DCR. You may also need to wait up to 30 minutes for permissions to propagate.
+
+### Script returns error `Unable to find type [System.Web.HttpUtility]`
+Run the last line in section 1 of the script (`Add-Type -AssemblyName System.Web`) by itself and then run the script again. Executing it uncommented as part of the script will not resolve the issue. The command must be executed separately.
+
+### You don't receive an error, but data doesn't appear in the workspace
+The data may take some time to be ingested, especially if this is the first time data is being sent to a particular table. It shouldn't take longer than 15 minutes.
+
+### IntelliSense in Log Analytics not recognizing new table
+The cache that drives IntelliSense may take up to 24 hours to update.
+
+## Next steps
+
+- [Complete a similar tutorial using the Azure portal.](tutorial-custom-logs.md)
+- [Read more about custom logs.](custom-logs-overview.md)
+- [Learn more about writing transformation queries](../essentials/data-collection-rule-transformations.md)
azure-monitor Tutorial Custom Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-custom-logs.md
+
+ Title: Tutorial - Send custom logs to Azure Monitor Logs (preview)
+description: Tutorial on how to send custom logs to a Log Analytics workspace in Azure Monitor using the Azure portal.
++ Last updated : 01/19/2022++
+# Tutorial: Send custom logs to Azure Monitor Logs using the Azure portal (preview)
+[Custom logs](custom-logs-overview.md) in Azure Monitor allow you to send external data to a Log Analytics workspace with a REST API. This tutorial walks through configuration of a new table and a sample application to send custom logs to Azure Monitor.
+
+> [!NOTE]
+> This tutorial uses the Azure portal. See [Tutorial: Send custom logs to Azure Monitor Logs using resource manager templates (preview)](tutorial-custom-logs-api.md) for a similar tutorial using resource manager templates.
+
+In this tutorial, you learn to:
+
+> [!div class="checklist"]
+> * Create a custom table in a Log Analytics workspace
+> * Create a data collection endpoint to receive data over HTTP
+> * Create a data collection rule that transforms incoming data to match the schema of the target table
+> * Create a sample application to send custom data to Azure Monitor
++
+## Prerequisites
+To complete this tutorial, you need the following:
+
+- A Log Analytics workspace where you have at least [contributor rights](manage-access.md#manage-access-using-azure-permissions).
+- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
++
+## Overview of tutorial
+In this tutorial, you'll use a PowerShell script to send sample Apache access logs over HTTP to the API endpoint. The script first converts the data to the JSON format that the Azure Monitor custom logs API requires. The data is then further converted with a transformation in a data collection rule (DCR) that filters out records that shouldn't be ingested and creates the columns required for the destination table. Once the configuration is complete, you'll send sample data from the command line and then inspect the results in Log Analytics.
++
+## Configure application
+Start by registering an Azure Active Directory application to authenticate against the API. Any ARM authentication scheme is supported, but this tutorial follows the [Client Credential Grant Flow scheme](/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow).
+
+1. From the **Azure Active Directory** menu in the Azure portal, select **App registrations** and then **New registration**.
+
+ :::image type="content" source="media/tutorial-custom-logs/new-app-registration.png" lightbox="media/tutorial-custom-logs/new-app-registration.png" alt-text="Screenshot showing app registration screen.":::
+
+2. Give the application a name and change the tenancy scope if the default is not appropriate for your environment. A **Redirect URI** isn't required.
+
+ :::image type="content" source="media/tutorial-custom-logs/new-app-name.png" lightbox="media/tutorial-custom-logs/new-app-name.png" alt-text="Screenshot showing app details.":::
+
+3. Once registered, you can view the details of the application. Note the **Application (client) ID** and the **Directory (tenant) ID**. You'll need these values later in the process.
+
+ :::image type="content" source="media/tutorial-custom-logs/new-app-id.png" lightbox="media/tutorial-custom-logs/new-app-id.png" alt-text="Screenshot showing app id.":::
+
+4. You now need to generate an application client secret, which is similar to creating a password to use with a username. Select **Certificates & secrets** and then **New client secret**. Give the secret a name to identify its purpose and select an **Expires** duration. *1 year* is selected here, although for a production implementation you would follow best practices for a secret rotation procedure or use a more secure authentication mode such as a certificate.
+
+ :::image type="content" source="media/tutorial-custom-logs/new-app-secret.png" lightbox="media/tutorial-custom-logs/new-app-secret.png" alt-text="Screenshot showing secret for new app.":::
+
+5. Click **Add** to save the secret and then note the **Value**. Ensure that you record this value since you can't recover it once you navigate away from this page. Use the same security measures as you would for safekeeping a password since it's the functional equivalent.
+
+ :::image type="content" source="media/tutorial-custom-logs/new-app-secret-value.png" lightbox="media/tutorial-custom-logs/new-app-secret-value.png" alt-text="Screenshot show secret value for new app.":::
+
+## Create data collection endpoint
+A [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overview.md) is required to accept the data from the script. Once you configure the DCE and link it to a data collection rule, you can send data over HTTP from your application. The DCE must be located in the same region as the Log Analytics workspace where the data will be sent.
+
+1. To create a new DCE, go to the **Monitor** menu in the Azure portal. Select **Data Collection Endpoints** and then **Create**.
+
+ :::image type="content" source="media/tutorial-custom-logs/new-data-collection-endpoint.png" lightbox="media/tutorial-custom-logs/new-data-collection-endpoint.png" alt-text="Screenshot showing new data collection endpoint.":::
+
+2. Provide a name for the DCE and ensure that it's in the same region as your workspace. Click **Create** to create the DCE.
+
+ :::image type="content" source="media/tutorial-custom-logs/data-collection-endpoint-details.png" lightbox="media/tutorial-custom-logs/data-collection-endpoint-details.png" alt-text="Screenshot showing data collection endpoint details.":::
+
+3. Once the DCE is created, select it so you can view its properties. Note the **Logs ingestion** URI since you'll need this in a later step.
+
+ :::image type="content" source="media/tutorial-custom-logs/data-collection-endpoint-uri.png" lightbox="media/tutorial-custom-logs/data-collection-endpoint-uri.png" alt-text="Screenshot showing data collection endpoint uri.":::
++
+## Generate sample data
+The following PowerShell script both generates sample data to configure the custom table and sends sample data to the custom logs API to test the configuration.
+
+1. Update the values of `$tenantId`, `$appId`, and `$appSecret` with the values you noted for **Directory (tenant) ID**, **Application (client) ID**, and secret **Value** and then save with the file name *LogGenerator.ps1*.
+
+ ``` PowerShell
+ param ([Parameter(Mandatory=$true)] $Log, $Type="file", $Output, $DcrImmutableId, $DceURI, $Table)
+ ################
+ ##### Usage
+ ################
+ # LogGenerator.ps1
+ # -Log <String> - log file to be forwarded
+ # [-Type "file|API"] - whether the script should generate sample JSON file or send data via
+ # API call. Data will be written to a file by default
+ # [-Output <String>] - path to resulting JSON sample
+ # [-DcrImmutableId <string>] - DCR immutable ID
+ # [-DceURI] - Data collection endpoint URI
+ # [-Table] - The name of the custom log table, including "_CL" suffix
++
+ ##### >>>> PUT YOUR VALUES HERE <<<<<
+ # information needed to authenticate to AAD and obtain a bearer token
+ $tenantId = "<put tenant ID here>"; #the tenant ID in which the Data Collection Endpoint resides
+ $appId = "<put application ID here>"; #the app ID created and granted permissions
+ $appSecret = "<put secret value here>"; #the secret created for the above app - never store your secrets in the source code
+ ##### >>>> END <<<<<
++
+ $file_data = Get-Content $Log
+ if ("file" -eq $Type) {
+ ############
+ ## Convert plain log to JSON format and output to .json file
+ ############
+ # If not provided, get output file name
+ if ($null -eq $Output) {
+ $Output = Read-Host "Enter output file name"
+ };
+
+ # Form file payload
+ $payload = @();
+ $records_to_generate = [math]::min($file_data.count, 500)
+ for ($i=0; $i -lt $records_to_generate; $i++) {
+ $log_entry = @{
+ # Define the structure of log entry, as it will be sent
+ Time = Get-Date ([datetime]::UtcNow) -Format O
+ Application = "LogGenerator"
+ RawData = $file_data[$i]
+ }
+ $payload += $log_entry
+ }
+ # Write resulting payload to file
+ New-Item -Path $Output -ItemType "file" -Value ($payload | ConvertTo-Json) -Force
+
+ } else {
+ ############
+ ## Send the content to the data collection endpoint
+ ############
+ if ($null -eq $DcrImmutableId) {
+ $DcrImmutableId = Read-Host "Enter DCR Immutable ID"
+ };
+
+ if ($null -eq $DceURI) {
+ $DceURI = Read-Host "Enter data collection endpoint URI"
+ }
+
+ if ($null -eq $Table) {
+ $Table = Read-Host "Enter the name of custom log table"
+ }
+
+ ## Obtain a bearer token used to authenticate against the data collection endpoint
+ $scope = [System.Web.HttpUtility]::UrlEncode("https://monitor.azure.com//.default")
+ $body = "client_id=$appId&scope=$scope&client_secret=$appSecret&grant_type=client_credentials";
+ $headers = @{"Content-Type" = "application/x-www-form-urlencoded" };
+ $uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"
+ $bearerToken = (Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers).access_token
+ ## If the above line throws an 'Unable to find type [System.Web.HttpUtility].' error, execute the line below separately from the rest of the code
+ # Add-Type -AssemblyName System.Web
+
+ ## Generate and send some data
+ foreach ($line in $file_data) {
+ # We are going to send log entries one by one with a small delay
+ $log_entry = @{
+ # Define the structure of log entry, as it will be sent
+ Time = Get-Date ([datetime]::UtcNow) -Format O
+ Application = "LogGenerator"
+ RawData = $line
+ }
+ # Sending the data to Log Analytics via the DCR!
+ $body = $log_entry | ConvertTo-Json -AsArray;
+ $headers = @{"Authorization" = "Bearer $bearerToken"; "Content-Type" = "application/json" };
+ $uri = "$DceURI/dataCollectionRules/$DcrImmutableId/streams/Custom-$Table"+"?api-version=2021-11-01-preview";
+ $uploadResponse = Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers;
+
+ # Let's see how the response looks like
+ Write-Output $uploadResponse
+ Write-Output ""
+
+ # Pausing for 1 second before processing the next entry
+ Start-Sleep -Seconds 1
+ }
+ }
+ ```
+
+2. Copy the sample log data from [sample data](#sample-data) or copy your own Apache log data into a file called *sample_access.log*. Run the script using the following command to read this data and create a JSON file called *data_sample.json* that you can send to the custom logs API.
+
+ ```PowerShell
+ .\LogGenerator.ps1 -Log "sample_access.log" -Type "file" -Output "data_sample.json"
+ ```
+
+## Add custom log table
+Before you can send data to the workspace, you need to create the custom table that the data will be sent to.
+
+1. Go to the **Log Analytics workspaces** menu in the Azure portal and select **Tables (preview)**. The tables in the workspace will be displayed. Select **Create** and then **New custom log (DCR based)**.
+
+ :::image type="content" source="media/tutorial-custom-logs/new-custom-log.png" lightbox="media/tutorial-custom-logs/new-custom-log.png" alt-text="Screenshot showing new DCR-based custom log.":::
+
+2. Specify a name for the table. You don't need to add the *_CL* suffix required for a custom table since this will be automatically added to the name you specify.
+
+3. Click **Create a new data collection rule** to create the DCR that will be used to send data to this table. If you have an existing data collection rule, you can choose to use it instead. Specify the **Subscription**, **Resource group**, and **Name** for the data collection rule that will contain the custom log configuration.
+
+ :::image type="content" source="media/tutorial-custom-logs/new-data-collection-rule.png" lightbox="media/tutorial-custom-logs/new-data-collection-rule.png" alt-text="Screenshot showing new data collection rule.":::
+
+4. Select the data collection endpoint that you created and click **Next**.
+
+ :::image type="content" source="media/tutorial-custom-logs/custom-log-table-name.png" lightbox="media/tutorial-custom-logs/custom-log-table-name.png" alt-text="Screenshot showing custom log table name.":::
++
+## Parse and filter sample data
+Instead of directly configuring the schema of the table, the portal allows you to upload sample data so that Azure Monitor can determine the schema. The sample is expected to be a JSON file containing one or multiple log records structured in the same way they will be sent in the body of the HTTP request of the custom logs API call.
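+
+For reference, the *data_sample.json* file generated earlier contains records shaped like the following. The values shown here are only illustrative; your file will contain lines from your own log file in the `RawData` field.
+
+```json
+[
+  {
+    "Time": "2022-02-20T10:00:00.0000000Z",
+    "Application": "LogGenerator",
+    "RawData": "0.0.139.0 - - [20/Feb/2022:10:00:00 +0000] \"GET /index.html HTTP/1.0\" 200 1024"
+  }
+]
+```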
+
+1. Click **Browse for files** and locate *data_sample.json* that you previously created.
+
+ :::image type="content" source="media/tutorial-custom-logs/custom-log-browse-files.png" lightbox="media/tutorial-custom-logs/custom-log-browse-files.png" alt-text="Screenshot showing custom log browse for files.":::
+
+2. Data from the sample file is displayed with a warning that a `TimeGenerated` column is not in the data. All log tables within Azure Monitor Logs are required to have a `TimeGenerated` column populated with the timestamp of the logged event. In this sample, the timestamp of the event is stored in a field called `Time`, so you'll add a transformation that renames this column in the output.
+
+3. Click **Transformation editor** to add this column. The transformation editor lets you create a transformation for the incoming data stream. This is a KQL query that is run against each incoming record. The results of the query will be stored in the destination table. See [Data collection rule transformations in Azure Monitor](../essentials/data-collection-rule-transformations.md) for details on transformation queries.
+
+ :::image type="content" source="media/tutorial-custom-logs/custom-log-data-preview.png" lightbox="media/tutorial-custom-logs/custom-log-data-preview.png" alt-text="Screenshot showing custom log data preview.":::
+
+4. Add the following query to the transformation editor to add the `TimeGenerated` column to the output.
+
+ ```kusto
+ source
+ | extend TimeGenerated = todatetime(Time)
+ ```
+
+5. Click **Run** to view the results. You can see that the `TimeGenerated` column is now added alongside the other columns. Most of the interesting data is still contained in the `RawData` column, though.
+
+ :::image type="content" source="media/tutorial-custom-logs/custom-log-query-01.png" lightbox="media/tutorial-custom-logs/custom-log-query-01.png" alt-text="Screenshot showing initial custom log data query.":::
+
+6. Modify the query to the following, which extracts the client IP address, HTTP method, the address of the page being accessed, and the response code from each log entry.
+
+ ```kusto
+ source
+ | extend TimeGenerated = todatetime(Time)
+ | parse RawData.value with
+ ClientIP:string
+ ' ' *
+ ' ' *
+ ' [' * '] "' RequestType:string
+ " " Resource:string
+ " " *
+ '" ' ResponseCode:int
+ " " *
+ ```
+
+7. Click **Run** to view the results. This extracts the contents of `RawData` into the separate columns `ClientIP`, `RequestType`, `Resource`, and `ResponseCode`.
+
+ :::image type="content" source="media/tutorial-custom-logs/custom-log-query-02.png" lightbox="media/tutorial-custom-logs/custom-log-query-02.png" alt-text="Screenshot showing custom log data query with parse command.":::
+
+8. The query can be optimized further by removing the `RawData` and `Time` columns since they aren't needed anymore. You can also filter out any records with a `ResponseCode` of 200 since you're only interested in collecting data for requests that weren't successful. This reduces the volume of data being ingested, which reduces its overall cost.
++
+ ```kusto
+ source
+ | extend TimeGenerated = todatetime(Time)
+ | parse RawData.value with
+ ClientIP:string
+ ' ' *
+ ' ' *
+ ' [' * '] "' RequestType:string
+ " " Resource:string
+ " " *
+ '" ' ResponseCode:int
+ " " *
+ | where ResponseCode != 200
+ | project-away Time, RawData
+ ```
+
+9. Click **Run** to view the results.
+
+ :::image type="content" source="media/tutorial-custom-logs/custom-log-query-03.png" lightbox="media/tutorial-custom-logs/custom-log-query-03.png" alt-text="Screenshot showing custom log data query with filter.":::
+
+10. Click **Apply** to save the transformation and view the schema of the table that's about to be created. Click **Next** to proceed.
+
+ :::image type="content" source="media/tutorial-custom-logs/custom-log-final-schema.png" lightbox="media/tutorial-custom-logs/custom-log-final-schema.png" alt-text="Screenshot showing custom log final schema.":::
+
+11. Verify the final details and click **Create** to save the custom log.
+
+ :::image type="content" source="media/tutorial-custom-logs/custom-log-create.png" lightbox="media/tutorial-custom-logs/custom-log-create.png" alt-text="Screenshot showing custom log create.":::
+
+## Collect information from DCR
+With the data collection rule created, you need to collect its immutable ID, which is needed in the API call.
+
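+The steps below use the portal. As an alternative sketch, the immutable ID can also be read from Cloud Shell with `Invoke-AzRestMethod` (replace the placeholder path segments with your own subscription, resource group, and DCR name):
+
+```PowerShell
+# Sketch only: read the immutableId property of the DCR through the REST API.
+$response = Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.insights/dataCollectionRules/{DCR}?api-version=2021-09-01-preview" -Method GET
+($response.Content | ConvertFrom-Json).properties.immutableId
+```
+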
+1. From the **Monitor** menu in the Azure portal, select **Data collection rules** and select the DCR you just created. From **Overview** for the data collection rule, select the **JSON View**.
+
+ :::image type="content" source="media/tutorial-custom-logs/data-collection-rule-json-view.png" lightbox="media/tutorial-custom-logs/data-collection-rule-json-view.png" alt-text="Screenshot showing data collection rule JSON view.":::
+
+2. Copy the **immutableId** value.
+
+ :::image type="content" source="media/tutorial-custom-logs/data-collection-rule-immutable-id.png" lightbox="media/tutorial-custom-logs/data-collection-rule-immutable-id.png" alt-text="Screenshot showing collecting immutable ID from JSON view.":::
+++
+## Assign permissions to DCR
+The final step is to give the application permission to use the DCR. This will allow any application using the correct application ID and application key to send data to the new DCE and DCR.
+
+1. Select **Access Control (IAM)** for the DCR and then **Add role assignment**.
+
+ :::image type="content" source="media/tutorial-custom-logs/add-role-assignment.png" lightbox="media/tutorial-custom-logs/custom-log-create.png" alt-text="Screenshot showing adding custom role assignment to DCR.":::
+
+2. Select **Monitoring Metrics Publisher** and click **Next**. You could instead create a custom action with the `Microsoft.Insights/Telemetry/Write` data action.
+
+ :::image type="content" source="media/tutorial-custom-logs/add-role-assignment-select-role.png" lightbox="media/tutorial-custom-logs/add-role-assignment-select-role.png" alt-text="Screenshot showing selecting role for DCR role assignment.":::
+
+3. Select **User, group, or service principal** for **Assign access to** and click **Select members**. Select the application that you created and click **Select**.
+
+ :::image type="content" source="media/tutorial-custom-logs/add-role-assignment-select-member.png" lightbox="media/tutorial-custom-logs/add-role-assignment-select-member.png" alt-text="Screenshot showing selecting members for DCR role assignment.":::
++
+4. Click **Review + assign** and verify the details before saving your role assignment.
+
+ :::image type="content" source="media/tutorial-custom-logs/add-role-assignment-save.png" lightbox="media/tutorial-custom-logs/add-role-assignment-save.png" alt-text="Screenshot showing saving DCR role assignment.":::
+++
+## Send sample data
+Allow at least 30 minutes for the configuration to take effect. You may also experience increased latency for the first few entries, but this should normalize.
+
+1. Run the following command, providing the values that you collected for your data collection rule and data collection endpoint. The script will start ingesting data by placing calls to the API at a pace of approximately one record per second.
+
+   ```PowerShell
+   .\LogGenerator.ps1 -Log "sample_access.log" -Type "API" -Table "ApacheAccess_CL" -DcrImmutableId <immutable ID> -DceURI <data collection endpoint URI>
+   ```
+
+2. From Log Analytics, query your newly created table to verify that data arrived and that it was transformed properly.
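+
+   For example, assuming the table name used in the command above, the following shows the most recent transformed records:
+
+   ```kusto
+   ApacheAccess_CL
+   | project TimeGenerated, ClientIP, RequestType, Resource, ResponseCode
+   | take 10
+   ```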
+
+## Troubleshooting
+This section describes different error conditions you may receive and how to correct them.
+
+### Script returns error code 403
+Ensure that you have the correct permissions for your application to the DCR. You may also need to wait up to 30 minutes for permissions to propagate.
+
+### Script returns error code 413 or warning of `TimeoutExpired` with the message `ReadyBody_ClientConnectionAbort` in the response
+The message is too large. The maximum message size is currently 1 MB per call.
+
+### Script returns error code 429
+API limits have been exceeded. The limits are currently set to 500 MB of data per minute for both compressed and uncompressed data, as well as 300,000 requests per minute. Retry after the duration listed in the `Retry-After` header in the response.
+
+### Script returns error code 503
+Ensure that you have the correct permissions for your application to the DCR. You may also need to wait up to 30 minutes for permissions to propagate.
+
+### Script returns error `Unable to find type [System.Web.HttpUtility]`
+Run the last line in section 1 of the script (`Add-Type -AssemblyName System.Web`) by itself and then run the script again. Executing it uncommented as part of the script will not resolve the issue. The command must be executed separately.
+
+### You don't receive an error, but data doesn't appear in the workspace
+The data may take some time to be ingested, especially if this is the first time data is being sent to a particular table. It shouldn't take longer than 15 minutes.
+
+### IntelliSense in Log Analytics not recognizing new table
+The cache that drives IntelliSense may take up to 24 hours to update.
+
+## Sample data
+Following is sample data that you can use for the tutorial. Alternatively, you can use your own data if you have your own Apache access logs.
+
+```
+0.0.139.0
+0.0.153.185
+0.0.153.185
+0.0.66.230
+0.0.148.92
+0.0.35.224
+0.0.162.225
+0.0.162.225
+0.0.148.108
+0.0.148.1
+0.0.203.24
+0.0.4.214
+0.0.10.125
+0.0.10.125
+0.0.10.125
+0.0.10.125
+0.0.10.117
+0.0.10.114
+0.0.10.114
+0.0.10.125
+0.0.10.117
+0.0.10.117
+0.0.10.114
+0.0.10.114
+0.0.10.125
+0.0.10.114
+0.0.10.114
+0.0.10.125
+0.0.10.117
+0.0.10.117
+0.0.10.114
+0.0.10.125
+0.0.10.117
+0.0.10.114
+0.0.10.117
+0.0.10.125
+0.0.10.125
+0.0.10.125
+0.0.10.125
+0.0.10.125
+0.0.10.125
+0.0.10.114
+0.0.10.117
+0.0.167.138
+0.0.149.55
+0.0.229.86
+0.0.117.249
+0.0.117.249
+0.0.117.249
+0.0.64.41
+0.0.208.79
+0.0.208.79
+0.0.208.79
+0.0.208.79
+0.0.196.129
+0.0.196.129
+0.0.66.158
+0.0.161.12
+0.0.161.12
+0.0.51.36
+0.0.51.36
+0.0.145.131
+0.0.145.131
+0.0.0.179
+0.0.0.179
+0.0.145.131
+0.0.145.131
+0.0.95.52
+0.0.95.52
+0.0.51.36
+0.0.51.36
+0.0.227.31
+0.0.227.31
+0.0.51.36
+0.0.51.36
+0.0.51.36
+0.0.51.36
+0.0.4.22
+0.0.4.22
+0.0.143.24
+0.0.143.24
+0.0.0.98
+0.0.0.98
+0.0.51.62
+0.0.51.62
+0.0.51.36
+0.0.51.36
+0.0.0.98
+0.0.0.98
+0.0.58.254
+0.0.58.254
+0.0.51.62
+0.0.51.62
+0.0.227.31
+0.0.227.31
+0.0.0.179
+0.0.0.179
+0.0.58.254
+0.0.58.254
+0.0.95.52
+0.0.95.52
+0.0.0.98
+0.0.0.98
+0.0.58.90
+0.0.58.90
+0.0.51.36
+0.0.51.36
+0.0.207.154
+0.0.207.154
+0.0.95.52
+0.0.95.52
+0.0.51.62
+0.0.51.62
+0.0.145.131
+0.0.145.131
+0.0.58.90
+0.0.58.90
+0.0.227.55
+0.0.227.55
+0.0.95.52
+0.0.95.52
+0.0.161.12
+0.0.161.12
+0.0.227.55
+0.0.227.55
+0.0.143.30
+0.0.143.30
+0.0.227.31
+0.0.227.31
+0.0.161.6
+0.0.161.6
+0.0.161.6
+0.0.227.31
+0.0.227.31
+0.0.51.62
+0.0.51.62
+0.0.227.31
+0.0.227.31
+0.0.95.20
+0.0.95.20
+0.0.207.154
+0.0.207.154
+0.0.0.98
+0.0.0.98
+0.0.51.36
+0.0.51.36
+0.0.227.55
+0.0.227.55
+0.0.207.154
+0.0.207.154
+0.0.51.36
+0.0.51.36
+0.0.51.36
+0.0.51.36
+0.0.207.221
+0.0.207.221
+0.0.0.179
+0.0.0.179
+0.0.161.12
+0.0.161.12
+0.0.58.90
+0.0.58.90
+0.0.145.106
+0.0.145.106
+0.0.145.106
+0.0.145.106
+0.0.0.179
+0.0.0.179
+0.0.149.8
+0.0.207.154
+0.0.207.154
+0.0.227.31
+0.0.227.31
+0.0.51.62
+0.0.51.62
+0.0.227.55
+0.0.227.55
+0.0.143.30
+0.0.143.30
+0.0.95.52
+0.0.95.52
+0.0.145.131
+0.0.145.131
+0.0.51.62
+0.0.51.62
+0.0.0.98
+0.0.0.98
+0.0.207.221
+0.0.145.131
+0.0.207.221
+0.0.145.131
+0.0.51.62
+0.0.51.62
+0.0.51.36
+0.0.51.36
+0.0.145.131
+0.0.145.131
+0.0.58.254
+0.0.58.254
+0.0.145.106
+0.0.145.106
+0.0.207.221
+0.0.207.221
+0.0.227.31
+0.0.227.31
+0.0.145.106
+0.0.145.106
+0.0.145.131
+0.0.145.131
+0.0.0.179
+0.0.0.179
+0.0.227.31
+0.0.227.31
+0.0.227.55
+0.0.227.55
+0.0.95.52
+0.0.95.52
+0.0.0.98
+0.0.0.98
+0.0.4.35
+0.0.4.35
+0.0.4.22
+0.0.4.22
+0.0.58.90
+0.0.58.90
+0.0.145.106
+0.0.145.106
+0.0.143.24
+0.0.143.24
+0.0.227.55
+0.0.227.55
+0.0.207.154
+0.0.207.154
+0.0.143.30
+0.0.143.30
+0.0.227.31
+0.0.227.31
+0.0.0.179
+0.0.0.179
+0.0.0.98
+0.0.0.98
+0.0.207.221
+0.0.207.221
+0.0.0.179
+0.0.0.179
+0.0.0.98
+0.0.0.98
+0.0.207.221
+0.0.207.221
+0.0.207.154
+0.0.207.154
+0.0.58.254
+0.0.58.254
+0.0.51.36
+0.0.51.36
+0.0.51.36
+0.0.51.36
+0.0.207.154
+0.0.207.154
+0.0.161.6
+0.0.145.131
+0.0.145.131
+0.0.207.221
+0.0.207.221
+0.0.95.20
+0.0.95.20
+0.0.183.233
+0.0.183.233
+0.0.51.36
+0.0.51.36
+0.0.95.52
+0.0.95.52
+0.0.227.31
+0.0.227.31
+0.0.51.62
+0.0.51.62
+0.0.95.52
+0.0.95.52
+0.0.207.154
+0.0.207.154
+0.0.51.36
+0.0.51.36
+0.0.58.90
+0.0.58.90
+0.0.4.35
+0.0.4.35
+0.0.95.52
+0.0.95.52
+0.0.167.138
+0.0.51.36
+0.0.51.36
+0.0.161.6
+0.0.161.6
+0.0.58.254
+0.0.58.254
+0.0.207.154
+0.0.207.154
+0.0.58.90
+0.0.58.90
+0.0.51.62
+0.0.51.62
+0.0.58.90
+0.0.58.90
+0.0.81.164
+0.0.81.164
+0.0.207.221
+0.0.207.221
+0.0.227.55
+0.0.227.55
+0.0.227.55
+0.0.227.55
+0.0.207.221
+0.0.207.154
+0.0.207.154
+0.0.207.221
+0.0.143.30
+0.0.143.30
+0.0.0.179
+0.0.0.179
+0.0.51.62
+0.0.51.62
+0.0.4.35
+0.0.4.35
+0.0.207.221
+0.0.207.221
+0.0.51.62
+0.0.51.62
+0.0.51.62
+0.0.51.62
+0.0.95.20
+0.0.4.35
+0.0.4.35
+0.0.58.254
+0.0.58.254
+0.0.145.106
+0.0.145.106
+0.0.0.98
+0.0.0.98
+0.0.95.52
+0.0.95.52
+0.0.51.62
+0.0.51.62
+0.0.207.221
+0.0.207.221
+0.0.143.30
+0.0.143.30
+0.0.207.154
+0.0.207.154
+0.0.143.30
+0.0.95.20
+0.0.95.20
+0.0.0.98
+0.0.0.98
+0.0.145.131
+0.0.145.131
+0.0.161.12
+0.0.161.12
+0.0.95.52
+0.0.95.52
+0.0.161.12
+0.0.161.12
+0.0.0.179
+0.0.0.179
+0.0.4.35
+0.0.4.35
+0.0.164.246
+0.0.161.12
+0.0.161.12
+0.0.161.12
+0.0.161.12
+0.0.207.221
+0.0.207.221
+0.0.4.35
+0.0.4.35
+0.0.207.221
+0.0.207.221
+0.0.145.106
+0.0.145.106
+0.0.4.22
+0.0.4.22
+0.0.161.12
+0.0.161.12
+0.0.58.254
+0.0.58.254
+0.0.161.12
+0.0.161.12
+0.0.66.216
+0.0.0.179
+0.0.0.179
+0.0.145.131
+0.0.145.131
+0.0.4.35
+0.0.4.35
+0.0.58.254
+0.0.58.254
+0.0.143.24
+0.0.143.24
+0.0.143.24
+0.0.143.24
+0.0.207.221
+0.0.207.221
+0.0.58.254
+0.0.58.254
+0.0.145.131
+0.0.145.131
+0.0.51.36
+0.0.51.36
+0.0.227.31
+0.0.161.12
+0.0.227.31
+0.0.161.6
+0.0.161.6
+0.0.207.221
+0.0.207.221
+0.0.161.12
+0.0.145.106
+0.0.145.106
+0.0.161.6
+0.0.161.6
+0.0.95.20
+0.0.95.20
+0.0.4.35
+0.0.4.35
+0.0.95.52
+0.0.95.52
+0.0.128.50
+0.0.227.31
+0.0.227.31
+0.0.227.31
+0.0.227.31
+0.0.227.55
+0.0.227.55
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+0.0.29.211
+```
++
+## Next steps
+
+- [Complete a similar tutorial using resource manager templates.](tutorial-custom-logs-api.md)
+- [Read more about custom logs.](custom-logs-overview.md)
+- [Learn more about writing transformation queries](../essentials/data-collection-rule-transformations.md)
azure-monitor Tutorial Ingestion Time Transformations Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-ingestion-time-transformations-api.md
+
+ Title: Tutorial - Add ingestion-time transformation to Azure Monitor Logs using resource manager templates
+description: This article describes how to add a custom transformation to data flowing through Azure Monitor Logs using resource manager templates.
++ Last updated : 02/20/2022++
+# Tutorial: Add ingestion-time transformation to Azure Monitor Logs using resource manager templates (preview)
+[Ingestion-time transformations](ingestion-time-transformations.md) allow you to manipulate incoming data before it's stored in a Log Analytics workspace. You can add data filtering, parsing and extraction, and control the structure of the data that gets ingested. This tutorial walks you through configuration of a sample ingestion time transformation using resource manager templates.
+
+> [!NOTE]
+> This tutorial uses resource manager templates and REST API to configure an ingestion-time transformation. See [Tutorial: Add ingestion-time transformation to Azure Monitor Logs using the Azure portal (preview)](tutorial-ingestion-time-transformations.md) for the same tutorial using the Azure portal.
+
+In this tutorial, you learn to:
+
+> [!div class="checklist"]
+> * Configure [ingestion-time transformation](ingestion-time-transformations.md) for a table in Azure Monitor Logs
+> * Write a log query for an ingestion-time transform
++
+> [!NOTE]
+> This tutorial uses PowerShell from Azure Cloud Shell to make REST API calls using the Azure Monitor **Tables** API and the Azure portal to install resource manager templates. You can use any other method to make these calls.
+
+## Prerequisites
+To complete this tutorial, you need the following:
+
+- A Log Analytics workspace where you have at least [contributor rights](manage-access.md#manage-access-using-azure-permissions).
+- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
++
+## Overview of tutorial
+In this tutorial, you'll reduce the storage requirement for the `LAQueryLogs` table by filtering out certain records. You'll also remove the contents of a column while parsing the column data to store a piece of data in a custom column. The [LAQueryLogs table](query-audit.md#audit-data) is created when you enable [log query auditing](query-audit.md) in a workspace, but this is only used as a sample for the tutorial. You can use this same basic process to create a transformation for any [supported table](tables-feature-support.md) in a Log Analytics workspace.
++
+## Enable query audit logs
+You need to enable [query auditing](query-audit.md) for your workspace to create the `LAQueryLogs` table that you'll be working with. This is not required for all ingestion time transformations. It's just to generate the sample data that this sample transformation will use.
+
+1. From the **Log Analytics workspaces** menu in the Azure portal, select **Diagnostic settings** and then **Add diagnostic setting**.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations/diagnostic-settings.png" lightbox="media/tutorial-ingestion-time-transformations/diagnostic-settings.png" alt-text="Screenshot of diagnostic settings.":::
+
+2. Provide a name for the diagnostic setting and select the workspace so that the auditing data is stored in the same workspace. Select the **Audit** category and then click **Save** to save the diagnostic setting and close the diagnostic setting page.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations/new-diagnostic-setting.png" lightbox="media/tutorial-ingestion-time-transformations/new-diagnostic-setting.png" alt-text="Screenshot of new diagnostic setting.":::
+
+3. Select **Logs** and then run some queries to populate `LAQueryLogs` with some data. These queries don't need to actually return any data.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations/sample-queries.png" lightbox="media/tutorial-ingestion-time-transformations/sample-queries.png" alt-text="Screenshot of sample log queries.":::
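+
+   Any simple query is sufficient here because it's the act of running a query that gets audited, not its results. For example, the following returns a single row without reading any table:
+
+   ```kusto
+   print "This is a sample query to populate LAQueryLogs"
+   ```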
+
+## Update table schema
+Before you can create the transformation, the following two changes must be made to the table:
+
+- The table must be enabled for ingestion-time transformation. This is required for any table that will have a transformation, even if the transformation doesn't modify the table's schema.
+- Any additional columns populated by the transformation must be added to the table.
+
+Use the **Tables - Update** API to configure the table with the PowerShell code below. Calling the API enables the table for ingestion-time transformations, whether or not custom columns are defined. In this sample, the payload includes a custom column called *Resources_CF* that will be populated by the transformation query.
+
+> [!IMPORTANT]
+> Any custom columns added to a built-in table must end in *_CF*. Columns added to a custom table (a table with a name that ends in *_CL*) don't need to have this suffix.
+
+1. Click the **Cloud Shell** button in the Azure portal and ensure the environment is set to **PowerShell**.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations-api/open-cloud-shell.png" lightbox="media/tutorial-ingestion-time-transformations-api/open-cloud-shell.png" alt-text="Screenshot of opening cloud shell.":::
+
+2. Copy the following PowerShell code and replace the **Path** parameter with the details for your workspace.
+
+ ```PowerShell
+ $tableParams = @'
+ {
+ "properties": {
+ "schema": {
+ "name": "LAQueryLogs",
+ "columns": [
+ {
+ "name": "Resources_CF",
+ "description": "The list of resources, this query ran against",
+ "type": "string",
+ "isDefaultDisplay": true,
+ "isHidden": false
+ }
+ ]
+ }
+ }
+ }
+ '@
+
+ Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}/tables/LAQueryLogs?api-version=2021-12-01-preview" -Method PUT -payload $tableParams
+ ```
+
+3. Paste the code into the cloud shell prompt to run it.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations-api/cloud-shell-script.png" lightbox="media/tutorial-ingestion-time-transformations-api/cloud-shell-script.png" alt-text="Screenshot of script in cloud shell.":::
+
+4. You can verify that the column was added by going to the **Log Analytics workspace** menu in the Azure portal. Select **Logs** to open Log Analytics and then expand the `LAQueryLogs` table to view its columns.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations/verify-table.png" lightbox="media/tutorial-ingestion-time-transformations/verify-table.png" alt-text="Screenshot of Log Analytics with new column.":::
+
+## Define transformation query
+Use Log Analytics to test the transformation query before adding it to a data collection rule.
+
+1. Open your workspace in the **Log Analytics workspaces** menu in the Azure portal and select **Logs** to open Log Analytics.
+
+2. Run the following query to view the contents of the `LAQueryLogs` table. Notice the contents of the `RequestContext` column. The transformation will retrieve the workspace name from this column and remove the rest of the data in it.
+
+ ```kusto
+ LAQueryLogs
+ | take 10
+ ```
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations/initial-query.png" lightbox="media/tutorial-ingestion-time-transformations/initial-query.png" alt-text="Screenshot of initial query in Log Analytics.":::
+
+3. Modify the query to the following:
+
+ ``` kusto
+ LAQueryLogs
+ | where QueryText !contains 'LAQueryLogs'
+ | extend Context = parse_json(RequestContext)
+ | extend Workspace_CF = tostring(Context['workspaces'][0])
+ | project-away RequestContext, Context
+ ```
+ This makes the following changes:
+
+ - Drop rows related to querying the `LAQueryLogs` table itself to save space since these log entries aren't useful.
+ - Add a column for the name of the workspace that was queried.
+ - Remove data from the `RequestContext` column to save space.
++
+ :::image type="content" source="media/tutorial-ingestion-time-transformations/modified-query.png" lightbox="media/tutorial-ingestion-time-transformations/modified-query.png" alt-text="Screenshot of modified query in Log Analytics.":::
++
+4. Make the following changes to the query to use it in the transformation:
+
+ - Instead of specifying a table name (`LAQueryLogs` in this case) as the source of data for this query, use the `source` keyword. This is a virtual table that always represents the incoming data in a transformation query.
+   - Remove any operators that aren't supported by transformation queries. See [Supported tables for ingestion-time transformations](tables-feature-support.md) for a detailed list of operators that are supported.
+ - Flatten the query to a single line so that it can fit into the DCR JSON.
+
+ Following is the query that you will use in the transformation after these modifications:
+
+ ```kusto
+ source | where QueryText !contains 'LAQueryLogs' | extend Context = parse_json(RequestContext) | extend Resources_CF = tostring(Context['workspaces']) |extend RequestContext = ''
+ ```
+
+## Create data collection rule (DCR)
+Since this is the first transformation in the workspace, you need to create a [workspace transformation DCR](../essentials/data-collection-rule-overview.md#types-of-data-collection-rules). If you create transformations for other tables in the same workspace, they will be stored in this same DCR.
+
+1. In the Azure portal's search box, type in *template* and then select **Deploy a custom template**.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations-api/deploy-custom-template.png" lightbox="media/tutorial-ingestion-time-transformations-api/deploy-custom-template.png" alt-text="Screenshot to deploy custom template.":::
+
+2. Click **Build your own template in the editor**.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations-api/build-custom-template.png" lightbox="media/tutorial-ingestion-time-transformations-api/build-custom-template.png" alt-text="Screenshot to build template in the editor.":::
+
+3. Paste the resource manager template below into the editor and then click **Save**. This template defines the DCR and contains the transformation query. You don't need to modify this template since it will collect values for its parameters.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations-api/edit-template.png" lightbox="media/tutorial-ingestion-time-transformations-api/edit-template.png" alt-text="Screenshot to edit resource manager template.":::
++
+ ```json
+    {
+        "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+        "contentVersion": "1.0.0.0",
+        "parameters": {
+            "dataCollectionRuleName": {
+                "type": "string",
+                "metadata": {
+                    "description": "Specifies the name of the Data Collection Rule to create."
+                }
+            },
+            "location": {
+                "type": "string",
+                "defaultValue": "westus2",
+                "allowedValues": [
+                    "westus2",
+                    "eastus2",
+                    "eastus2euap"
+                ],
+                "metadata": {
+                    "description": "Specifies the location in which to create the Data Collection Rule."
+                }
+            },
+            "workspaceResourceId": {
+                "type": "string",
+                "metadata": {
+                    "description": "Specifies the Azure resource ID of the Log Analytics workspace to use."
+                }
+            }
+        },
+        "resources": [
+            {
+                "type": "Microsoft.Insights/dataCollectionRules",
+                "name": "[parameters('dataCollectionRuleName')]",
+                "location": "[parameters('location')]",
+                "apiVersion": "2021-09-01-preview",
+                "kind": "WorkspaceTransforms",
+                "properties": {
+                    "destinations": {
+                        "logAnalytics": [
+                            {
+                                "workspaceResourceId": "[parameters('workspaceResourceId')]",
+                                "name": "clv2ws1"
+                            }
+                        ]
+                    },
+                    "dataFlows": [
+                        {
+                            "streams": [
+                                "Microsoft-Table-LAQueryLogs"
+                            ],
+                            "destinations": [
+                                "clv2ws1"
+                            ],
+                            "transformKql": "source |where QueryText !contains 'LAQueryLogs' | extend Context = parse_json(RequestContext) | extend Resources_CF = tostring(Context['workspaces']) |extend RequestContext = ''"
+                        }
+                    ]
+                }
+            }
+        ],
+        "outputs": {
+            "dataCollectionRuleId": {
+                "type": "string",
+                "value": "[resourceId('Microsoft.Insights/dataCollectionRules', parameters('dataCollectionRuleName'))]"
+            }
+        }
+    }
+ ```
+
+4. On the **Custom deployment** screen, specify a **Subscription** and **Resource group** to store the data collection rule and then provide values defined in the template. This includes a **Name** for the data collection rule and the **Workspace Resource ID** that you collected in a previous step. The **Location** should be the same location as the workspace. The **Region** will already be populated and is used for the location of the data collection rule.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations-api/custom-deployment-values.png" lightbox="media/tutorial-ingestion-time-transformations-api/custom-deployment-values.png" alt-text="Screenshot to edit custom deployment values.":::
+
+5. Click **Review + create**, and then click **Create** after you review the details.
+
+6. When the deployment is complete, expand the **Deployment details** box and click on your data collection rule to view its details. Click **JSON View**.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations-api/data-collection-rule-details.png" lightbox="media/tutorial-ingestion-time-transformations-api/data-collection-rule-details.png" alt-text="Screenshot for data collection rule details.":::
+
+7. Copy the **Resource ID** for the data collection rule. You'll use this in the next step.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations-api/data-collection-rule-json-view.png" lightbox="media/tutorial-ingestion-time-transformations-api/data-collection-rule-json-view.png" alt-text="Screenshot for data collection rule JSON view.":::
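+
+ If you'd rather retrieve this value from a script than from the portal's JSON view, a minimal PowerShell sketch using `Get-AzResource` follows; the resource group and rule names are placeholders.
+
+ ```PowerShell
+ # Placeholders: substitute your own resource group and DCR names.
+ $dcr = Get-AzResource -ResourceGroupName "my-resource-group" `
+     -ResourceType "Microsoft.Insights/dataCollectionRules" `
+     -Name "my-dcr-name"
+
+ # The ResourceId property is the value you'll need when linking the workspace.
+ $dcr.ResourceId
+ ```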
+
+## Link workspace to DCR
+The final step to enable the transformation is to link the DCR to the workspace.
+
+> [!IMPORTANT]
+> A workspace can only be connected to a single DCR, and the linked DCR must contain this workspace as a destination.
+
+Use the **Workspaces - Update** API to link the DCR to the workspace with the PowerShell code below.
+
+1. Click the **Cloud Shell** button to open Cloud Shell again. Copy the following PowerShell code and replace the parameters with values for your workspace and DCR.
+
+ ```PowerShell
+ $defaultDcrParams = @'
+ {
+ "properties": {
+ "defaultDataCollectionRuleResourceId": "/subscriptions/{subscription}/resourceGroups/{resourcegroup}/providers/Microsoft.Insights/dataCollectionRules/{DCR}"
+ }
+ }
+ '@
+
+ Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}?api-version=2021-12-01-preview" -Method PATCH -payload $defaultDcrParams
+ ```
+
+2. Paste the code into the Cloud Shell prompt to run it.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations-api/cloud-shell-script-link-workspace.png" lightbox="media/tutorial-ingestion-time-transformations-api/cloud-shell-script-link-workspace.png" alt-text="Screenshot of script to link workspace to DCR.":::
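+
+ To confirm that the link took effect, you can read the workspace back and check its `defaultDataCollectionRuleResourceId` property. The following sketch uses the same `Invoke-AzRestMethod` pattern and placeholder values as the previous step.
+
+ ```PowerShell
+ # The defaultDataCollectionRuleResourceId property should now contain the resource ID of the linked DCR.
+ $response = Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{workspace}?api-version=2021-12-01-preview" -Method GET
+ ($response.Content | ConvertFrom-Json).properties.defaultDataCollectionRuleResourceId
+ ```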
+
+## Test transformation
+Allow about 30 minutes for the transformation to take effect, and then test it by running a query against the table. Only data sent to the table after the transformation was applied will be affected.
+
+For this tutorial, run some sample queries to send data to the `LAQueryLogs` table. Include some queries against `LAQueryLogs` so you can verify that the transformation filters these records. Notice that the output has the new `Resources_CF` column, and there are no records for queries against `LAQueryLogs`.
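+
+If you want to generate and check this data from a script instead of the portal, the following PowerShell sketch uses `Invoke-AzOperationalInsightsQuery` from the Az.OperationalInsights module. The workspace ID and the sample queries are only placeholders; any queries you run against the workspace should produce `LAQueryLogs` entries.
+
+```PowerShell
+# $workspaceCustomerId is the workspace's Workspace ID (a GUID), not its Azure resource ID.
+$workspaceCustomerId = "00000000-0000-0000-0000-000000000000"
+
+# Run a couple of sample queries so that new entries are written to LAQueryLogs.
+Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceCustomerId -Query "Usage | take 5" | Out-Null
+Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceCustomerId -Query "search * | take 5" | Out-Null
+
+# After allowing time for ingestion, check that the custom column is populated and that
+# queries against LAQueryLogs itself were filtered out.
+$results = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceCustomerId `
+    -Query "LAQueryLogs | project TimeGenerated, QueryText, Resources_CF | take 20"
+$results.Results
+```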
++
+## Troubleshooting
+This section describes different error conditions you may receive and how to correct them.
+
+### IntelliSense in Log Analytics not recognizing new columns in the table
+The cache that drives IntelliSense may take up to 24 hours to update.
+
+### Transformation on a dynamic column isn't working
+There is currently a known issue affecting dynamic columns. A temporary workaround is to explicitly parse dynamic column data using `parse_json()` prior to performing any operations against them.
+
+## Next steps
+
+- [Read more about ingestion-time transformations](ingestion-time-transformations.md)
+- [See which tables support ingestion-time transformations](tables-feature-support.md)
+- [Learn more about writing transformation queries](../essentials/data-collection-rule-transformations.md)
azure-monitor Tutorial Ingestion Time Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-ingestion-time-transformations.md
+
+ Title: Tutorial - Add ingestion-time transformation to Azure Monitor Logs using Azure portal
+description: This article describes how to add a custom transformation to data flowing through Azure Monitor Logs using the Azure portal.
++ Last updated : 02/20/2022++
+# Add ingestion-time transformation to Azure Monitor Logs using the Azure portal (preview)
+[Ingestion-time transformations](ingestion-time-transformations.md) allow you to manipulate incoming data before it's stored in a Log Analytics workspace. You can filter data, parse and extract values, and control the structure of the data that gets ingested. This tutorial walks you through configuring a sample ingestion-time transformation using the Azure portal.
+
+> [!NOTE]
+> This tutorial uses the Azure portal to configure an ingestion-time transformation. See [Tutorial: Add ingestion-time transformation to Azure Monitor Logs using resource manager templates (preview)](tutorial-ingestion-time-transformations-api.md) for the same tutorial using resource manager templates and REST API.
+
+In this tutorial, you learn to:
+
+> [!div class="checklist"]
+> * Configure [ingestion-time transformation](ingestion-time-transformations.md) for a table in Azure Monitor Logs
+> * Write a log query for an ingestion-time transform
++
+## Prerequisites
+To complete this tutorial, you need the following:
+
+- A Log Analytics workspace where you have at least [contributor rights](manage-access.md#manage-access-using-azure-permissions).
+- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
++
+## Overview of tutorial
+In this tutorial, you'll reduce the storage requirement for the `LAQueryLogs` table by filtering out certain records. You'll also parse data from an existing column into a custom column and remove the original column's contents to further reduce storage. The [LAQueryLogs table](query-audit.md#audit-data) is created when you enable [log query auditing](query-audit.md) in a workspace. You can use this same basic process to create a transformation for any [supported table](tables-feature-support.md) in a Log Analytics workspace.
+
+This tutorial uses the Azure portal, which provides a wizard to walk you through the process of creating an ingestion-time transformation. The following actions are performed for you when you complete this wizard:
+
+- Updates the table schema with any additional columns from the query.
+- Creates a `WorkspaceTransforms` data collection rule (DCR) and links it to the workspace if a default DCR isn't already linked to the workspace.
+- Creates an ingestion-time transformation and adds it to the DCR.
++
+## Enable query audit logs
+You need to enable [query auditing](query-audit.md) for your workspace to create the `LAQueryLogs` table that you'll be working with. Query auditing isn't required for ingestion-time transformations in general; it's only used here to generate the sample data that this tutorial works with.
+
+1. From the **Log Analytics workspaces** menu in the Azure portal, select **Diagnostic settings** and then **Add diagnostic setting**.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations/diagnostic-settings.png" lightbox="media/tutorial-ingestion-time-transformations/diagnostic-settings.png" alt-text="Screenshot of diagnostic settings.":::
+
+2. Provide a name for the diagnostic setting and select the workspace so that the auditing data is stored in the same workspace. Select the **Audit** category and then click **Save** to save the diagnostic setting and close the diagnostic setting page.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations/new-diagnostic-setting.png" lightbox="media/tutorial-ingestion-time-transformations/new-diagnostic-setting.png" alt-text="Screenshot of new diagnostic setting.":::
+
+3. Select **Logs** and then run some queries to populate `LAQueryLogs` with some data. These queries don't need to return data to be added to the audit log.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations/sample-queries.png" lightbox="media/tutorial-ingestion-time-transformations/sample-queries.png" alt-text="Screenshot of sample log queries.":::
+
+## Add transformation to the table
+Now that the table's created, you can add the transformation to it.
+
+1. From the **Log Analytics workspaces** menu in the Azure portal, select **Tables (preview)**. Locate the `LAQueryLogs` table and select **Create transformation**.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations/create-transformation.png" lightbox="media/tutorial-ingestion-time-transformations/create-transformation.png" alt-text="Screenshot of creating a new transformation.":::
++
+2. Since this is the first transformation in the workspace, you need to create a [workspace transformation DCR](../essentials/data-collection-rule-overview.md#types-of-data-collection-rules). If you create transformations for other tables in the same workspace, they will be stored in this same DCR. Click **Create a new data collection rule**. The **Subscription** and **Resource group** will already be populated for the workspace. Provide a name for the DCR and click **Done**.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations/new-data-collection-rule.png" lightbox="media/tutorial-ingestion-time-transformations/new-data-collection-rule.png" alt-text="Screenshot of creating a new data collection rule.":::
+
+3. Click **Next** to view sample data from the table. As you define the transformation, the result is applied to the sample data, allowing you to evaluate the results before applying the transformation to actual data. Click **Transformation editor** to define the transformation.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations/sample-data.png" lightbox="media/tutorial-ingestion-time-transformations/sample-data.png" alt-text="Screenshot of sample data from the log table.":::
+
+4. In the transformation editor, you can see the transformation that will be applied to the data prior to its ingestion into the table. The incoming data is represented by a virtual table named `source`, which has the same set of columns as the destination table itself. The transformation initially contains a simple query returning the `source` table with no changes.
+
+5. Modify the query to the following:
+
+ ``` kusto
+ source
+ | where QueryText !contains 'LAQueryLogs'
+ | extend Context = parse_json(RequestContext)
+ | extend Workspace_CF = tostring(Context['workspaces'][0])
+ | project-away RequestContext, Context
+ ```
+
+ This makes the following changes:
+
+ - Drop rows related to querying the `LAQueryLogs` table itself to save space since these log entries aren't useful.
+ - Add a column for the name of the workspace that was queried.
+ - Remove data from the `RequestContext` column to save space.
+++
+ > [!Note]
+ > When you use the Azure portal, the output of the transformation will initiate changes to the table schema if required. Columns are added to match the transformation output if they don't already exist. Make sure that your output doesn't contain any columns that you don't want added to the table. If the output doesn't include columns that are already in the table, those columns aren't removed, but no data is added to them.
+ >
+ > Any custom columns added to a built-in table must end in *_CF*. Columns added to a custom table (a table with a name that ends in *_CL*) don't need to have this suffix.
+
+6. Copy the query into the transformation editor and click **Run** to view results from the sample data. You can verify that the new `Workspace_CF` column is included in the output.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations/transformation-editor.png" lightbox="media/tutorial-ingestion-time-transformations/transformation-editor.png" alt-text="Screenshot of transformation editor.":::
+
+7. Click **Apply** to save the transformation and then **Next** to review the configuration. Click **Create** to update the data collection rule with the new transformation.
+
+ :::image type="content" source="media/tutorial-ingestion-time-transformations/save-transformation.png" lightbox="media/tutorial-ingestion-time-transformations/save-transformation.png" alt-text="Screenshot of saving transformation.":::
+
+## Test transformation
+Allow about 30 minutes for the transformation to take effect and then test it by running a query against the table. Only data sent to the table after the transformation was applied will be affected.
+
+For this tutorial, run some sample queries to send data to the `LAQueryLogs` table. Include some queries against `LAQueryLogs` so you can verify that the transformation filters these records. Notice that the output has the new `Workspace_CF` column, and there are no records for queries against `LAQueryLogs`.
+
+## Troubleshooting
+This section describes different error conditions you may receive and how to correct them.
+
+### IntelliSense in Log Analytics not recognizing new columns in the table
+The cache that drives IntelliSense may take up to 24 hours to update.
+
+### Transformation on a dynamic column isn't working
+There is currently a known issue affecting dynamic columns. A temporary workaround is to explicitly parse dynamic column data using `parse_json()` prior to performing any operations against them.
+
+## Next steps
+
+- [Read more about ingestion-time transformations](ingestion-time-transformations.md)
+- [See which tables support ingestion-time transformations](tables-feature-support.md)
+- [Learn more about writing transformation queries](../essentials/data-collection-rule-transformations.md)
azure-monitor Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Monitor description: Sample Azure Resource Graph queries for Azure Monitor showing use of resource types and tables to access Azure Monitor related resources and properties. Previously updated : 01/20/2022 Last updated : 02/16/2022
azure-monitor Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/service-limits.md
This article lists limits in different areas of Azure Monitor.
[!INCLUDE [monitoring-limits](../../includes/azure-monitor-limits-autoscale.md)]
+## Custom logs
++ ## Data collection rules [!INCLUDE [data-collection-rules](../../includes/azure-monitor-limits-data-collection-rules.md)] ## Diagnostic Settings ## Log queries and language
azure-monitor Monitor Virtual Machine Analyze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-analyze.md
Three namespaces are used by virtual machines.
|:|:|:| | Virtual Machine Host | Host metrics automatically collected for all Azure virtual machines. Detailed list of metrics at [Microsoft.Compute/virtualMachines](../essentials/metrics-supported.md#microsoftcomputevirtualmachines). | Collected automatically with no configuration required. | | Guest (classic) | Limited set of guest operating system and application performance data. Available in metrics explorer but not other Azure Monitor features, such as metric alerts. | [Diagnostic extension](../agents/diagnostics-extension-overview.md) installed. Data is read from Azure Storage. |
-| Virtual Machine Guest | Guest operating system and application performance data available to all Azure Monitor features using metrics. | [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) installed with a [Data Collection Rule](../agents/data-collection-rule-overview.md). |
+| Virtual Machine Guest | Guest operating system and application performance data available to all Azure Monitor features using metrics. | [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) installed with a [Data Collection Rule](../essentials/data-collection-rule-overview.md). |
## Analyze log data with Log Analytics By using Log Analytics, you can perform custom analysis of your log data. Use Log Analytics when you want to dig deeper into the data used to create the views in VM insights. You might want to analyze different logic and aggregations of that data, correlate security data collected by Microsoft Defender for Cloud and Microsoft Sentinel with your health and availability data, or work with data collected for your [workloads](monitor-virtual-machine-workloads.md).
azure-monitor Tutorial Monitor Vm Guest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/tutorial-monitor-vm-guest.md
Last updated 11/08/2021
# Tutorial: Collect guest logs and metrics from Azure virtual machine
-When you [enable monitoring with VM insights](tutorial-monitor-vm-enable.md), it collects performance data using the Log Analytics agent. To collect logs from the guest operating system and to send performance data to Azure Monitor Metrics, install the [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) and create a [data collection rule](../agents/data-collection-rule-overview.md) (DCR) that defines the data to collect and where to send it.
+When you [enable monitoring with VM insights](tutorial-monitor-vm-enable.md), it collects performance data using the Log Analytics agent. To collect logs from the guest operating system and to send performance data to Azure Monitor Metrics, install the [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) and create a [data collection rule](../essentials/data-collection-rule-overview.md) (DCR) that defines the data to collect and where to send it.
> [!NOTE] > Prior to the Azure Monitor agent, guest metrics for Azure virtual machines were collected with the [Azure diagnostic extension](../agents/diagnostics-extension-overview.md) for Windows (WAD) and Linux (LAD). These agents are still available and can be configured with the **Diagnostic settings** menu item for the virtual machine, but they are in the process of being replaced with Azure Monitor agent.
To complete this tutorial you need the following:
## Create data collection rule
-[Data collection rules](../agents/data-collection-rule-overview.md) in Azure Monitor define data to collect and where it should be sent. When you define the data collection rule using the Azure portal, you specify the virtual machines it should be applied to. The Azure Monitor agent will automatically be installed on any virtual machines that don't already have it.
+[Data collection rules](../essentials/data-collection-rule-overview.md) in Azure Monitor define data to collect and where it should be sent. When you define the data collection rule using the Azure portal, you specify the virtual machines it should be applied to. The Azure Monitor agent will automatically be installed on any virtual machines that don't already have it.
> [!NOTE] > You must currently install the Azure Monitor agent from **Monitor** menu in the Azure portal. This functionality is not yet available from the virtual machine's menu.
azure-monitor Vminsights Health Configure Dcr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-health-configure-dcr.md
The following table lists the default configuration for each monitor. This defau
## Overrides An *override* changes one or more properties of a monitor. For example, an override could disable a monitor that's enabled by default, define warning criteria for the monitor, or modify the monitor's critical threshold.
-Overrides are defined in a [Data Collection Rule (DCR)](../agents/data-collection-rule-overview.md). You can create multiple DCRs with different sets of overrides and apply them to multiple virtual machines. You apply a DCR to a virtual machine by creating an association as described in [Configure data collection for the Azure Monitor agent (preview)](../agents/data-collection-rule-azure-monitor-agent.md#data-collection-rule-associations).
+Overrides are defined in a [Data Collection Rule (DCR)](../essentials/data-collection-rule-overview.md). You can create multiple DCRs with different sets of overrides and apply them to multiple virtual machines. You apply a DCR to a virtual machine by creating an association as described in [Configure data collection for the Azure Monitor agent (preview)](../agents/data-collection-rule-azure-monitor-agent.md#data-collection-rule-associations).
## Multiple overrides
For a sample data collection rule enabling guest monitoring, see [Enable a virtu
## Next steps -- Read more about [data collection rules](../agents/data-collection-rule-overview.md).
+- Read more about [data collection rules](../essentials/data-collection-rule-overview.md).
azure-monitor Vminsights Health Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-health-enable.md
There are three steps required to enable virtual machines using Azure Resource M
> [!NOTE] > If you enable a virtual machine using the Azure portal, then the data collection rule described here is created for you. In this case, you do not need to perform this step.
-Configuration for the monitors in VM insights guest health is stored in [data Collection Rules (DCR)](../agents/data-collection-rule-overview.md). Each virtual machine with the guest health extension will need an association with this rule.
+Configuration for the monitors in VM insights guest health is stored in [data collection rules (DCR)](../essentials/data-collection-rule-overview.md). Each virtual machine with the guest health extension will need an association with this rule.
> [!NOTE] > You can create additional data collection rules to modify the default configuration of monitors as described in [Configure monitoring in VM insights guest health (preview)](vminsights-health-configure.md).
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
Azure NetApp Files standard network features are supported for the following reg
* North Central US * South Central US * West US 3
+* West Europe
## Considerations
azure-netapp-files Cross Region Replication Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-introduction.md
na Previously updated : 02/09/2022 Last updated : 02/23/2022 # Cross-region replication of Azure NetApp Files volumes
-The Azure NetApp Files replication functionality provides data protection through cross-region volume replication. You can asynchronously replicate data from an Azure NetApp Files volume (source) in one region to another Azure NetApp Files volume (destination) in another region. This capability enables you to failover your critical application in case of a region-wide outage or disaster.
+The Azure NetApp Files replication functionality provides data protection through cross-region volume replication. You can asynchronously replicate data from an Azure NetApp Files volume (source) in one region to another Azure NetApp Files volume (destination) in another region. This capability enables you to fail over your critical application if a region-wide outage or disaster happens.
## <a name="supported-region-pairs"></a>Supported cross-region replication pairs
-Azure NetApp Files volume replication is supported between various [Azure regional pairs](../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) and non-pairs. Azure NetApp Files volume replication is currently available between the following regions:
+Azure NetApp Files volume replication is supported between various [Azure regional pairs](../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) and non-standard pairs. Azure NetApp Files volume replication is currently available between the following regions. You can replicate Azure NetApp Files volumes from Regional Pair A to Regional Pair B, and vice versa.
### Azure regional pairs
Recovery Time Objective (RTO), or the maximum tolerable business application dow
## Cost model for cross-region replication
-With Azure NetApp Files cross-region replication, you pay only for the amount of data you replicate. There is no setup charges or minimum usage fee. The replication price is based on the replication frequency and the region of the *destination* volume you choose during the initial replication configuration. See the [Azure NetApp Files Pricing](https://azure.microsoft.com/pricing/details/netapp/) page for more information.
+With Azure NetApp Files cross-region replication, you pay only for the amount of data you replicate. There's no setup charge or minimum usage fee. The replication price is based on the replication frequency and the region of the *destination* volume you choose during the initial replication configuration. For more information, see the [Azure NetApp Files Pricing](https://azure.microsoft.com/pricing/details/netapp/) page.
Regular Azure NetApp Files storage capacity charge applies to the replication destination volume (also called the *data protection* volume).
Assume the following situations:
* Your *source* volume is from the Azure NetApp Files *Premium* service level. It has a volume quota size of 1000 GiB and a volume consumed size of 500 GiB at the beginning of the first day of a month. The volume is in the *US South Central* region. * Your *destination* volume is from the Azure NetApp Files *Standard* service level. It is in the *US East 2* region. * You've configured an *hourly* based cross-region replication between the two volumes above. Therefore, the price of replication is $0.12 per GiB.
-* For simplicity, assume your source volume has a constant 0.5-GiB data change every hour, but the total volume consumed size does not grow (remains at 500 GiB).
+* For simplicity, assume your source volume has a constant 0.5-GiB data change every hour, but the total volume consumed size doesn't grow (remains at 500 GiB).
After the initial setup, the baseline replication happens immediately.
Regular Azure NetApp Files storage capacity charge applies to the destination vo
#### Example 2: Month 2 incremental replications and resync replications
-Assume you have a source volume, a destination volume, and a replication relationship between the two set up as described in Example 1. For 29 days of the second month (a 30-day month), the hourly replications occurred as expected.
+Assume you have a source volume, a destination volume, and a replication relationship between the two, set up as described in Example 1. For 29 days of the second month (a 30-day month), the hourly replications occurred as expected.
* Sum of data amount replicated across incremental replications for 29 days: `0.5 GiB * 24 hours * 29 days = 348 GiB`
azure-netapp-files Enable Continuous Availability Existing SMB https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/enable-continuous-availability-existing-SMB.md
na Previously updated : 12/02/2021 Last updated : 02/23/2022 # Enable Continuous Availability on existing SMB volumes
You can enable the SMB Continuous Availability (CA) feature when you [create a n
> > See the [**Enable Continuous Availability**](azure-netapp-files-create-volumes-smb.md#continuous-availability) option for additional details and considerations.
-## Considerations
-
-* The [**Hide Snapshot Path**](snapshots-edit-hide-path.md) option currently does not have any effect for CA-enabled SMB volumes.
-
-* The `~snapshot` directory (which can be used to traverse in other SMB volumes) is not visible for CA-enabled SMB volumes. You can still manually type `~snapshot\<snapshotName>` to access the snapshot.
- ## Steps 1. Make sure that you have [registered the SMB Continuous Availability Shares](https://aka.ms/anfsmbcasharespreviewsignup) feature.
azure-percept Azure Percept Devkit Software Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/azure-percept-devkit-software-release-notes.md
This page provides information of changes and fixes for each Azure Percept DK OS
To download the update images, refer to [Azure Percept DK software releases for USB cable update](./software-releases-usb-cable-updates.md) or [Azure Percept DK software releases for OTA update](./software-releases-over-the-air-updates.md).
+## February (2202) Release
+
+- Operating System
+ - Latest security updates on vim and expat packages.
+ ## January (2201) Release - Setup Experience
azure-percept Create People Counting Solution With Azure Percept Devkit Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/create-people-counting-solution-with-azure-percept-devkit-vision.md
Title: Create a people counting solution with Azure Percept Vision description: This guide will focus on detecting and counting people using the Azure Percept DK hardware, Azure IoT Hub, Azure Stream Analytics, and Power BI dashboard. -+
azure-percept How To Update Over The Air https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-update-over-the-air.md
Follow this guide to learn how to update the OS and firmware of the carrier boar
- [Azure subscription](https://azure.microsoft.com/free/) - [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md): you connected your dev kit to a Wi-Fi network, created an IoT Hub, and connected your dev kit to the IoT Hub - [Device Update for IoT Hub has been successfully configured](./how-to-set-up-over-the-air-updates.md)-- Make sure you are using the Devic Update for IoT Hub with its **old version** (public preview) UX. When navigate to "device management - updates" in your IoT Hub, click the **"switch to the older version"** link in the banner.
+- Make sure you are using the **old version** of the Device Update for IoT Hub. To do that, navigate to **Device management** > **Updates** in your IoT Hub, and select the **switch to the older version** link in the banner.
:::image type="content" source="media/how-to-update-over-the-air/switch-banner.png" alt-text="Screenshot of banner." lightbox="media/how-to-update-over-the-air/switch-banner.png"::: > [!CAUTION]
- > As the Device Update for IoT Hub has launched the public preview refresh, the new UX is only compatible with edge device that is using the newer client agent. The devkit is current using an older version of client agent, therefore you need to use the old device update UX accordingly. **Otherwise you will encounter issues when import updates or group device for deploying updates.**
+ > The devkit is currently incompatible with the latest changes in the Device Update for IoT Hub service. Therefore, it is important to switch to the **older version** of the Device Update for IoT Hub as instructed above before moving forward.
## Import your update file and manifest file
azure-percept Software Releases Over The Air Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/software-releases-over-the-air-updates.md
Microsoft would service each dev kit release with OTA packages. However, as ther
>[!IMPORTANT] >If the current version of your dev kit isn't included in any of the releases below, it's NOT supported for OTA update. Please do a USB cable update to get to the latest version.
+>[!CAUTION]
+>Make sure you are using the **old version** of the Device Update for IoT Hub. To do that, navigate to **Device management > Updates** in your IoT Hub, and select the **switch to the older version** link in the banner. For more information, refer to [Update Azure Percept DK over-the-air](./how-to-update-over-the-air.md).
+ **Latest release:** |Release|Applicable Version(s)|Download Links|Note| |||||
-|January Service Release (2201)|2021.106.111.115,<br>2021.107.129.116,<br>2021.109.129.108, <br>2021.111.124.109 |[2022.101.112.106 OTA update package](<https://download.microsoft.com/download/e/b/3/eb3a3c51-a60a-4d45-9406-9a4805127c62/2022.101.112.106 OTA update package.zip>)||
+|February Service Release (2202)|2021.106.111.115,<br>2021.107.129.116,<br>2021.109.129.108, <br>2021.111.124.109, <br>2022.101.112.106|[2022.102.109.102 OTA update package](<https://download.microsoft.com/download/f/f/3/ff37dfee-ee0e-4b2d-82ef-5926062fcdbd/2022.102.109.102 OTA update package.zip>)|Make sure you are using the **old version** of the Device Update for IoT Hub. To do that, navigate to **Device management > Updates** in your IoT Hub, select the **switch to the older version** link in the banner. For more information, please refer to [Update Azure Percept DK over-the-air](./how-to-update-over-the-air.md).|
**Hard-stop releases:**
azure-percept Software Releases Usb Cable Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/software-releases-usb-cable-updates.md
This page provides information and download links for all the dev kit OS/firmwar
## Latest releases - **Latest service release**
-January Service Release (2201): [Azure-Percept-DK-1.0.20220112.1519-public_preview_1.0.zip](<https://download.microsoft.com/download/1/6/4/164cfcf2-ce52-4e75-9dee-63bb4a128e71/Azure-Percept-DK-1.0.20220112.1519-public_preview_1.0.zip>)
+February Service Release (2202): [Azure-Percept-DK-1.0.20220209.1156-public_preview_1.0.zip](<https://download.microsoft.com/download/f/8/6/f86ce7b3-8d76-494e-82d9-dcfb71fc2580/Azure-Percept-DK-1.0.20220209.1156-public_preview_1.0.zip>)
+ - **Latest major update or known stable version** Feature Update (2104): [Azure-Percept-DK-1.0.20210409.2055.zip](https://download.microsoft.com/download/6/4/d/64d53e60-f702-432d-a446-007920a4612c/Azure-Percept-DK-1.0.20210409.2055.zip)
Feature Update (2104): [Azure-Percept-DK-1.0.20210409.2055.zip](https://download
|Release|Download Links|Note| |||::|
+|February Service Release (2202)|[Azure-Percept-DK-1.0.20220209.1156-public_preview_1.0.zip](<https://download.microsoft.com/download/f/8/6/f86ce7b3-8d76-494e-82d9-dcfb71fc2580/Azure-Percept-DK-1.0.20220209.1156-public_preview_1.0.zip>)||
|January Service Release (2201)|[Azure-Percept-DK-1.0.20220112.1519-public_preview_1.0.zip](<https://download.microsoft.com/download/1/6/4/164cfcf2-ce52-4e75-9dee-63bb4a128e71/Azure-Percept-DK-1.0.20220112.1519-public_preview_1.0.zip>)|| |November Service Release (2111)|[Azure-Percept-DK-1.0.20211124.1851-public_preview_1.0.zip](<https://download.microsoft.com/download/9/5/4/95464a73-109e-46c7-8624-251ceed0c5ea/Azure-Percept-DK-1.0.20211124.1851-public_preview_1.0.zip>)|| |September Service Release (2109)|[Azure-Percept-DK-1.0.20210929.1747-public_preview_1.0.zip](https://go.microsoft.com/fwlink/?linkid=2174462)||
azure-percept Voice Control Your Inventory Then Visualize With Power Bi Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/voice-control-your-inventory-then-visualize-with-power-bi-dashboard.md
Title: Voice control your inventory with Azure Percept Audio description: This article will give detailed instructions for building the main components of the solution and deploying the edge speech AI.-+
azure-resource-manager Add Template To Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/add-template-to-azure-pipelines.md
Title: CI/CD with Azure Pipelines and Bicep files description: In this quickstart, you learn how to configure continuous integration in Azure Pipelines by using Bicep files. It shows how to use an Azure CLI task to deploy a Bicep file.- - Previously updated : 11/16/2021- Last updated : 02/23/2022 + # Quickstart: Integrate Bicep with Azure Pipelines This quickstart shows you how to integrate Bicep files with Azure Pipelines for continuous integration and continuous deployment (CI/CD).
variables:
azureServiceConnection: '<your-connection-name>' resourceGroupName: 'exampleRG' location: '<your-resource-group-location>'
- templateFile: './main.bicep'
+ templateFile: 'main.bicep'
pool: vmImage: $(vmImageName)
azure-resource-manager Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Resource Manager description: Sample Azure Resource Graph queries for Azure Resource Manager showing use of resource types and tables to access Azure Resource Manager related resources and properties. Previously updated : 01/20/2022 Last updated : 02/16/2022
azure-sql Connect Query Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/connect-query-portal.md
Previously updated : 03/01/2021 Last updated : 02/18/2022 # Quickstart: Use the Azure portal's query editor (preview) to query an Azure SQL Database [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-The query editor is a tool in the Azure portal for running SQL queries against your database in Azure SQL Database or data warehouse in Azure Synapse Analytics.
+The query editor is a tool in the Azure portal to run SQL queries against your database in Azure SQL Database or data warehouse in Azure Synapse Analytics.
In this quickstart, you'll use the query editor to run Transact-SQL (T-SQL) queries against a database. ## Prerequisites
+Completing this quickstart requires the AdventureWorksLT sample database. You may optionally wish to set an Azure Active Directory (Azure AD) admin for your [server](logical-servers.md).
+ ### Create a database with sample data
-Completing this quickstart requires the AdventureWorksLT sample database. If you don't have a working copy of the AdventureWorksLT sample database in SQL Database, the following quickstart helps you quickly create one:
+If you don't have a working copy of the AdventureWorksLT sample database in SQL Database, the following quickstart helps you quickly create one:
[Quickstart: Create a database in Azure SQL Database using the Azure portal, PowerShell, or Azure CLI](single-database-create-quickstart.md)
This process is optional, you can instead use SQL authentication to connect to t
5. Back in the SQL Server **Active Directory admin** page toolbar, select **Save**.
-## Using SQL Query Editor
+## Use the SQL Query Editor
1. Sign in to the [Azure portal](https://portal.azure.com/) and select the database you want to query.
Run the following [DELETE](/sql/t-sql/statements/delete-transact-sql/) T-SQL sta
2. Select **Run** to delete the specified row in the `Product` table. The **Messages** pane displays **Query succeeded: Affected rows: 1**.
-## Troubleshooting and considerations
+## Troubleshooting
There are a few things to know when working with the query editor.
If you get one of the following errors in the query editor:
- *Your local network settings might be preventing the Query Editor from issuing queries. Please click here for instructions on how to configure your network settings* - *A connection to the server could not be established. This might indicate an issue with your local firewall configuration or your network proxy settings*
-This is because the query editor uses port 443 and 1443 to communicate. You will need to ensure you have enabled outbound HTTPS traffic on these ports. The instructions below will walk you through how to do this, depending on your Operating System. You might need to work with your corporate IT to grant approval to open this connection on your local network.
+These errors occur because the query editor uses ports 443 and 1443 to communicate. You will need to ensure you have enabled outbound HTTPS traffic on these ports. The instructions below will walk you through how to do this, depending on your operating system. You might need to work with your corporate IT to grant approval to open this connection on your local network.
#### Steps for Windows
-1. Open **Windows Defender Firewall**
-2. On the left-side menu, select **Advanced settings**
+1. Open **Windows Defender Firewall**.
+2. On the left-side menu, select **Advanced settings**.
3. In **Windows Defender Firewall with Advanced Security**, select **Outbound rules** on the left-side menu.
-4. Select **New Rule...** on the right-side menu
+4. Select **New Rule...** on the right-side menu.
In the **New outbound rule wizard** follow these steps:
-1. Select **port** as the type of rule you want to create. Select **Next**
-2. Select **TCP**
-3. Select **Specific remote ports** and enter "443, 1443". Then select **Next**
-4. Select "Allow the connection if it is secure"
-5. Select **Next** then select **Next** again
-5. Keep "Domain", "Private", and "Public" all selected
-6. Give the rule a name, for example "Access Azure SQL query editor" and optionally a description. Then select **Finish**
+1. Select **port** as the type of rule you want to create. Select **Next**.
+2. Select **TCP**.
+3. Select **Specific remote ports** and enter "443, 1443". Then select **Next**.
+4. Select "Allow the connection if it is secure".
+5. Select **Next**, and then select **Next** again.
+6. Keep "Domain", "Private", and "Public" all selected.
+7. Give the rule a name, for example "Access Azure SQL query editor", and optionally a description. Then select **Finish**. Alternatively, you can create an equivalent rule from PowerShell, as shown after these steps.
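+
+If you prefer to create this rule from a script rather than the wizard, the following PowerShell sketch (run from an elevated prompt) creates a simplified equivalent. The rule name is just an example, and unlike wizard step 4 it allows the connection outright rather than requiring it to be secure.
+
+```PowerShell
+# Outbound rule allowing TCP ports 443 and 1443, roughly equivalent to the wizard steps above.
+New-NetFirewallRule -DisplayName "Access Azure SQL query editor" `
+    -Direction Outbound `
+    -Protocol TCP `
+    -RemotePort 443,1443 `
+    -Action Allow `
+    -Profile Domain,Private,Public
+```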
#### Steps for Mac 1. Open **System Preferences** (Apple menu > System Preferences).
-2. Click **Security & Privacy**.
-3. Click **Firewall**.
-4. If Firewall is off, select **Click the lock to make changes** at the bottom and select **Turn on Firewall**
-4. Click **Firewall Options**.
-5. In the **Security & Privacy** window select this option: "Automatically allow signed software to receive incoming connections."
+2. Select **Security & Privacy**.
+3. Select **Firewall**.
+4. If Firewall is off, select **Click the lock to make changes** at the bottom and select **Turn on Firewall**.
+5. Select **Firewall Options**.
+6. In the **Security & Privacy** window, select this option: 'Automatically allow signed software to receive incoming connections'.
#### Steps for Linux Run these commands to update iptables
Run these commands to update iptables
### Connection considerations
-* For public connections to query editor, you need to [add your outbound IP address to the server's allowed firewall rules](firewall-create-server-level-portal-quickstart.md) to access your databases and data warehouses.
-
-* If you have a Private Link connection set up on the server and you are connecting to query editor from an IP in the private Virtual Network, the Query Editor works without needing to add the Client IP address into the SQL database server firewall rules.
-
-* The most basic RBAC permissions needed to use query editor are Read access to the server and database. Anyone with this level of access can access the query editor feature. To limit access to particular users, you must prevent them from being able to sign in to the query editor with Azure Active Directory or SQL authentication credentials. If they cannot assign themselves as the AAD admin for the server or access/add a SQL administrator account, they should not be able to use query editor.
-
-* Query editor doesn't support connecting to the `master` database.
+- For public connections to query editor, you need to [add your outbound IP address to the server's allowed firewall rules](firewall-create-server-level-portal-quickstart.md) to access your databases and data warehouses.
+- If you have a Private Link connection set up on the server and you are connecting to query editor from an IP in the private Virtual Network, the Query Editor works without needing to add the Client IP address into the SQL database server firewall rules.
+- The most basic role-based access control (RBAC) permissions needed to use the query editor are read access to the server and database. Anyone with this level of access can access the query editor feature. To limit access to particular users, you must prevent them from being able to sign in to the query editor with Azure Active Directory or SQL authentication credentials. If they cannot assign themselves as the Azure AD admin for the server or access/add a SQL administrator account, they should not be able to use query editor.
+- If you see the error message "The X-CSRF-Signature header could not be validated", take the following action to resolve the issue:
+ - Verify that your computer's clock is set to the right time and time zone. You can also try to match your computer's time zone with Azure by searching for the time zone for the location of your instance, such as East US, Pacific, and so on.
+ - If you are on a proxy network, make sure that the request header "X-CSRF-Signature" is not being modified or dropped.
-* Query editor cannot connect to a replica database with `ApplicationIntent=ReadOnly`
-* If you saw this error message "The X-CSRF-Signature header could not be validated", take the following action to resolve the issue:
+## Limitations
- * Make sure your computer's clock is set to the right time and time zone. You can also try to match your computer's time zone with Azure by searching for the time zone for the location of your instance, such as East US, Pacific, and so on.
- * If you are on a proxy network, make sure that the request header ΓÇ£X-CSRF-SignatureΓÇ¥ is not being modified or dropped.
+- Query editor doesn't support connecting to the `master` database. To connect to the `master` database, explore one or more clients in [Next steps](#next-steps).
+- Query editor cannot connect to a [replica database](read-scale-out.md) with `ApplicationIntent=ReadOnly`. To connect in this way from a rich client, you can connect using SQL Server Management Studio and specify `ApplicationIntent=ReadOnly` in the 'Additional Connection Parameters' [tab in connection options](/sql/database-engine/availability-groups/windows/listeners-client-connectivity-application-failover#ConnectToSecondary).
+- Query editor has a 5-minute timeout for query execution. To run longer queries, explore one or more clients in [Next steps](#next-steps).
+- Query editor only supports cylindrical projection for geography data types.
+- Query editor does not support IntelliSense for database tables and views, but does support autocomplete on names that have already been typed. For IntelliSense support, explore one or more clients in [Next steps](#next-steps).
+- Pressing **F5** refreshes the query editor page. Any query being worked on will be lost.
-### Other considerations
-
-* Pressing **F5** refreshes the query editor page and any query being worked on is lost.
-
-* There's a 5-minute timeout for query execution.
+## Next steps
-* The query editor only supports cylindrical projection for geography data types.
+You can query a database in Azure SQL Database with a variety of clients, including:
-* There's no support for IntelliSense for database tables and views, but the editor does support autocomplete on names that have already been typed.
+- [Use SSMS to connect to and query Azure SQL Database or Azure SQL Managed Instance](connect-query-ssms.md).
+- [Use Visual Studio Code to connect and query](connect-query-vscode.md).
+- [Use Azure Data Studio to connect and query Azure SQL database](/sql/azure-data-studio/quickstart-sql-database).
-## Next steps
+Learn more about Azure SQL Database in the following articles:
-To learn more about the Transact-SQL (T-SQL) supported in Azure SQL Database, see [Resolving Transact-SQL differences during migration to SQL Database](transact-sql-tsql-differences-sql-server.md).
+- [Learn more about the Transact-SQL (T-SQL) supported in Azure SQL Database](transact-sql-tsql-differences-sql-server.md).
+- [Azure SQL glossary of terms](../glossary-terms.md).
+- [What is Azure SQL?](../azure-sql-iaas-vs-paas-what-is-overview.md)
azure-sql Logins Create Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/logins-create-manage.md
You can create accounts for non-administrative users using one of two methods:
With this approach, the user authentication information is stored in each database, and replicated to geo-replicated databases automatically. However, if the same account exists in multiple databases and you are using Azure SQL Authentication, you must keep the passwords synchronized manually. Additionally, if a user has an account in different databases with different passwords, remembering those passwords can become a problem. > [!IMPORTANT]
-> To create contained users mapped to Azure AD identities, you must be logged in using an Azure AD account that is an administrator in the database in Azure SQL Database. In SQL Managed Instance, a SQL login with `sysadmin` permissions can also create an Azure AD login or user.
+> To create contained users mapped to Azure AD identities, you must be signed in to the database in Azure SQL Database using an Azure AD account. In SQL Managed Instance, a SQL login with `sysadmin` permissions can also create an Azure AD login or user.
For examples showing how to create logins and users, see:
azure-sql Long Term Retention Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/long-term-retention-overview.md
The following table illustrates the cadence and expiration of the long-term back
W=12 weeks (84 days), M=12 months (365 days), Y=10 years (3650 days), WeekOfYear=20 (week after May 13) ![ltr example](./media/long-term-retention-overview/ltr-example.png)-
+
If you modify the above policy and set W=0 (no weekly backups), Azure only retains the monthly and yearly backups. No weekly backups are stored under the LTR policy. The storage amount needed to keep these backups reduces accordingly. > [!IMPORTANT] > The timing of individual LTR backups is controlled by Azure. You cannot manually create an LTR backup or control the timing of the backup creation. After configuring an LTR policy, it may take up to 7 days before the first LTR backup will show up on the list of available backups.
+>
+> If you delete a server or a managed instance, all databases on that server or managed instance are also deleted and can't be recovered. You can't restore a deleted server or managed instance. However, if you had configured LTR for a database or managed instance, LTR backups are not deleted, and they can be used to restore databases on a different server or managed instance in the same subscription, to a point in time when an LTR backup was taken.
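+
+If the source server or managed instance has been deleted, you restore from the LTR backup's resource ID. The following PowerShell sketch is illustrative only: the location, server, database, and target names are placeholders, and it assumes the Az.Sql module.
+
+```PowerShell
+# List the LTR backups that remain for the source database and pick the most recent one.
+$backup = Get-AzSqlDatabaseLongTermRetentionBackup -Location "westus2" `
+    -ServerName "source-server-name" -DatabaseName "source-database-name" |
+    Sort-Object BackupTime -Descending | Select-Object -First 1
+
+# Restore it as a new database on a different server in the same subscription.
+Restore-AzSqlDatabase -FromLongTermRetentionBackup `
+    -ResourceId $backup.ResourceId `
+    -ResourceGroupName "target-resource-group" `
+    -ServerName "target-server-name" `
+    -TargetDatabaseName "restored-database-name"
+```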
## Geo-replication and long-term backup retention
azure-sql Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure SQL Database description: Sample Azure Resource Graph queries for Azure SQL Database showing use of resource types and tables to access Azure SQL Database related resources and properties. Previously updated : 01/20/2022 Last updated : 02/16/2022
azure-sql Troubleshoot Common Errors Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/troubleshoot-common-errors-issues.md
To resolve this issue, follow these steps:
![Connection properties](./media/troubleshoot-common-errors-issues/cannot-open-database-master.png)
+## Read-only errors
+
+If you attempt to write to a database that is read-only, you'll receive an error. In some scenarios, the cause of the database's read-only status may not be immediately clear.
+
+### Error 3906: Failed to update database "DatabaseName" because the database is read-only.
+
+When attempting to modify a read-only database, the following error will be raised.
+
+```
+Msg 3906, Level 16, State 2, Line 1
+Failed to update database "%d" because the database is read-only.
+```
+
+#### You may be connected to a read-only replica
+
+For both Azure SQL Database and Azure SQL Managed Instance, you may be connected to a database on a read-only replica. In this case, the following query using the [DATABASEPROPERTYEX() function](/sql/t-sql/functions/databasepropertyex-transact-sql) will return `READ_ONLY`:
+
+```sql
+SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability');
+GO
+```
+
+If you're connecting using SQL Server Management Studio, verify if you have specified `ApplicationIntent=ReadOnly` in the **Additional Connection Parameters** [tab on your connection options](/sql/database-engine/availability-groups/windows/listeners-client-connectivity-application-failover#ConnectToSecondary).
+
+If the connection is from an application or a client using a connection string, validate if the connection string has specified `ApplicationIntent=ReadOnly`. Learn more in [Connect to a read-only replica](read-scale-out.md#connect-to-a-read-only-replica).
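+
+For example, the following PowerShell sketch (placeholders throughout; it assumes a recent SqlServer module) opens a connection with `ApplicationIntent=ReadOnly` and shows that the session reports `READ_ONLY`. Removing that setting, or changing it to `ReadWrite`, directs the connection to the primary replica instead.
+
+```PowerShell
+# Placeholder connection string; ApplicationIntent=ReadOnly routes the session to a read-only replica.
+$connectionString = "Server=tcp:yourserver.database.windows.net,1433;Database=yourdatabase;User ID=youruser;Password=yourpassword;Encrypt=True;ApplicationIntent=ReadOnly"
+
+Invoke-Sqlcmd -ConnectionString $connectionString `
+    -Query "SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS Updateability;"
+```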
+
+#### The database may be set to read-only
+
+If you're using Azure SQL Database, the database itself may have been set to read-only. You can verify the database's status with the following query:
+
+```sql
+SELECT name, is_read_only
+FROM sys.databases
+WHERE database_id = DB_ID();
+```
+
+You can modify the read-only status for a database in Azure SQL Database using [ALTER DATABASE Transact-SQL](/sql/t-sql/statements/alter-database-transact-sql?view=azuresqldb-current&preserve-view=true). You can't currently set a database in a managed instance to read-only.
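+
+For example, the following PowerShell sketch (placeholders throughout; it assumes the SqlServer module) runs the statement against the logical server to set a user database back to read-write:
+
+```PowerShell
+# Placeholders: substitute your own server, credentials, and database name.
+# If the statement isn't permitted from your login's default database, run it
+# while connected to the database you're changing.
+Invoke-Sqlcmd -ServerInstance "yourserver.database.windows.net" `
+    -Username "youradmin" -Password "yourpassword" `
+    -Query "ALTER DATABASE [yourdatabase] SET READ_WRITE;"
+```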
+ ## Confirm whether an error is caused by a connectivity issue To confirm whether an error is caused by a connectivity issue, review the stack trace for frames that show calls to open a connection like the following ones (note the reference to the **SqlConnection** class):
For more information about how to enable logging, see [Enable diagnostics loggin
## Next steps
+Learn more about related topics in the following articles:
+ - [Azure SQL Database connectivity architecture](./connectivity-architecture.md) - [Azure SQL Database and Azure Synapse Analytics network access controls](./network-access-controls-overview.md)-
-## See also
- - [Troubleshooting transaction log errors with Azure SQL Database and Azure SQL Managed Instance](troubleshoot-transaction-log-errors-issues.md) - [Troubleshoot transient connection errors in SQL Database and SQL Managed Instance](troubleshoot-common-connectivity-issues.md)
azure-sql Hadr Cluster Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/hadr-cluster-best-practices.md
For your SQL Server availability group or failover cluster instance, consider th
- Use a unique DNN port in the connection string when connecting to the DNN listener for an availability group. - Use a database mirroring connection string for a basic availability group to bypass the need for a load balancer or DNN. - Validate the sector size of your VHDs before deploying your high availability solution to avoid having misaligned I/Os. See [KB3009974](https://support.microsoft.com/topic/kb3009974-fix-slow-synchronization-when-disks-have-different-sector-sizes-for-primary-and-secondary-replica-log-files-in-sql-server-ag-and-logshipping-environments-ed181bf3-ce80-b6d0-f268-34135711043c) to learn more. -
+- If the SQL Server database engine, Always On availability group listener, or failover cluster instance health probe are configured to use a port between 49,152 and 65,536 (the [default dynamic port range for TCP/IP](/windows/client-management/troubleshoot-tcpip-port-exhaust#default-dynamic-port-range-for-tcpip)), add an exclusion for each port. Doing so will prevent other systems from being dynamically assigned the same port. The following example creates an exclusion for port 59999:
+`netsh int ipv4 add excludedportrange tcp startport=59999 numberofports=1 store=persistent`
## VM availability settings
azure-sql Performance Guidelines Best Practices Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist.md
For your SQL Server availability group or failover cluster instance, consider th
- Use a unique DNN port in the connection string when connecting to the DNN listener for an availability group. - Use a database mirroring connection string for a basic availability group to bypass the need for a load balancer or DNN. - Validate the sector size of your VHDs before deploying your high availability solution to avoid having misaligned I/Os. See [KB3009974](https://support.microsoft.com/topic/kb3009974-fix-slow-synchronization-when-disks-have-different-sector-sizes-for-primary-and-secondary-replica-log-files-in-sql-server-ag-and-logshipping-environments-ed181bf3-ce80-b6d0-f268-34135711043c) to learn more. -
+- If the SQL Server database engine, Always On availability group listener, or failover cluster instance health probe is configured to use a port between 49,152 and 65,535 (the [default dynamic port range for TCP/IP](/windows/client-management/troubleshoot-tcpip-port-exhaust#default-dynamic-port-range-for-tcpip)), add an exclusion for each port. Doing so will prevent other systems from being dynamically assigned the same port. The following example creates an exclusion for port 59999:
+`netsh int ipv4 add excludedportrange tcp startport=59999 numberofports=1 store=persistent`
To learn more, see the comprehensive [HADR best practices](hadr-cluster-best-practices.md).
azure-web-pubsub Reference Server Sdk Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-csharp.md
You can use this library in your app server side to manage the WebSocket client
![The workflow diagram shows the workflow of using the service client library.](media/sdk-reference/service-client-overflow.png) Use this library to:+ - Send messages to hubs and groups. - Send messages to particular users and connections. - Organize users and connections into groups. - Close connections. - Grant, revoke, and check permissions for an existing connection.
-Details about the terms used here are described in [Key concepts](#key-concepts) section.
-
[Source code](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/webpubsub/Azure.Messaging.WebPubSub/src) | [Package](https://www.nuget.org/packages/Azure.Messaging.WebPubSub) |
-[API reference documentation]() |
+[API reference documentation](https://docs.microsoft.com/dotnet/api/overview/azure/messaging.webpubsub-readme-pre) |
[Product documentation](./index.yml) | [Samples][samples_ref]
In order to interact with the service, you'll need to create an instance of the
var serviceClient = new WebPubSubServiceClient(new Uri(endpoint), "some_hub", new AzureKeyCredential(key)); ```
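As a quick, non-authoritative sketch of the send operations listed above, the following lines reuse the `serviceClient` created in the snippet above; the group and user names are hypothetical placeholders:

```csharp
// Sketch only: reuses the serviceClient created above (Azure.Messaging.WebPubSub).
// The group and user names below are hypothetical.
serviceClient.SendToAll("Hello from the server!");                    // broadcast to the hub
serviceClient.SendToGroup("chatroom-1", "Welcome to chatroom-1!");    // send to a group
serviceClient.SendToUser("user-123", "You have a new notification."); // send to a user
```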
-## Key concepts
-
-### Connection
-
-A connection, also known as a client or a client connection, represents an individual WebSocket connection connected to the Web PubSub service. When successfully connected, a unique connection ID is assigned to this connection by the Web PubSub service.
-
-### Hub
-
-A hub is a logical concept for a set of client connections. Usually you use one hub for one purpose, for example, a chat hub, or a notification hub. When a client connection is created, it connects to a hub, and during its lifetime, it belongs to that hub. Different applications can share one Azure Web PubSub service by using different hub names.
-
-### Group
-
-A group is a subset of connections to the hub. You can add a client connection to a group, or remove the client connection from the group, anytime you want. For example, when a client joins a chat room, or when a client leaves the chat room, this chat room can be considered to be a group. A client can join multiple groups, and a group can contain multiple clients.
-
-### User
-
-Connections to Web PubSub can belong to one user. A user might have multiple connections, for example when a single user is connected across multiple devices or multiple browser tabs.
-
-### Message
-
-When a client is connected, it can send messages to the upstream application, or receive messages from the upstream application, through the WebSocket connection.
- ## Examples ### Broadcast a text message to all clients
serviceClient.SendToAll(RequestContent.Create(stream), ContentType.ApplicationOc
## Troubleshooting ### Setting up console logging
-You can also easily [enable console logging](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/core/Azure.Core/samples/Diagnostics.md#logging) if you want to dig deeper into the requests you're making against the service.
+
+You can also [enable console logging](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/core/Azure.Core/samples/Diagnostics.md#logging) if you want to dig deeper into the requests you're making against the service.
## Next steps [!INCLUDE [next step](includes/include-next-step.md)] - [azure_sub]: https://azure.microsoft.com/free/dotnet/ [samples_ref]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/webpubsub/Azure.Messaging.WebPubSub/tests/Samples/ [awps_sample]: https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp
azure-web-pubsub Reference Server Sdk Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-java.md
You can use this library in your app server side to manage the WebSocket client
![The workflow diagram shows the workflow of using the service client library.](media/sdk-reference/service-client-overflow.png) Use this library to:-- Send messages to hubs and groups. +
+- Send messages to hubs and groups.
- Send messages to particular users and connections. - Organize users and connections into groups. - Close connections - Grant, revoke, and check permissions for an existing connection
-Details about the terms used here are described in [Key concepts](#key-concepts) section.
- [Source code][source_code] | [API reference documentation][api] | [Product Documentation][product_documentation] | [Samples][samples_readme] ## Getting started
WebPubSubServiceClient webPubSubServiceClient = new WebPubSubServiceClientBuilde
.buildClient(); ```
-## Key concepts
-
-### Connection
-
-A connection, also known as a client or a client connection, represents an individual WebSocket connection connected to the Web PubSub service. When successfully connected, a unique connection ID is assigned to this connection by the Web PubSub service.
-
-### Hub
-
-A hub is a logical concept for a set of client connections. Usually you use one hub for one purpose, for example, a chat hub, or a notification hub. When a client connection is created, it connects to a hub, and during its lifetime, it belongs to that hub. Different applications can share one Azure Web PubSub service by using different hub names.
-
-### Group
-
-A group is a subset of connections to the hub. You can add a client connection to a group, or remove the client connection from the group, anytime you want. For example, when a client joins a chat room, or when a client leaves the chat room, this chat room can be considered to be a group. A client can join multiple groups, and a group can contain multiple clients.
-
-### User
-
-Connections to Web PubSub can belong to one user. A user might have multiple connections, for example when a single user is connected across multiple devices or multiple browser tabs.
-
-### Message
-
-When the client is connected, it can send messages to the upstream application, or receive messages from the upstream application, through the WebSocket connection.
- ## Examples
-* [Broadcast message to entire hub](#broadcast-all "Broadcast message to entire hub")
-* [Broadcast message to a group](#broadcast-group "Broadcast message to a group")
-* [Send message to a connection](#send-to-connection "Send message to a connection")
-* [Send message to a user](#send-to-user "Send message to a user")
+- [Broadcast message to entire hub](#broadcast-all "Broadcast message to entire hub")
+- [Broadcast message to a group](#broadcast-group "Broadcast message to a group")
+- [Send message to a connection](#send-to-connection "Send message to a connection")
+- [Send message to a user](#send-to-user "Send message to a user")
<a name="broadcast-all"></a>
webPubSubServiceClient.sendToUser("Andy", "Hello Andy!", WebPubSubContentType.TE
## Troubleshooting ### Enable client logging+ You can set the `AZURE_LOG_LEVEL` environment variable to view logging statements made in the client library. For example, setting `AZURE_LOG_LEVEL=2` would show all informational, warning, and error log messages. The log levels can be found here: [log levels][log_levels]. ### Default HTTP Client+ All client libraries by default use the Netty HTTP client. Adding the above dependency will automatically configure the client library to use the Netty HTTP client. Configuring or changing the HTTP client is detailed in the [HTTP clients wiki](https://github.com/Azure/azure-sdk-for-java/wiki/HTTP-clients). ### Default SSL library+ All client libraries, by default, use the Tomcat-native Boring SSL library to enable native-level performance for SSL operations. The Boring SSL library is an uber jar containing native libraries for Linux / macOS / Windows, and provides better performance compared to the default SSL implementation within the JDK. For more information, including how to
reduce the dependency size, refer to the [performance tuning][performance_tuning
[!INCLUDE [next step](includes/include-next-step.md)] - <!-- LINKS --> [azure_subscription]: https://azure.microsoft.com/free
azure-web-pubsub Reference Server Sdk Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-js.md
You can use this library in your app server side to manage the WebSocket client
- Close connections. - Grant, revoke, and check permissions for an existing connection.
-Details about the terms used here are described in [Key concepts](#key-concepts) section.
- [Source code](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/web-pubsub/web-pubsub) | [Package (NPM)](https://www.npmjs.com/package/@azure/web-pubsub) |
-[API reference documentation]() |
+[API reference documentation](https://docs.microsoft.com/javascript/api/overview/azure/webpubsub) |
[Product documentation](./index.yml) | [Samples][samples_ref]
Or authenticate the `WebPubSubServiceClient` using [Azure Active Directory][aad_
npm install @azure/identity ```
-2. Update the source code to use `DefaultAzureCredential`:
+1. Update the source code to use `DefaultAzureCredential`:
```js const { WebPubSubServiceClient, AzureKeyCredential } = require("@azure/web-pubsub");
const key = new DefaultAzureCredential();
const serviceClient = new WebPubSubServiceClient("<Endpoint>", key, "<hubName>"); ```
-### Key concepts
-
-#### Connection
-
-A connection, also known as a client or a client connection, represents an individual WebSocket connection connected to the Web PubSub service. When successfully connected, a unique connection ID is assigned to this connection by the Web PubSub service.
-
-#### Hub
-
-A hub is a logical concept for a set of client connections. Usually you use one hub for one purpose, for example, a chat hub, or a notification hub. When a client connection is created, it connects to a hub, and during its lifetime, it belongs to that hub. Different applications can share one Azure Web PubSub service by using different hub names.
-
-#### Group
-
-A group is a subset of connections to the hub. You can add a client connection to a group, or remove the client connection from the group, anytime you want. For example, when a client joins a chat room, or when a client leaves the chat room, this chat room can be considered to be a group. A client can join multiple groups, and a group can contain multiple clients.
-
-#### User
-
-Connections to Web PubSub can belong to one user. A user might have multiple connections, for example when a single user is connected across multiple devices or multiple browser tabs.
-
-#### Message
-
-When the client is connected, it can send messages to the upstream application, or receive messages from the upstream application, through the WebSocket connection.
- ### Examples #### Get the access token for a client to start the WebSocket connection
const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName
await serviceClient.sendToAll({ message: "Hello world!" }, { onResponse }); ```
-### Troubleshooting
+### Service client troubleshooting
#### Enable logs
Use **Live Trace** from the Web PubSub service portal to view the live traffic.
<a name="express"></a>
-## Azure Web PubSub CloudEvents handlers for express
+## Azure Web PubSub CloudEvents handlers for Express
When a WebSocket connection connects, the Web PubSub service transforms the connection lifecycle and messages into [events in CloudEvents format](concept-service-internals.md#workflow). This library provides an Express middleware to handle events representing the WebSocket connection's lifecycle and messages, as shown in the diagram below: ![The workflow diagram shows the workflow of using the event handler middleware.](media/sdk-reference/event-handler-middleware.png)
-Details about the terms used here are described in [Key concepts](#key-concepts) section.
- [Source code](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/web-pubsub/web-pubsub-express) | [Package (NPM)](https://www.npmjs.com/package/@azure/web-pubsub-express) |
-[API reference documentation]() |
+[API reference documentation](https://docs.microsoft.com/javascript/api/overview/azure/web-pubsub-express-readme?view=azure-node-latest) |
[Product documentation](./index.yml) | [Samples][samples_ref]
app.listen(3000, () =>
); ```
-### Key concepts
-
-#### Connection
-
-A connection, also known as a client or a client connection, represents an individual WebSocket connection connected to the Web PubSub service. When successfully connected, a unique connection ID is assigned to this connection by the Web PubSub service.
-
-#### Hub
-
-A hub is a logical concept for a set of client connections. Usually you use one hub for one purpose, for example, a chat hub, or a notification hub. When a client connection is created, it connects to a hub, and during its lifetime, it belongs to that hub. Different applications can share one Azure Web PubSub service by using different hub names.
-
-#### Group
-
-A group is a subset of connections to the hub. You can add a client connection to a group, or remove the client connection from the group, anytime you want. For example, when a client joins a chat room, or when a client leaves the chat room, this chat room can be considered to be a group. A client can join multiple groups, and a group can contain multiple clients.
-
-#### User
-
-Connections to Web PubSub can belong to one user. A user might have multiple connections, for example when a single user is connected across multiple devices or multiple browser tabs.
-
-#### Client Events
-
-Events are created during the lifecycle of a client connection. For example, a simple WebSocket client connection creates a `connect` event when it tries to connect to the service, a `connected` event when it successfully connected to the service, a `message` event when it sends messages to the service and a `disconnected` event when it disconnects from the service.
-
-#### Event Handler
-
-Event handler contains the logic to handle the client events. Event handler needs to be registered and configured in the service through the portal or Azure CLI beforehand. The place to host the event handler logic is generally considered as the server-side.
-
-### Examples
+### Express examples
#### Handle the `connect` request and assign `<userId>`
You can set the following environment variable to get the debug logs when using
export AZURE_LOG_LEVEL=verbose ```
-For more detailed instructions on how to enable logs, you can look at the [@azure/logger package docs](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/core/logger).
+For more detailed instructions on how to enable logs, see [@azure/logger package docs](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/core/logger).
#### Live Trace
azure-web-pubsub Reference Server Sdk Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-python.md
You can use this library in your app server side to manage the WebSocket client
![The workflow diagram shows the workflow of using the service client library.](media/sdk-reference/service-client-overflow.png) Use this library to:+ - Send messages to hubs and groups. - Send messages to particular users and connections. - Organize users and connections into groups.
Use this library to:
[Source code](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/webpubsub/azure-messaging-webpubsubservice) | [Package (Pypi)][package] | [API reference documentation](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/webpubsub/azure-messaging-webpubsubservice) | [Product documentation][webpubsubservice_docs]
-## _Disclaimer_
-
-_Azure SDK Python packages support for Python 2.7 is ending 01 January 2022. For more information and questions, please refer to https://github.com/Azure/azure-sdk-for-python/issues/20691_
+> [!IMPORTANT]
+> Azure SDK Python packages' support for Python 2.7 is ending on 01 January 2022. For more information and questions, please refer to https://github.com/Azure/azure-sdk-for-python/issues/20691.
## Getting started ### Prerequisites - Python 2.7, or 3.6 or later is required to use this package.-- You need an [Azure subscription][azure_sub], and a [Azure WebPubSub service instance][webpubsubservice_docs] to use this package.
+- You need an [Azure subscription][azure_sub] and an [Azure WebPubSub service instance][webpubsubservice_docs] to use this package.
- An existing Azure Web PubSub service instance. ### 1. Install the package
Or using the service endpoint and the access key:
``` Or using [Azure Active Directory][aad_doc]:+ 1. [pip][pip] install [`azure-identity`][azure_identity_pip] 2. Follow the document to [enable AAD authentication on your Webpubsub resource][aad_doc] 3. Update code to use [DefaultAzureCredential][default_azure_credential]
Or using [Azure Active Directory][aad_doc]:
>>> service = WebPubSubServiceClient(endpoint='<endpoint>', hub='hub', credential=DefaultAzureCredential()) ```
-## Key concepts
-
-### Connection
-
-A connection, also known as a client or a client connection, represents an individual WebSocket connection connected to the Web PubSub service. When successfully connected, a unique connection ID is assigned to this connection by the Web PubSub service.
-
-### Hub
-
-A hub is a logical concept for a set of client connections. Usually you use one hub for one purpose, for example, a chat hub, or a notification hub. When a client connection is created, it connects to a hub, and during its lifetime, it belongs to that hub. Different applications can share one Azure Web PubSub service by using different hub names.
-
-### Group
-
-A group is a subset of connections to the hub. You can add a client connection to a group, or remove the client connection from the group, anytime you want. For example, when a client joins a chat room, or when a client leaves the chat room, this chat room can be considered to be a group. A client can join multiple groups, and a group can contain multiple clients.
-
-### User
-
-Connections to Web PubSub can belong to one user. A user might have multiple connections, for example when a single user is connected across multiple devices or multiple browser tabs.
-
-### Message
-
-When the client is connected, it can send messages to the upstream application, or receive messages from the upstream application, through the WebSocket connection.
- ## Examples ### Broadcast messages in JSON format
The WebSocket client will receive text: `Hello world`.
>>> service = WebPubSubServiceClient.from_connection_string('<connection_string>', hub='hub') >>> service.send_to_all(message=io.StringIO('Hello World'), content_type='application/octet-stream') ```+ The WebSocket client will receive binary text: `b'Hello world'`. ## Troubleshooting
cdn Cdn How Caching Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-how-caching-works.md
Azure CDN supports the following HTTP cache-directive headers, which define cach
- Introduced in HTTP 1.1 to give web publishers more control over their content and to address the limitations of the `Expires` header. - Overrides the `Expires` header, if both it and `Cache-Control` are defined. - When used in an HTTP request from the client to the CDN POP, `Cache-Control` is ignored by all Azure CDN profiles, by default.-- When used in an HTTP response from the client to the CDN POP:
+- When used in an HTTP response from the origin server to the CDN POP:
- **Azure CDN Standard/Premium from Verizon** and **Azure CDN Standard from Microsoft** support all `Cache-Control` directives. - **Azure CDN Standard/Premium from Verizon** and **Azure CDN Standard from Microsoft** honor caching behaviors for Cache-Control directives in [RFC 7234 - Hypertext Transfer Protocol (HTTP/1.1): Caching (ietf.org)](https://tools.ietf.org/html/rfc7234#section-5.2.2.8). - **Azure CDN Standard from Akamai** supports only the following `Cache-Control` directives; all others are ignored:
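As a hedged sketch of the origin side of this exchange (not taken from the article), an ASP.NET Core origin might set a `Cache-Control` directive on its response like the following; the route, file path, and `max-age` value are illustrative assumptions:

```csharp
// Illustrative ASP.NET Core origin (not from the article): the origin sets Cache-Control
// on its response, and the CDN POP applies the directive as described above.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/static/logo.png", (HttpContext context) =>
{
    // Cache on shared caches (including the CDN POP) for one hour.
    context.Response.Headers["Cache-Control"] = "public, max-age=3600";
    return Results.File("wwwroot/logo.png", contentType: "image/png");
});

app.Run();
```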
chaos-studio Chaos Studio Tutorial Aks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aks-cli.md
When you create a chaos experiment, Chaos Studio creates a system-assigned manag
Give the experiment access to your resource(s) using the command below, replacing `$EXPERIMENT_PRINCIPAL_ID` with the principalId from the previous step and `$RESOURCE_ID` with the resource ID of the target resource (in this case, the AKS cluster resource ID). Run this command for each resource targeted in your experiment. ```azurecli-interactive
-az role assignment create --role "Azure Kubernetes Cluster User Role" --assignee-object-id $EXPERIMENT_PRINCIPAL_ID --scope $RESOURCE_ID
+az role assignment create --role "Azure Kubernetes Cluster Admin Role" --assignee-object-id $EXPERIMENT_PRINCIPAL_ID --scope $RESOURCE_ID
``` ## Run your experiment
cloud-services-extended-support In Place Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-overview.md
These are top scenarios involving combinations of resources, features, and Cloud
| Service | Configuration | Comments | |||| | [Azure AD Domain Services](../active-directory-domain-services/migrate-from-classic-vnet.md) | Virtual networks that contain Azure Active Directory Domain services. | Virtual network containing both Cloud Service deployment and Azure AD Domain services is supported. Customer first needs to separately migrate Azure AD Domain services and then migrate the virtual network left only with the Cloud Service deployment |
-| Cloud Service | Cloud Service with a deployment in a single slot only. | Cloud Services containing either a prod or staging slot deployment can be migrated |
+| Cloud Service | Cloud Service with a deployment in a single slot only. | Cloud Services containing a prod slot deployment can be migrated. Migrating the staging slot isn't recommended, because it can result in issues with retaining the service FQDN |
| Cloud Service | Deployment not in a publicly visible virtual network (default virtual network deployment) | A Cloud Service can be in a publicly visible virtual network, in a hidden virtual network or not in any virtual network. Cloud Services in a hidden virtual network and publicly visible virtual networks are supported for migration. Customers can use the Validate API to tell if a deployment is inside a default virtual network or not and thus determine if it can be migrated. | |Cloud Service | XML extensions (BGInfo, Visual Studio Debugger, Web Deploy, and Remote Debugging). | All XML extensions are supported for migration | Virtual Network | Virtual network containing multiple Cloud Services. | A virtual network containing multiple cloud services is supported for migration. The virtual network and all the Cloud Services within it will be migrated together to Azure Resource Manager. |
cognitive-services Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-neural-voice.md
- Previously updated : 01/23/2022+ Last updated : 02/18/2022
Custom Neural Voice is a text-to-speech feature that lets you create a one-of-a-
Based on the neural text-to-speech technology and the multilingual, multi-speaker, universal model, Custom Neural Voice lets you create synthetic voices that are rich in speaking styles, or adaptable cross languages. The realistic and natural sounding voice of Custom Neural Voice can represent brands, personify machines, and allow users to interact with applications conversationally. See the [supported languages](language-support.md#custom-neural-voice) for Custom Neural Voice. > [!NOTE]
-> Custom Neural Voice requires registration, and access to it is limited based on eligibility and use criteria. To use this feature, register your use cases by using the [intake form](https://aka.ms/customneural).
+> Custom Neural Voice access is limited based on eligibility and usage criteria. Request access on the [intake form](https://aka.ms/customneural).
## The basics of Custom Neural Voice
the recording samples of human voices. For more information, see [this Microsoft
You can adapt the neural text-to-speech engine to fit your needs. To create a custom neural voice, use [Speech Studio](https://speech.microsoft.com/customvoice) to upload the recorded audio and corresponding scripts, train the model, and deploy the voice to a custom endpoint. Custom Neural Voice can use text provided by the user to convert text into speech in real time, or generate audio content offline with text input. You can do this by using the [REST API](./rest-text-to-speech.md), the [Speech SDK](./get-started-text-to-speech.md), or the [web portal](https://speech.microsoft.com/audiocontentcreation).
-## Get started
+## Custom Neural Voice project types
-The following articles help you start using this feature:
+Speech Studio provides two Custom Neural Voice (CNV) project types: CNV Pro and CNV Lite.
+
+The following table summarizes key differences between the CNV Pro and CNV Lite project types.
+
+|**Items**|**Lite (Preview)**| **Pro**|
+||||
+|Target scenarios |Demonstration or evaluation |Professional scenarios like brand and character voices for chat bots, or audio content reading.|
+|Training data |Record online using Speech Studio |Bring your own data. Recording in a professional studio is recommended. |
+|Scripts for recording |Provided in Speech Studio |Use your own scripts that match the use case scenario. Microsoft provides [example scripts](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice/script) for reference. |
+|Required data size |20-50 utterances |300-2000 utterances|
+|Training time |Less than 1 compute hour| Approximately 20-40 compute hours |
+|Voice quality |Moderate quality|High quality |
+|Availability |Anyone can record samples online and train a model for demo and evaluation purposes. Full access to Custom Neural Voice is required if you want to deploy the CNV Lite model for business use. |Data upload is not restricted, but you can only train and deploy a CNV Pro model after access is approved. CNV Pro access is limited based on eligibility and usage criteria. Request access on the [intake form](https://aka.ms/customneural).|
+|Pricing |Per unit prices apply equally for both the CNV Lite and CNV Pro projects. Check the [pricing details here](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). |Per unit prices apply equally for both the CNV Lite and CNV Pro projects. Check the [pricing details here](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). |
+
+### Custom Neural Voice Lite (preview)
+
+Custom Neural Voice (CNV) Lite is a new project type in public preview. You can demo and evaluate Custom Neural Voice before investing in professional recordings to create a higher-quality voice.
+
+With a CNV Lite project, you record your voice online by reading 20-50 pre-defined scripts provided by Microsoft. After you've recorded at least 20 samples, you can start to train a model. Once the model is trained successfully, you can review the model and check out 20 output samples produced with another set of pre-defined scripts.
+
+Full access to Custom Neural Voice is required if you want to deploy a CNV Lite model and use it beyond reading the pre-defined scripts. A verbal statement recorded by the voice talent is also required before you can deploy the model for your business use.
+
+### Custom Neural Voice Pro
+
+Custom Neural Voice (CNV) Pro allows you to upload your training data collected through professional recording studios and create a higher-quality voice that is nearly indistinguishable from its human samples. Training a voice in a CNV Pro project is restricted to those who are approved.
+
+Review these CNV Pro articles to learn more and get started.
-* To get started with Custom Neural Voice and create a project, see [Get started with Custom Neural Voice](how-to-custom-voice.md).
* To prepare and upload your audio data, see [Prepare training data](how-to-custom-voice-prepare-data.md).
-* To train and deploy your models, see [Train your voice model](how-to-custom-voice-create-voice.md) and [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md).
+* To train your model, see [Train your voice model](how-to-custom-voice-create-voice.md).
+* To deploy your model and use it in your apps, see [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md).
+* To learn how to prepare the script and record your voice samples, see [How to record voice samples](record-custom-voice-samples.md).
## Terms and definitions
-| **Term** | **Definition** |
+| **Term** | **Definition** |
|||
-| Voice model | A text-to-speech model that can mimic the unique vocal characteristics of a target speaker. A *voice model* is also known as a *voice font* or *synthetic voice*. A voice model is a set of parameters in binary format that is not human readable and does not contain audio recordings. It can't be reverse engineered to derive or construct the audio of a human voice. |
+| Voice model | A text-to-speech model that can mimic the unique vocal characteristics of a target speaker. A *voice model* is also known as a *voice font* or *synthetic voice*. A voice model is a set of parameters in binary format that isn't human readable and doesn't contain audio recordings. It can't be reverse engineered to derive or construct the audio of a human voice. |
| Voice talent | Individuals or target speakers whose voices are recorded and used to create voice models. These voice models are intended to sound like the voice talent's voice.| | Standard text-to-speech | The standard, or "traditional," method of text-to-speech. This method breaks down spoken language into phonetic snippets so that they can be remixed and matched by using classical programming or statistical methods.| | Neural text-to-speech | This method synthesizes speech by using deep neural networks. These networks have "learned" the way phonetics are combined in natural human speech, rather than using procedural programming or statistical methods. In addition to the recordings of a target voice talent, neural text-to-speech uses a source library or base model that is built with voice recordings from many different speakers. |
The following articles help you start using this feature:
## Responsible use of AI
-To learn how to use Custom Neural Voice responsibly, see the [transparency note](/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context). Transparency notes are intended to help you understand how the AI technology from Microsoft works, and the choices system owners can make that influence system performance and behavior. Transparency notes also discuss the importance of thinking about the whole system, including the technology, the people, and the environment.
+To learn how to use Custom Neural Voice responsibly, check the following articles.
+
+* [Transparency note and use cases for Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
+* [Characteristics and limitations for using Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
+* [Limited access to Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
+* [Guidelines for responsible deployment of synthetic voice technology](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-guidelines-responsible-deployment-synthetic?context=/azure/cognitive-services/speech-service/context/context)
+* [Disclosure for voice talent](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/cognitive-services/speech-service/context/context)
+* [Disclosure design guidelines](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-guidelines?context=/azure/cognitive-services/speech-service/context/context)
+* [Disclosure design patterns](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-patterns?context=/azure/cognitive-services/speech-service/context/context)
+* [Code of Conduct for Text-to-Speech integrations](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=/azure/cognitive-services/speech-service/context/context)
+* [Data, privacy, and security for Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)
## Next steps
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
- Previously updated : 01/23/2022+ Last updated : 02/18/2022
In [Prepare training data](how-to-custom-voice-prepare-data.md), you learned about the different data types you can use to train a custom neural voice, and the different format requirements. After you've prepared your data and the voice talent verbal statement, you can start to upload them to [Speech Studio](https://aka.ms/custom-voice-portal). In this article, you learn how to train a custom neural voice through the Speech Studio portal.
+> [!NOTE]
+> See [Custom Neural Voice project types](custom-neural-voice.md#custom-neural-voice-project-types) for information about capabilities, requirements, and differences between Custom Neural Voice Pro and Custom Neural Voice Lite projects. This article focuses on the creation of a professional Custom Neural Voice using the Pro project.
+ ## Prerequisites * [Create a custom voice project](how-to-custom-voice.md)
All data you upload must meet the requirements for the data type that you choose
> [!NOTE] > - Standard subscription (S0) users can upload five data files simultaneously. If you reach the limit, wait until at least one of your data files finishes importing. Then try again.
-> - The maximum number of data files allowed to be imported per subscription is 10 .zip files for free subscription (F0) users, and 500 for standard subscription (S0) users.
+> - The maximum number of data files allowed to be imported per subscription is 500 .zip files for standard subscription (S0) users.
Data files are automatically validated when you select **Submit**. Data validation includes series of checks on the audio files to verify their file format, size, and sampling rate. If there are any errors, fix them and submit again.
A higher signal-to-noise ratio (SNR) indicates lower noise in your audio. You ca
Consider re-recording any utterances with low pronunciation scores or poor signal-to-noise ratios. If you can't re-record, consider excluding those utterances from your data.
+### Typical data issues
+ On **Data details**, you can check the data details of the training set. If there are any typical issues with the data, follow the instructions in the message that appears, to fix them before training.
-The issues are divided into three types. Refer to the following tables to check the respective types of errors. Data with these errors will be excluded during training.
+The issues are divided into three types. Refer to the following tables to check the respective types of errors.
+
+**Auto-rejected**
+
+Data with these errors will be excluded during training.
| Category | Name | Description | | | -- | |
The issues are divided into three types. Refer to the following tables to check
| Audio | Too long audio| Audio duration is longer than 30 seconds. Split the long audio into multiple files. It's a good idea to make utterances shorter than 15 seconds.| | Audio | No valid audio| No valid audio is found in this dataset. Check your audio data and upload again.|
+**Auto-fixed**
+ The following errors are fixed automatically, but you should confirm that the fixes have been made. | Category | Name | Description |
The following errors are fixed automatically, but you should confirm that the fi
| Mismatch |Silence auto fixed |The start silence is detected to be shorter than 100 ms, and has been extended to 100 ms automatically. Download the normalized dataset and review it. | | Mismatch |Silence auto fixed | The end silence is detected to be shorter than 100 ms, and has been extended to 100 ms automatically. Download the normalized dataset and review it.|
+**Manual check required**
+ Unresolved errors listed in the next table affect the quality of training, but data with these errors won't be excluded during training. For higher-quality training, it's a good idea to fix these errors manually. | Category | Name | Description |
cognitive-services How To Custom Voice Prepare Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-prepare-data.md
Previously updated : 11/04/2019 Last updated : 02/18/2022
When you're ready to create a custom Text-to-Speech voice for your application, the first step is to gather audio recordings and associated scripts to start training the voice model. The Speech service uses this data to create a unique voice tuned to match the voice in the recordings. After you've trained the voice, you can start synthesizing speech in your applications.
+> [!NOTE]
+> See [Custom Neural Voice project types](custom-neural-voice.md#custom-neural-voice-project-types) for information about capabilities, requirements, and differences between Custom Neural Voice Pro and Custom Neural Voice Lite projects. This article focuses on the creation of a professional Custom Neural Voice using the Pro project.
+ ## Voice talent verbal statement
-Before you can train your own Text-to-Speech voice model, you'll need audio recordings and the associated text transcriptions. On this page, we'll review data types, how they are used, and how to manage each.
+Before you can train your own Text-to-Speech voice model, you'll need audio recordings and the associated text transcriptions. On this page, we'll review data types, how they're used, and how to manage each.
> [!IMPORTANT] > To train a neural voice, you must create a voice talent profile with an audio file recorded by the voice talent consenting to the usage of their speech data to train a custom voice model. When preparing your recording script, make sure you include the statement sentence. You can find the statement in multiple languages [here](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice/script/verbal-statement-all-locales.txt). The language of the verbal statement must be the same as your recording. You need to upload this audio file to the Speech Studio as shown below to create a voice talent profile, which is used to verify against your training data when you create a voice model. Read more about the [voice talent verification](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) here.
This table lists data types and how each is used to create a custom Text-to-Spee
| Data type | Description | When to use | Additional processing required | | | -- | -- | | | **Individual utterances + matching transcript** | A collection (.zip) of audio files (.wav) as individual utterances. Each audio file should be 15 seconds or less in length, paired with a formatted transcript (.txt). | Professional recordings with matching transcripts | Ready for training. |
-| **Long audio + transcript (beta)** | A collection (.zip) of long, unsegmented audio files (longer than 20 seconds), paired with a collection (.zip) of transcripts that contains all spoken words. | You have audio files and matching transcripts, but they are not segmented into utterances. | Segmentation (using batch transcription).<br>Audio format transformation where required. |
-| **Audio only (beta)** | A collection (.zip) of audio files without a transcript. | You only have audio files available, without transcripts. | Segmentation + transcript generation (using batch transcription).<br>Audio format transformation where required.|
+| **Long audio + transcript (beta)** | A collection (.zip) of long, unsegmented audio files (.wav or .mp3, longer than 20 seconds), paired with a collection (.zip) of transcripts that contains all spoken words. | You have audio files and matching transcripts, but they aren't segmented into utterances. | Segmentation (using batch transcription).<br>Audio format transformation where required. |
+| **Audio only (beta)** | A collection (.zip) of audio files (.wav or .mp3) without a transcript. | You only have audio files available, without transcripts. | Segmentation + transcript generation (using batch transcription).<br>Audio format transformation where required.|
Files should be grouped by type into a dataset and uploaded as a zip file. Each dataset can only contain a single data type. > [!NOTE]
-> The maximum number of datasets allowed to be imported per subscription is 10 zip files for free subscription (F0) users and 500 for standard subscription (S0) users.
+> The maximum number of datasets allowed to be imported per subscription is 500 zip files for standard subscription (S0) users.
+>
+> For the two beta options, only these languages are supported: Chinese (Mandarin, Simplified), English (India), English (United Kingdom), English (United States), French (France), German (Germany), Italian (Italy), Japanese (Japan), Portuguese (Brazil), and Spanish (Mexico).
## Individual utterances + matching transcript
To produce a good voice model, create the recordings in a quiet room with a high
### Audio files
-Each audio file should contain a single utterance (a single sentence or a single turn of a dialog system), less than 15 seconds long. All files must be in the same spoken language. Multi-language custom Text-to-Speech voices are not supported, with the exception of the Chinese-English bi-lingual. Each audio file must have a unique numeric filename with the filename extension .wav.
+Each audio file should contain a single utterance (a single sentence or a single turn of a dialog system), less than 15 seconds long. All files must be in the same spoken language. Multi-language custom Text-to-Speech voices aren't supported, with the exception of the Chinese-English bi-lingual. Each audio file must have a unique numeric filename with the filename extension .wav.
Follow these guidelines when preparing audio. | Property | Value | | -- | -- | | File format | RIFF (.wav), grouped into a .zip file |
-| Sampling rate | At least 16,000 Hz. For creating a neural voice, 24,000 Hz is required. |
-| Sample format | PCM, 16-bit |
| File name | Numeric, with .wav extension. No duplicate file names allowed. |
+| Sampling rate | For creating a custom neural voice, 24,000 Hz is required. |
+| Sample format | PCM, 16-bit |
| Audio length | Shorter than 15 seconds | | Archive format | .zip | | Maximum archive size | 2048 MB | > [!NOTE]
-> .wav files with a sampling rate lower than 16,000 Hz will be rejected. If a .zip file contains .wav files with different sample rates, only those equal to or higher than 16,000 Hz will be imported. The portal currently imports .zip archives up to 2048 MB. However, multiple archives can be uploaded.
-
-> [!NOTE]
-> The default sampling rate for a custom neural voice is 24,000 Hz. Your .wav files with a sampling rate lower than 16,000 Hz will be up-sampled to 24,000 Hz to train a neural voice. It's recommended that you should use a sample rate of 24,000 Hz for your training data.
+> The default sampling rate for a custom neural voice is 24,000 Hz. Audio files with a sampling rate lower than 16,000 Hz will be rejected. If a .zip file contains .wav files with different sample rates, only those equal to or higher than 16,000 Hz will be imported. Your audio files with a sampling rate higher than 16,000 Hz and lower than 24,000 Hz will be up-sampled to 24,000 Hz to train a neural voice. It's recommended that you use a sample rate of 24,000 Hz for your training data.
### Transcripts
The transcription file is a plain text file. Use these guidelines to prepare you
| Property | Value | | -- | -- | | File format | Plain text (.txt) |
-| Encoding format | ANSI/ASCII, UTF-8, UTF-8-BOM, UTF-16-LE, or UTF-16-BE. For zh-CN, ANSI/ASCII and UTF-8 encodings are not supported. |
+| Encoding format | ANSI, ASCII, UTF-8, UTF-8-BOM, UTF-16-LE, or UTF-16-BE. For zh-CN, ANSI and ASCII encoding aren't supported. |
| # of utterances per line | **One** - Each line of the transcription file should contain the name of one of the audio files, followed by the corresponding transcription. The file name and transcription should be separated by a tab (\t). | | Maximum file size | 2048 MB |
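For example, a transcript file covering two hypothetical audio files (0001.wav and 0002.wav) might look like the following, with the file name and transcription separated by a tab character:

```text
0001.wav	Welcome to my custom neural voice.
0002.wav	Thanks for listening to this sample.
```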
Follow these guidelines when preparing audio for segmentation.
| -- | -- | | File format | RIFF (.wav) with a sampling rate of at least 16 khz-16-bit in PCM or .mp3 with a bit rate of at least 256 KBps, grouped into a .zip file | | File name | ASCII and Unicode characters supported. No duplicate names allowed. |
+| Sampling rate | For creating a custom neural voice, 24,000 Hz is required. |
| Audio length | Longer than 20 seconds | | Archive format | .zip | | Maximum archive size | 2048 MB | > [!NOTE]
-> The default sampling rate for a custom neural voice is 24,000 Hz. Your .wav files with a sampling rate lower than 16,000 Hz will be up sampled to 24,000 Hz to train a neural voice. It's recommended that you should use a sample rate of 24,000 Hz for your training data.
+> The default sampling rate for a custom neural voice is 24,000 Hz. Audio files with a sampling rate lower than 16,000 Hz will be rejected. Your audio files with a sampling rate higher than 16,000 Hz and lower than 24,000 Hz will be up-sampled to 24,000 Hz to train a neural voice. It's recommended that you use a sample rate of 24,000 Hz for your training data.
All audio files should be grouped into a zip file. It's OK to put .wav files and .mp3 files into one audio zip. For example, you can upload a zip file containing an audio file named 'kingstory.wav', 45 seconds long, and another audio file named 'queenstory.mp3', 200 seconds long. All .mp3 files will be transformed into the .wav format after processing.
Transcripts must be prepared to the specifications listed in this table. Each au
| -- | -- | | File format | Plain text (.txt), grouped into a .zip | | File name | Use the same name as the matching audio file |
-| Encoding format | UTF-8-BOM only |
+| Encoding format |ANSI, ASCII, UTF-8, UTF-8-BOM, UTF-16-LE, or UTF-16-BE. For zh-CN, ANSI and ASCII encoding aren't supported. |
| # of utterances per line | No limit | | Maximum file size | 2048 MB |
-All transcripts files in this data type should be grouped into a zip file. For example, you have uploaded a zip file containing an audio file named 'kingstory.wav', 45 seconds long, and another one named 'queenstory.mp3', 200 seconds long. You will need to upload another zip file containing two transcripts, one named 'kingstory.txt', the other one 'queenstory.txt'. Within each plain text file, you will provide the full correct transcription for the matching audio.
+All transcript files in this data type should be grouped into a zip file. For example, you've uploaded a zip file containing an audio file named 'kingstory.wav', 45 seconds long, and another one named 'queenstory.mp3', 200 seconds long. You'll need to upload another zip file containing two transcripts, one named 'kingstory.txt', the other one 'queenstory.txt'. Within each plain text file, you'll provide the full correct transcription for the matching audio.
-After your dataset is successfully uploaded, we will help you segment the audio file into utterances based on the transcript provided. You can check the segmented utterances and the matching transcripts by downloading the dataset. Unique IDs will be assigned to the segmented utterances automatically. It's important that you make sure the transcripts you provide are 100% accurate. Errors in the transcripts can reduce the accuracy during the audio segmentation and further introduce quality loss in the training phase that comes later.
+After your dataset is successfully uploaded, we'll help you segment the audio file into utterances based on the transcript provided. You can check the segmented utterances and the matching transcripts by downloading the dataset. Unique IDs will be assigned to the segmented utterances automatically. It's important that you make sure the transcripts you provide are 100% accurate. Errors in the transcripts can reduce the accuracy during the audio segmentation and further introduce quality loss in the training phase that comes later.
## Audio only (beta)
Follow these guidelines when preparing audio.
| -- | -- | | File format | RIFF (.wav) with a sampling rate of at least 16 khz-16-bit in PCM or .mp3 with a bit rate of at least 256 KBps, grouped into a .zip file | | File name | ASCII and Unicode characters supported. No duplicate name allowed. |
-| Audio length | Longer than 20 seconds |
+| Sampling rate | For creating a custom neural voice, 24,000 Hz is required. |
+| Audio length | No limit |
| Archive format | .zip | | Maximum archive size | 2048 MB | > [!NOTE]
-> The default sampling rate for a custom neural voice is 24,000 Hz. Your .wav files with a sampling rate lower than 16,000 Hz will be up sampled to 24,000 Hz to train a neural voice. It's recommended that you should use a sample rate of 24,000 Hz for your training data.
+> The default sampling rate for a custom neural voice is 24,000 Hz. Your audio files with a sampling rate higher than 16,000 Hz and lower than 24,000 Hz will be up-sampled to 24,000 Hz to train a neural voice. It's recommended that you use a sample rate of 24,000 Hz for your training data.
-All audio files should be grouped into a zip file. Once your dataset is successfully uploaded, we will help you segment the audio file into utterances based on our speech batch transcription service. Unique IDs will be assigned to the segmented utterances automatically. Matching transcripts will be generated through speech recognition. All .mp3 files will be transformed into the .wav format after processing. You can check the segmented utterances and the matching transcripts by downloading the dataset.
+All audio files should be grouped into a zip file. Once your dataset is successfully uploaded, we'll help you segment the audio file into utterances based on our speech batch transcription service. Unique IDs will be assigned to the segmented utterances automatically. Matching transcripts will be generated through speech recognition. All .mp3 files will be transformed into the .wav format after processing. You can check the segmented utterances and the matching transcripts by downloading the dataset.
## Next steps
cognitive-services How To Custom Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice.md
- Previously updated : 01/23/2022+ Last updated : 02/18/2022
[Custom Neural Voice](https://aka.ms/customvoice) is a set of online tools that you use to create a recognizable, one-of-a-kind voice for your brand. All it takes to get started are a handful of audio files and the associated transcriptions. See if Custom Neural Voice supports your [language](language-support.md#custom-neural-voice) and [region](regions.md#custom-neural-voices). > [!NOTE]
-> Microsoft is committed to designing responsible AI. For that reason, we have limited the use of Custom Neural Voice. You can gain access to the technology only after your applications are reviewed and you have committed to using it in alignment with our responsible AI principles. Learn more about our [policy on the limit access](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext), and [apply here](https://aka.ms/customneural).
+> Custom Neural Voice Pro can be used to create higher-quality models that are indistinguishable from human recordings. For access you must commit to using it in alignment with our responsible AI principles. Learn more about our [policy on the limited access](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and [apply here](https://aka.ms/customneural).
+>
+> With [Custom Neural Voice Lite](custom-neural-voice.md#custom-neural-voice-project-types) (public preview), you can create a model for demonstration and evaluation purposes. No application is required. Microsoft restricts and selects the recording and testing samples for use with Custom Neural Voice Lite. You must apply for full access to Custom Neural Voice in order to deploy and use the Custom Neural Voice Lite model for business purposes.
## Set up your Azure account
Once you've created an Azure account and a Speech service subscription, you'll n
1. If you want to switch to another Speech subscription, select the **cog** icon at the top. > [!NOTE]
-> You must have an F0 or S0 Speech service key created in Azure before you can use the service. Custom Neural Voice only supports the S0 tier.
+> Custom Neural Voice training is currently only available in East US, Southeast Asia, and UK South, with the S0 tier. Make sure you select the right Speech resource if you would like to create a neural voice.
## Create a project
Content like data, models, tests, and endpoints are organized into projects in S
To create a custom voice project: 1. Sign in to [Speech Studio](https://speech.microsoft.com).
-1. Select **Text-to-Speech** > **Custom Voice** > **Create project**.
-1. Follow the instructions provided by the wizard to create your project.
-1. After you've created a project, you see four tabs: **Set up voice talent**, **Prepare training data**, **Train model**, and **Deploy model**. See [Prepare data for Custom Neural Voice](how-to-custom-voice-prepare-data.md) to set up the voice talent, and proceed to training data.
+1. Select **Text-to-Speech** > **Custom Voice** > **Create project**.
-## Tips for creating a custom neural voice
+ See [Custom Neural Voice project types](custom-neural-voice.md#custom-neural-voice-project-types) for information about capabilities, requirements, and differences between Custom Neural Voice Pro and Custom Neural Voice Lite projects.
+
+1. After you've created a CNV Pro project, you'll see four tabs: **Set up voice talent**, **Prepare training data**, **Train model**, and **Deploy model**. See [Prepare data for Custom Neural Voice](how-to-custom-voice-prepare-data.md) to set up the voice talent, and proceed to training data.
+
+## Tips for creating a professional custom neural voice
Creating a great custom neural voice requires careful quality control in each step, from voice design and data preparation, to the deployment of the voice model to your system. The following sections discuss some key steps to take when you're creating a custom neural voice for your organization.
First, design a persona of the voice that represents your brand by using a perso
### Script selection
-Carefully select the recording script to represent the user scenarios for your voice. For example, you can use the phrases from bot conversations as your recording script if you are creating a customer service bot. Include different sentence types in your scripts, including statements, questions, and exclamations.
+Carefully select the recording script to represent the user scenarios for your voice. For example, you can use the phrases from bot conversations as your recording script if you're creating a customer service bot. Include different sentence types in your scripts, including statements, questions, and exclamations.
### Preparing training data
After the recordings are ready, follow [Prepare training data](how-to-custom-voi
### Training
-After you have prepared the training data, go to [Speech Studio](https://aka.ms/custom-voice) to create your custom neural voice. Select at least 300 utterances to create a custom neural voice. A series of data quality checks are automatically performed when you upload them. To build high-quality voice models, you should fix any errors and submit again.
+After you've prepared the training data, go to [Speech Studio](https://aka.ms/custom-voice) to create your custom neural voice. Select at least 300 utterances to create a custom neural voice. A series of data quality checks are automatically performed when you upload them. To build high-quality voice models, you should fix any errors and submit again.
### Testing
cognitive-services How To Deploy And Use Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-deploy-and-use-endpoint.md
Previously updated : 02/09/2022 Last updated : 02/18/2022
After you've successfully created and trained your voice model, you deploy it to a custom neural voice endpoint. Use the custom neural voice endpoint instead of the usual text-to-speech endpoint for requests with the REST API. Use the speech studio to create a custom neural voice endpoint. Use the REST API to suspend or resume a custom neural voice endpoint.
+> [!NOTE]
+> See [Custom Neural Voice project types](custom-neural-voice.md#custom-neural-voice-project-types) for information about capabilities, requirements, and differences between Custom Neural Voice Pro and Custom Neural Voice Lite projects. This article focuses on the creation of a professional Custom Neural Voice using the Pro project.
+ ## Create a custom neural voice endpoint To create a custom neural voice endpoint: 1. On the **Deploy model** tab, select **Deploy model**.
+1. Select a voice model that you want to associate with this endpoint.
1. Enter a **Name** and **Description** for your custom endpoint.
-1. Select a voice model that you want to associate with this endpoint.
1. Select **Deploy** to create your endpoint. In the endpoint table, you now see an entry for your new endpoint. It might take a few minutes to instantiate a new endpoint. When the status of the deployment is **Succeeded**, the endpoint is ready for use.
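Once the status is **Succeeded**, your application sends synthesis requests to the custom endpoint instead of the prebuilt-voices endpoint. Here's a minimal C# sketch, not the article's own sample: it assumes the endpoint URL follows the standard text-to-speech REST shape with a `deploymentId` query parameter, and the region, endpoint ID, voice name, and key are placeholders you copy from Speech Studio and your Speech resource.

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Text;

// Placeholders: copy the exact endpoint URL and deployed voice name from the
// Deploy model tab in Speech Studio, and the key from your Speech resource.
var endpoint = "https://eastus.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId=<your-endpoint-id>";
var ssml = @"<speak version='1.0' xml:lang='en-US'>
  <voice name='<your-custom-voice-name>'>Hello from my custom neural voice.</voice>
</speak>";

using var http = new HttpClient();
using var request = new HttpRequestMessage(HttpMethod.Post, endpoint)
{
    Content = new StringContent(ssml, Encoding.UTF8, "application/ssml+xml")
};
request.Headers.Add("Ocp-Apim-Subscription-Key", "<your-speech-key>");
request.Headers.Add("X-Microsoft-OutputFormat", "riff-24khz-16bit-mono-pcm");
request.Headers.Add("User-Agent", "custom-voice-sample");

using HttpResponseMessage response = await http.SendAsync(request);
response.EnsureSuccessStatusCode();

// The response body is the synthesized audio in the requested output format.
byte[] audio = await response.Content.ReadAsByteArrayAsync();
await File.WriteAllBytesAsync("output.wav", audio);
Console.WriteLine($"Wrote {audio.Length} bytes to output.wav.");
```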
cognitive-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-pronunciation-assessment.md
A typical pronunciation assessment result in JSON:
* Try out the [pronunciation assessment demo](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment/BrowserJS) and watch the [video tutorial](https://www.youtube.com/watch?v=zFlwm7N4Awc) of pronunciation assessment. ::: zone pivot="programming-language-csharp"
-* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#L949) on GitHub for pronunciation assessment.
+* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs) on GitHub for pronunciation assessment.
::: zone-end ::: zone pivot="programming-language-cpp"
-* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp#L633) on GitHub for pronunciation assessment.
+* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp) on GitHub for pronunciation assessment.
::: zone-end ::: zone pivot="programming-language-java"
-* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/jre/console/src/com/microsoft/cognitiveservices/speech/samples/console/SpeechRecognitionSamples.java#L697) on GitHub for pronunciation assessment.
+* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/jre/console/src/com/microsoft/cognitiveservices/speech/samples/console/SpeechRecognitionSamples.java) on GitHub for pronunciation assessment.
::: zone-end ::: zone pivot="programming-language-python"
-* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py#L576) on GitHub for pronunciation assessment.
+* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py) on GitHub for pronunciation assessment.
::: zone-end ::: zone pivot="programming-language-objectivec"
-* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m#L642) on GitHub for pronunciation assessment.
+* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m) on GitHub for pronunciation assessment.
::: zone-end+
cognitive-services Record Custom Voice Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/record-custom-voice-samples.md
- Previously updated : 04/13/2020+ Last updated : 02/18/2022
-# Record voice samples to create a custom neural voice
+# Record voice samples to create a professional custom neural voice
+
+This article gives you instructions for preparing high-quality voice samples to create a professional voice model using a Custom Neural Voice Pro project.
+
+> [!NOTE]
+> See [Custom Neural Voice project types](custom-neural-voice.md#custom-neural-voice-project-types) for information about capabilities, requirements, and differences between Custom Neural Voice Pro and Custom Neural Voice Lite projects.
Creating a high-quality production custom neural voice from scratch isn't a casual undertaking. The central component of a custom neural voice is a large collection of audio samples of human speech. It's vital that these audio recordings be of high quality. Choose a voice talent who has experience making these kinds of recordings, and have them recorded by a recording engineer using professional equipment.
Before you can make these recordings, though, you need a script: the words that
Many small but important details go into creating a professional voice recording. This guide is a roadmap for a process that will help you get good, consistent results.
-> [!NOTE]
-> To train a neural voice, you must create a voice talent profile with an audio file recorded by the voice talent consenting to the usage of their speech data to train a custom voice model. When preparing your recording script, make sure you include the statement sentence. You can find the statement in multiple languages [here](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice/script/verbal-statement-all-locales.txt). The language of the verbal statement must be the same as your recording. You need to upload this audio file to the Speech Studio as shown below to create a voice talent profile, which is used to verify against your training data when you create a voice model. Read more about the [voice talent verification](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) here.
->
- :::image type="content" source="media/custom-voice/upload-verbal-statement.png" alt-text="Upload voice talent statement":::
->
-> Custom Neural Voice is available with limited access. Make sure you understand the [responsible AI requirements](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and [apply the access here](https://aka.ms/customneural).
- ## Voice recording roles There are four basic roles in a custom neural voice recording project:
Leave enough space after each row to write notes. Be sure that no utterance is s
Print three copies of the script: one for the talent, one for the engineer, and one for the director (you). Use a paper clip instead of staples: an experienced voice artist will separate the pages to avoid making noise as the pages are turned.
+### Voice talent statement
+
+To train a neural voice, you must create a voice talent profile with an audio file recorded by the voice talent consenting to the usage of their speech data to train a custom voice model. When preparing your recording script, make sure you include the statement sentence.
+
+You can find the statement in multiple languages on [GitHub](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice/script/verbal-statement-all-locales.txt). The language of the verbal statement must be the same as your recording. You need to upload this audio file to Speech Studio as shown below to create a voice talent profile, which is verified against your training data when you create a voice model.
++
+Read more about [voice talent verification](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext).
+ ### Legalities Under copyright law, an actor's reading of copyrighted text might be a performance for which the author of the work should be compensated. This performance won't be recognizable in the final product, the custom neural voice. Even so, the legality of using a copyrighted work for this purpose isn't well established. Microsoft can't provide legal advice on this issue; consult your own counsel.
You can refer to below specification to prepare for the audio samples as best pr
For high-quality training results, avoiding audio errors is highly recommended. The errors of audio normally involve the following categories: - Audio file name doesn't match the script ID.-- War file has an invalid format and cannot be read.-- Audio sampling rate is lower than 16 KHz. Also, it is recommended that wav file sampling rate should be equal or higher than 24 KHz for high-quality neural voice.
+- WAV file has an invalid format and can't be read.
+- Audio sampling rate is lower than 16 KHz. Also, it's recommended that the WAV file sampling rate be equal to or higher than 24 KHz for a high-quality neural voice (a quick format-check sketch follows this list).
- Volume peak isn't within the range of -3 dB (70% of max volume) to -6 dB (50%). - Waveform overflow. That is, the waveform at its peak value is cut and thus not complete.
For high-quality training results, avoiding audio errors is highly recommended.
![overall volume](media/custom-voice/overall-volume.png) -- No silence before the first word or after the last word. Also, the start or end silence should not be longer than 200 ms or shorter than 100 ms.
+- No silence before the first word or after the last word. Also, the start or end silence shouldn't be longer than 200 ms or shorter than 100 ms.
![No silence](media/custom-voice/no-silence.png)
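To sanity-check files against this guidance before you upload them, here's a rough C# sketch (not part of the Speech tooling) that reads the sample rate, bit depth, and channel count from a WAV header. It assumes the canonical PCM layout where the `fmt ` chunk immediately follows the RIFF header; files with extra header chunks need a proper audio library instead.

```csharp
using System;
using System.IO;

// Quick format check for a canonical PCM WAV file (fmt chunk right after the RIFF header).
CheckWav(args.Length > 0 ? args[0] : "sample.wav");

static void CheckWav(string path)
{
    using var reader = new BinaryReader(File.OpenRead(path));
    reader.BaseStream.Seek(22, SeekOrigin.Begin);
    ushort channels = reader.ReadUInt16();       // byte offset 22
    uint sampleRate = reader.ReadUInt32();       // byte offset 24
    reader.BaseStream.Seek(34, SeekOrigin.Begin);
    ushort bitsPerSample = reader.ReadUInt16();  // byte offset 34

    Console.WriteLine($"{Path.GetFileName(path)}: {sampleRate} Hz, {bitsPerSample}-bit, {channels} channel(s)");
    if (sampleRate < 24000)
        Console.WriteLine("  Warning: below the 24 KHz recommended for a high-quality neural voice.");
    if (channels != 1)
        Console.WriteLine("  Warning: training audio should be monophonic.");
}
```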
Set levels so that most of the available dynamic range of digital recording is u
![A good recording waveform](media/custom-voice/good-recording.png)
-Here, most of the range (height) is used, but the highest peaks of the signal do not reach the top or bottom of the window. You can also see that the silence in the recording approximates a thin horizontal line, indicating a low noise floor. This recording has acceptable dynamic range and signal-to-noise ratio.
+Here, most of the range (height) is used, but the highest peaks of the signal don't reach the top or bottom of the window. You can also see that the silence in the recording approximates a thin horizontal line, indicating a low noise floor. This recording has acceptable dynamic range and signal-to-noise ratio.
Record directly into the computer via a high-quality audio interface or a USB port, depending on the mic you're using. For analog, keep the audio chain simple: mic, preamp, audio interface, computer. You can license both [Avid Pro Tools](https://www.avid.com/en/pro-tools) and [Adobe Audition](https://www.adobe.com/products/audition.html) monthly at a reasonable cost. If your budget is extremely tight, try the free [Audacity](https://www.audacityteam.org/).
-Record at 44.1 KHz 16 bit monophonic (CD quality) or better. Current state-of-the-art is 48 KHz 24 bit, if your equipment supports it. You will down-sample your audio to 24 KHz 16-bit before you submit it to Speech Studio. Still, it pays to have a high-quality original recording in the event edits are needed.
+Record at 44.1 KHz 16 bit monophonic (CD quality) or better. Current state-of-the-art is 48 KHz 24 bit, if your equipment supports it. You'll down-sample your audio to 24 KHz 16-bit before you submit it to Speech Studio. Still, it pays to have a high-quality original recording in the event edits are needed.
Ideally, have different people serve in the roles of director, engineer, and talent. Don't try to do it all yourself. In a pinch, one person can be both the director and the engineer.
To avoid wasting studio time, run through the script with your voice talent befo
> [!NOTE] > Most recording studios offer electronic display of scripts in the recording booth. In this case, type your run-through notes directly into the script's document. You'll still want a paper copy to take notes on during the session, though. Most engineers will want a hard copy, too. And you'll still want a third printed copy as a backup for the talent in case the computer is down.
-Your voice talent might ask which word you want emphasized in an utterance (the "operative word"). Tell them that you want a natural reading with no particular emphasis. Emphasis can be added when speech is synthesized; it should not be a part of the original recording.
+Your voice talent might ask which word you want emphasized in an utterance (the "operative word"). Tell them that you want a natural reading with no particular emphasis. Emphasis can be added when speech is synthesized; it shouldn't be a part of the original recording.
-Direct the talent to pronounce words distinctly. Every word of the script should be pronounced as written. Sounds should not be omitted or slurred together, as is common in casual speech, *unless they have been written that way in the script*.
+Direct the talent to pronounce words distinctly. Every word of the script should be pronounced as written. Sounds shouldn't be omitted or slurred together, as is common in casual speech, *unless they have been written that way in the script*.
|Written text|Unwanted casual pronunciation| |-|-|
Use your notes to find the exact takes you want, and then use a sound editing ut
Leave only about 0.2 second of silence at the beginning and end of each clip, except for the first. That file should start with a full five seconds of silence. Do not use an audio editor to "zero out" silent parts of the file. Including the "room tone" will help the algorithms compensate for any residual background noise.
-Listen to each file carefully. At this stage, you can edit out small unwanted sounds that you missed during recording, like a slight lip smack before a line, but be careful not to remove any actual speech. If you can't fix a file, remove it from your dataset and note that you have done so.
+Listen to each file carefully. At this stage, you can edit out small unwanted sounds that you missed during recording, like a slight lip smack before a line, but be careful not to remove any actual speech. If you can't fix a file, remove it from your dataset and note that you've done so.
Convert each file to 16 bits and a sample rate of 24 KHz before saving. If you recorded the studio chatter, remove the second channel. Save each file in WAV format, naming the files with the utterance number from your script.
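If you'd rather script that conversion than do it in your audio editor, the following is a rough sketch using the NAudio library; the NuGet package, file paths, and resampler choice are assumptions on my part, and any tool that outputs 24 KHz, 16-bit, mono WAV works just as well.

```csharp
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

// Convert one take to 24 kHz, 16-bit, mono WAV. Paths are placeholders;
// requires the NAudio NuGet package.
using var reader = new AudioFileReader(@"edited\utterance-0001.wav");

// Down-mix to mono if the file still has two channels.
ISampleProvider mono = reader.WaveFormat.Channels == 2
    ? new StereoToMonoSampleProvider(reader)
    : (ISampleProvider)reader;

// Resample to 24 kHz with the managed WDL resampler, then write 16-bit PCM.
var resampled = new WdlResamplingSampleProvider(mono, 24000);
WaveFileWriter.CreateWaveFile16(@"prepared\0001.wav", resampled);
```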
cognitive-services Speech Service Vnet Service Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-service-vnet-service-endpoint.md
In this scenario, private endpoints aren't enabled and one of these statements i
This scenario is equivalent to [using a Speech resource that has a custom domain name and that doesn't have private endpoints](speech-services-private-link.md#adjust-an-application-to-use-a-speech-resource-without-private-endpoints). [!INCLUDE [](includes/speech-vnet-service-enpoints-private-endpoints-simultaneously.md)] - ## Learn more * [Use Speech service through a private endpoint](speech-services-private-link.md)
cognitive-services Speech Services Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-private-link.md
Follow these steps to modify your code:
After this modification, your application should work with the private-endpoint-enabled Speech resources. We're working on more seamless support of private endpoint scenarios. ++ ## Adjust an application to use a Speech resource without private endpoints In this article, we've pointed out several times that enabling a custom domain for a Speech resource is *irreversible*. Such a resource will use a different way of communicating with Speech service, compared to the ones that are using [regional endpoint names](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints).
cognitive-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/managed-identity.md
Title: Create and use managed identity
+ Title: Create and use managed identities for Document Translation
-description: Understand how to create and use managed identity in the Azure portal
+description: Understand how to create and use managed identities in the Azure portal
Previously updated : 09/09/2021 Last updated : 02/22/2022
-# Create and use managed identity
+# Managed identities for Document Translation
> [!IMPORTANT] >
-> Managed identity for Document Translation is currently unavailable in the global region. If you intend to use managed identity for Document Translation operations, [create your Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in a non-global Azure region.
+> Managed identities for Azure resources are currently unavailable for Document Translation service in the global region. If you intend to use managed identities for Document Translation operations, [create your Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in a non-global Azure region.
-## What is managed identity?
+Managed identities for Azure resources are service principals that create an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources:
- Azure managed identity is a service principal that creates an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources. You can use a managed identity to grant access to any resource that supports Azure AD authentication. To grant access, assign a role to a managed identity using [Azure role-based access control](../../../role-based-access-control/overview.md) (Azure RBAC). There is no added cost to use managed identity in Azure.
+* You can use managed identities to grant access to any resource that supports Azure AD authentication, including your own applications. Unlike security keys and authentication tokens, managed identities eliminate the need for developers to manage credentials.
-Managed identity supports both privately and publicly accessible Azure blob storage accounts. For storage accounts **with public access**, you can opt to use a shared access signature (SAS) to grant limited access. In this article, we will examine how to manage access to translation documents in your Azure blob storage account using system-assigned managed identity.
+* To grant access to an Azure resource, assign an Azure role to a managed identity using [Azure role-based access control (Azure RBAC)](../../../role-based-access-control/overview.md).
+
+* There's no added cost to use managed identities in Azure.
+
+> [!TIP]
+> Managed identities eliminate the need for you to manage credentials, including Shared Access Signature (SAS) tokens. Managed identities are a safer way to grant access to data without having credentials in your code.
## Prerequisites
To get started, you'll need:
* In the main window, select **Allow access from Selected networks**. :::image type="content" source="../media/managed-identities/firewalls-and-virtual-networks.png" alt-text="Screenshot: Selected networks radio button selected.":::
- * On the selected networks page navigate to the **Exceptions** category and make certain that the [**Allow Azure services on the trusted services list to access this storage account**](../../../storage/common/storage-network-security.md?tabs=azure-portal#manage-exceptions) checkbox is enabled.
+ * On the selected networks page, navigate to the **Exceptions** category and make certain that the [**Allow Azure services on the trusted services list to access this storage account**](../../../storage/common/storage-network-security.md?tabs=azure-portal#manage-exceptions) checkbox is enabled.
:::image type="content" source="../media/managed-identities/allow-trusted-services-checkbox-portal-view.png" alt-text="Screenshot: allow trusted services checkbox, portal view":::
-## Managed Identity assignments
+## Managed identity assignments
+
+There are two types of managed identities: **system-assigned** and **user-assigned**. Currently, Document Translation supports system-assigned managed identities:
-There are two types of managed identity: **system-assigned** and **user-assigned**. Currently, Document Translation is supported by system-assigned managed identity. A system-assigned managed identity is **enabled** directly on a service instance. It is not enabled by default; you must go to your resource and update the identity setting. The system-assigned managed identity is tied to your resource throughout its lifecycle. If you delete your resource, the managed identity will be deleted as well.
+* A system-assigned managed identity is **enabled** directly on a service instance. It isn't enabled by default; you must go to your resource and update the identity setting.
+
+* The system-assigned managed identity is tied to your resource throughout its lifecycle. If you delete your resource, the managed identity will be deleted as well.
In the following steps, we'll enable a system-assigned managed identity and grant your Translator resource limited access to your Azure blob storage account.
-## Enable a system-assigned managed identity using the Azure portal
+## Enable a system-assigned managed identity
>[!IMPORTANT] >
In the following steps, we'll enable a system-assigned managed identity and gran
1. In the main window, toggle the **System assigned Status** tab to **On**.
+## Grant access to your storage account
+
+You need to grant Translator access to your storage account before it can create, read, or delete blobs. Now that you enabled Translator with a system-assigned managed identity, you can use Azure role-based access control (Azure RBAC), to give Translator access to Azure storage.
+
+The **Storage Blob Data Contributor** role gives Translator (represented by the system-assigned managed identity) read, write, and delete access to the blob container and data.
+ 1. Under **Permissions** select **Azure role assignments**: :::image type="content" source="../media/managed-identities/enable-system-assigned-managed-identity-portal.png" alt-text="Screenshot: enable system-assigned managed identity in Azure portal.":::
In the following steps, we'll enable a system-assigned managed identity and gran
| Field | Value| ||--|
- |**Scope**| ***Storage***.|
- |**Subscription**| ***The subscription associated with your storage resource***.|
- |**Resource**| ***The name of your storage resource***.|
- |**Role** | ***Storage Blob Data Contributor***.|
+ |**Scope**| **_Storage_**.|
+ |**Subscription**| **_The subscription associated with your storage resource_**.|
+ |**Resource**| **_The name of your storage resource_**.|
+ |**Role** | **_Storage Blob Data Contributor_**.|
:::image type="content" source="../media/managed-identities/add-role-assignment-window.png" alt-text="Screenshot: add role assignments page in the Azure portal.":::
-1. After you've received the _Added Role assignment_ confirmation message, refresh the page to see the added role assignment.
+1. After you've received the _Added Role assignment_ confirmation message, refresh the page to see the added role assignment.
:::image type="content" source="../media/managed-identities/add-role-assignment-confirmation.png" alt-text="Screenshot: Added role assignment confirmation pop-up message.":::
In the following steps, we'll enable a system-assigned managed identity and gran
:::image type="content" source="../media/managed-identities/assigned-roles-window.png" alt-text="Screenshot: Azure role assignments window.":::
- Great! You have completed the steps to enable a system-assigned managed identity. With this identity credential, you can grant Translator specific access rights to your storage resource.
+ Great! You've completed the steps to enable a system-assigned managed identity. With managed identity and Azure RBAC, you granted Translator specific access rights to your storage resource without having to manage credentials such as SAS tokens.
## Next steps > [!div class="nextstepaction"]
-> [Managed identities for Azure resources: frequently asked questions](../../../active-directory/managed-identities-azure-resources/managed-identities-faq.md)
-
-> [!div class="nextstepaction"]
->[Use managed identities to acquire an access token](../../../app-service/overview-managed-identity.md?tabs=dotnet#configure-target-resource)
+> [Access Azure Storage from a web app using managed identities](/azure/app-service/scenario-secure-app-access-storage?toc=/azure/cognitive-services/translator/toc.json&bc=/azure/cognitive-services/translator/breadcrumb/toc.json)
communication-services Azure Function Rule Engine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/azure-function-rule-engine.md
+
+ Title: Azure Function Rule concepts for Azure Communication Services
+
+description: Learn about the Azure Communication Services Job Router Azure Function Rule concepts.
+++++ Last updated : 02/23/2022++
+
+
+# Azure function rule concepts
++
+As part of its customer extensibility model, Azure Communication Services Job Router supports an Azure Function rule engine, which gives you the ability to bring your own Azure function. With an Azure Function rule, you can incorporate custom and complex logic into the routing process.
+
+The following examples showcase the flexibility that an Azure Function rule provides.
+
+## Scenario: Custom scoring rule in best worker distribution mode
+
+We want to distribute offers among the workers associated with a queue. Each worker is given a score based on their labels and skill set. The worker with the highest score should get the first offer (_BestWorker Distribution Mode_).
++
+### Situation
+
+- A job has been created and classified.
+ - Job has the following **labels** associated with it
+ - ["CommunicationType"] = "Chat"
+ - ["IssueType"] = "XboxSupport"
+ - ["Language"] = "en"
+ - ["HighPriority"] = true
+ - ["SubIssueType"] = "ConsoleMalfunction"
+ - ["ConsoleType"] = "XBOX_SERIES_X"
+ - ["Model"] = "XBOX_SERIES_X_1TB"
+ - Job has the following **WorkerSelectors** associated with it
+ - ["English"] >= 7
+ - ["ChatSupport"] = true
+ - ["XboxSupport"] = true
+- The job is currently in a '**Queued**' state; it's enqueued in the *Xbox Hardware Support Queue*, waiting to be matched to a worker.
+- Multiple workers become available simultaneously.
+ - **Worker 1** has been created with the following **labels**
+ - ["HighPrioritySupport"] = true
+ - ["HardwareSupport"] = true
+ - ["Support_XBOX_SERIES_X"] = true
+ - ["English"] = 10
+ - ["ChatSupport"] = true
+ - ["XboxSupport"] = true
+ - **Worker 2** has been created with the following **labels**
+ - ["HighPrioritySupport"] = true
+ - ["HardwareSupport"] = true
+ - ["Support_XBOX_SERIES_X"] = true
+ - ["Support_XBOX_SERIES_S"] = true
+ - ["English"] = 8
+ - ["ChatSupport"] = true
+ - ["XboxSupport"] = true
+ - **Worker 3** has been created with the following **labels**
+ - ["HighPrioritySupport"] = false
+ - ["HardwareSupport"] = true
+ - ["Support_XBOX"] = true
+ - ["English"] = 7
+ - ["ChatSupport"] = true
+ - ["XboxSupport"] = true
+
+### Expectation
+
+We would like the following behavior when scoring workers to select which worker gets the first offer.
++
+The decision flow (as shown above) is as follows:
+
+- If a job is **NOT HighPriority**:
+ - Workers with label: **["Support_XBOX"] = true**; get a score of *100*
+ - Otherwise, get a score of *1*
+
+- If a job is **HighPriority**:
+ - Workers with label: **["HighPrioritySupport"] = false**; get a score of *1*
+ - Otherwise, if **["HighPrioritySupport"] = true**:
+ - Does Worker specialize in console type -> Does worker have label: **["Support_<**jobLabels.ConsoleType**>"] = true**? If true, worker gets score of *200*
+ - Otherwise, get a score of *100*
+
+### Creating an Azure function
+
+Before going any further, let's first define an Azure function that scores workers.
+> [!NOTE]
+> The following Azure function uses JavaScript. For more information, see [Quickstart: Create a JavaScript function in Azure using Visual Studio Code](../../../azure-functions/create-first-function-vs-code-node.md)
+
+Sample input for **Worker 1**
+
+```json
+{
+ "job": {
+ "CommunicationType": "Chat",
+ "IssueType": "XboxSupport",
+ "Language": "en",
+ "HighPriority": true,
+ "SubIssueType": "ConsoleMalfunction",
+ "ConsoleType": "XBOX_SERIES_X",
+ "Model": "XBOX_SERIES_X_1TB"
+ },
+ "selectors": [
+ {
+ "key": "English",
+ "operator": "GreaterThanEqual",
+ "value": 7,
+ "ttl": null
+ },
+ {
+ "key": "ChatSupport",
+ "operator": "Equal",
+ "value": true,
+ "ttl": null
+ },
+ {
+ "key": "XboxSupport",
+ "operator": "Equal",
+ "value": true,
+ "ttl": null
+ }
+ ],
+ "worker": {
+ "Id": "e3a3f2f9-3582-4bfe-9c5a-aa57831a0f88",
+ "HighPrioritySupport": true,
+ "HardwareSupport": true,
+ "Support_XBOX_SERIES_X": true,
+ "English": 10,
+ "ChatSupport": true,
+ "XboxSupport": true
+ }
+}
+```
+
+Sample implementation:
+
+```javascript
+module.exports = async function (context, req) {
+ context.log('Best Worker Distribution Mode using Azure Function');
+
+ let score = 0;
+ const jobLabels = req.body.job;
+ const workerLabels = req.body.worker;
+
+ const isHighPriority = !!jobLabels["HighPriority"];
+ context.log('Job is high priority? Status: ' + isHighPriority);
+
+ if(!isHighPriority) {
+ const isGenericXboxSupportWorker = !!workerLabels["Support_XBOX"];
+ context.log('Worker provides general xbox support? Status: ' + isGenericXboxSupportWorker);
+
+ score = isGenericXboxSupportWorker ? 100 : 1;
+
+ } else {
+ const workerSupportsHighPriorityJob = !!workerLabels["HighPrioritySupport"];
+ context.log('Worker provides high priority support? Status: ' + workerSupportsHighPriorityJob);
+
+ if(!workerSupportsHighPriorityJob) {
+ score = 1;
+ } else {
+ const key = `Support_${jobLabels["ConsoleType"]}`;
+
+ const workerSpecializeInConsoleType = !!workerLabels[key];
+ context.log(`Worker specializes in consoleType: ${jobLabels["ConsoleType"]} ? Status: ${workerSpecializeInConsoleType}`);
+
+ score = workerSpecializeInConsoleType ? 200 : 100;
+ }
+ }
+ context.log('Final score of worker: ' + score);
+
+ context.res = {
+ // status: 200, /* Defaults to 200 */
+ body: score
+ };
+}
+```
+
+Output for **Worker 1**
+
+```markdown
+200
+```
+
+With this implementation, the given job produces the following scores for the workers:
+
+| Worker | Score |
+|--|-|
+| Worker 1 | 200 |
+| Worker 2 | 200 |
+| Worker 3 | 1 |
+
+### Distribute offers based on best worker mode
+
+Now that the Azure function app is ready, let's create an instance of **BestWorkerDistribution** mode using the Router SDK.
+
+```csharp
+ // -- initialize router client
+ // Setup Distribution Policy
+ var bestWorkerDistributionMode = new BestWorkerMode(
+ scoringRule: new AzureFunctionRule(
+            functionAppUrl: "<insert function url>"));
+
+ var distributionPolicy = await client.SetDistributionPolicyAsync(
+ id: "BestWorkerDistributionMode",
+ mode: bestWorkerDistributionMode,
+ name: "XBox hardware support distribution",
+ offerTTL: TimeSpan.FromMinutes(5));
+
+ // Setup Queue
+ var queue = await client.SetQueueAsync(
+ id: "XBox_Hardware_Support_Q",
+ distributionPolicyId: distributionPolicy.Value.Id,
+ name: "XBox Hardware Support Queue");
+
+ // Setup Channel
+ var channel = await client.SetChannelAsync("Xbox_Chat_Channel");
+
+ // Create workers
+
+ var worker1Labels = new LabelCollection()
+ {
+ ["HighPrioritySupport"] = true,
+ ["HardwareSupport"] = true,
+ ["Support_XBOX_SERIES_X"] = true,
+ ["English"] = 10,
+ ["ChatSupport"] = true,
+ ["XboxSupport"] = true
+ };
+ var worker1 = await client.RegisterWorkerAsync(
+ id: "Worker_1",
+ totalCapacity: 100,
+ queueIds: new[] {queue.Value.Id},
+ labels: worker1Labels,
+ channelConfigurations: new[] {new ChannelConfiguration(channel.Value.Id, 10)});
+
+ var worker2Labels = new LabelCollection()
+ {
+ ["HighPrioritySupport"] = true,
+ ["HardwareSupport"] = true,
+ ["Support_XBOX_SERIES_X"] = true,
+ ["Support_XBOX_SERIES_S"] = true,
+ ["English"] = 8,
+ ["ChatSupport"] = true,
+ ["XboxSupport"] = true
+ };
+ var worker2 = await client.RegisterWorkerAsync(
+ id: "Worker_2",
+ totalCapacity: 100,
+ queueIds: new[] { queue.Value.Id },
+ labels: worker2Labels,
+ channelConfigurations: new[] { new ChannelConfiguration(channel.Value.Id, 10) });
+
+ var worker3Labels = new LabelCollection()
+ {
+ ["HighPrioritySupport"] = false,
+ ["HardwareSupport"] = true,
+ ["Support_XBOX"] = true,
+ ["English"] = 7,
+ ["ChatSupport"] = true,
+ ["XboxSupport"] = true
+ };
+ var worker3 = await client.RegisterWorkerAsync(
+ id: "Worker_3",
+ totalCapacity: 100,
+ queueIds: new[] { queue.Value.Id },
+ labels: worker3Labels,
+ channelConfigurations: new[] { new ChannelConfiguration(channel.Value.Id, 10) });
+
+ // Create Job
+ var jobLabels = new LabelCollection()
+ {
+ ["CommunicationType"] = "Chat",
+ ["IssueType"] = "XboxSupport",
+ ["Language"] = "en",
+ ["HighPriority"] = true,
+ ["SubIssueType"] = "ConsoleMalfunction",
+ ["ConsoleType"] = "XBOX_SERIES_X",
+ ["Model"] = "XBOX_SERIES_X_1TB"
+ };
+ var workerSelectors = new List<LabelSelector>()
+ {
+ new LabelSelector("English", LabelOperator.GreaterThanEqual, 7),
+ new LabelSelector("ChatSupport", LabelOperator.Equal, true),
+ new LabelSelector("XboxSupport", LabelOperator.Equal, true)
+ };
+ var job = await client.CreateJobAsync(
+ channelId: channel.Value.Id,
+ queueId: queue.Value.Id,
+ priority: 100,
+ channelReference: "ChatChannel",
+ labels: jobLabels,
+ workerSelectors: workerSelectors);
+
+ var getJob = await client.GetJobAsync(job.Value.Id);
+ Console.WriteLine(getJob.Value.Assignments.Select(assignment => assignment.Value.WorkerId).First());
+```
+
+Output
+
+```markdown
+Worker_1 // or Worker_2
+
+Since both workers, Worker_1 and Worker_2, get the same score of 200,
+the worker who has been idle the longest will get the first offer.
+```
+
+## Next steps
+
+- [Router Rule concepts](router-rule-concepts.md)
communication-services Router Rule Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/router-rule-concepts.md
await client.upsertClassificationPolicy({
}); ``` +
+## Next steps
+
+- [Azure Function Rule](azure-function-rule-engine.md)
communication-services Subscribe Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/subscribe-events.md
# Subscribe to Job Router events
-This guide outlines the steps to setup a subscription for Job Router events and how to receive them.
+This guide outlines the steps to set up a subscription for Job Router events and how to receive them.
-For more details on Event Grid, please see the [Event Grid documentation][event-grid-overview].
+For more details on Event Grid, see the [Event Grid documentation][event-grid-overview].
[!INCLUDE [Private Preview Disclaimer](../../includes/private-preview-include-section.md)]
For more details on Event Grid, please see the [Event Grid documentation][event-
> [!NOTE] > Since Job Router is still in preview, the events are not included in the portal UI. You have to use an Azure Resource Manager (ARM) template to create a subscription that references them.
-This template deploys an EventGrid subscription on a Storage Queue for Job Router events.
-If the storage account, queue or system topic do not exist, they will be created as well.
+This template deploys an Event Grid subscription on a Storage Queue for Job Router events.
+If the storage account, queue, or system topic doesn't exist, it will be created as well.
[![Deploy To Azure](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.svg?sanitize=true)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FMicrosoftDocs%2Fazure-docs%2Fmain%2Farticles%2Fcommunication-services%2Fhow-tos%2Frouter-sdk%2Fmedia%2Fdeploy-subscription.json) ### Parameters - **Azure Communication Services Resource Name**: The name of your Azure Communication Services resource. For example, if the endpoint to your resource is `https://contoso.communication.azure.net`, then set to `contoso`.-- **Storage Name**: The name of your Azure Storage Account. If it does not exist, it will be created.
+- **Storage Name**: The name of your Azure Storage Account. If it doesn't exist, it will be created.
- **Event Sub Name**: The name of the event subscription to create. - **System Topic Name**: If you have existing event subscriptions on your ACS resource, find the `System Topic` name in the `Events` tab of your ACS resource. Otherwise, specify a unique name such as the ACS resource name itself.-- **Queue Name**: The name of your Queue within your Storage Account. If it does not exist, it will be created.
+- **Queue Name**: The name of your Queue within your Storage Account. If it doesn't exist, it will be created.
### Deployed resources The following resources are deployed as part of the solution -- **Storage Account**: If the storage account name does not exist.-- **Storage Queue**: If the queue does not exist within the storage account.-- **Event Grid System Topic**: If the topic does not exist.
+- **Storage Account**: If the storage account name doesn't exist.
+- **Storage Queue**: If the queue doesn't exist within the storage account.
+- **Event Grid System Topic**: If the topic doesn't exist.
- **Event Grid Subscription**: A subscription for all Job Router events on the storage queue.
-## Quick-start: Receive EventGrid events via an Azure Storage Queue
+## Quick-start: Receive Event Grid events via an Azure Storage Queue
### Create a new C# application
dotnet build
### Install the packages
-Install the Azure Storage Queues and EventGrid packages.
+Install the Azure Storage Queues and Event Grid packages.
```console dotnet add package Azure.Storage.Queues
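As a minimal sketch of the receiving side (the connection string and queue name are placeholders, and it assumes Event Grid delivers the events to the storage queue Base64-encoded, hence the `QueueMessageEncoding.Base64` option):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.EventGrid;
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

class Program
{
    static async Task Main()
    {
        // Placeholders: your storage account connection string and the queue name
        // you chose when deploying the Event Grid subscription.
        var queueClient = new QueueClient(
            "<storage-connection-string>",
            "<queue-name>",
            // Assumes events arrive Base64-encoded in the queue message body.
            new QueueClientOptions { MessageEncoding = QueueMessageEncoding.Base64 });

        QueueMessage[] messages = await queueClient.ReceiveMessagesAsync(maxMessages: 10);
        foreach (QueueMessage message in messages)
        {
            // Each message body is an Event Grid event describing a Job Router change.
            EventGridEvent routerEvent = EventGridEvent.Parse(message.Body);
            Console.WriteLine($"{routerEvent.EventType}: {routerEvent.Data}");

            // Remove the message once it has been handled.
            await queueClient.DeleteMessageAsync(message.MessageId, message.PopReceipt);
        }
    }
}
```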
dotnet run
||::| - | | [`RouterJobReceived`](#microsoftcommunicationrouterjobreceived) | `Job` | A new job was created for routing | | [`RouterJobClassified`](#microsoftcommunicationrouterjobclassified)| `Job` | The classification policy was applied to a job |
-| [`RouterJobLabelsUpdated`](#microsoftcommunicationrouterjoblabelsupdated) | `Job` | The labels of the job were changed |
+| [`RouterJobQueued`](#microsoftcommunicationrouterjobqueued) | `Job` | A job has been successfully enqueued |
| [`RouterJobClassificationFailed`](#microsoftcommunicationrouterjobclassificationfailed) | `Job` | Router failed to classify job using classification policy | | [`RouterJobCompleted`](#microsoftcommunicationrouterjobcompleted) | `Job` | A job was completed and enters wrap-up | | [`RouterJobClosed`](#microsoftcommunicationrouterjobclosed) | `Job` | A job was closed and wrap-up is finished | | [`RouterJobCancelled`](#microsoftcommunicationrouterjobcancelled) | `Job` | A job was canceled | | [`RouterJobExceptionTriggered`](#microsoftcommunicationrouterjobexceptiontriggered) | `Job` | A job exception has been triggered |
-| [`RouterJobExceptionCleared`](#microsoftcommunicationrouterjobexceptioncleared) | `Job` | A job exception has cleared |
+| [`RouterJobWorkerSelectorsExpired`](#microsoftcommunicationrouterjobworkerselectorsexpired) | `Job` | One or more worker selectors on a job have expired |
| [`RouterWorkerOfferIssued`](#microsoftcommunicationrouterworkerofferissued) | `Worker` | A job was offered to a worker | | [`RouterWorkerOfferAccepted`](#microsoftcommunicationrouterworkerofferaccepted) | `Worker` | An offer to a worker was accepted | | [`RouterWorkerOfferDeclined`](#microsoftcommunicationrouterworkerofferdeclined) | `Worker` | An offer to a worker was declined |
dotnet run
"Locale": "en-us", "Segment": "Enterprise", "Token": "FooToken"
- }
+ },
+ "tags": {
+ "Locale": "en-us",
+ "Segment": "Enterprise",
+ "Token": "FooToken"
+ },
+ "requestedWorkerSelectors": [
+ {
+ "key": "string",
+ "labelOperator": "equal",
+ "value": 5,
+ "ttl": "P3Y6M4DT12H30M5S"
+ }
+ ]
}, "eventType": "Microsoft.Communication.RouterJobReceived", "dataVersion": "1.0", "metadataVersion": "1",
- "eventTime": "2021-06-23T02:43:30Z"
+ "eventTime": "2022-02-17T00:55:25.1736293Z"
} ```
dotnet run
|: |:--:|:-:|-|-| | jobId| `string` | ❌ | | channelReference | `string` | ❌ |
-| jobStatus| `enum` | ❌ | Possible values <ul> <li>PendingClassification</li> </ul> | When this event is sent out, the classification process is yet to have been executed. |
+| jobStatus| `enum` | ❌ | Possible values <ul> <li>PendingClassification</li><li>Queued</li> </ul> | When this event is sent out, the classification process hasn't been executed yet, or the job was created with an associated queueId.
|channelId | `string` | ❌ |
-| classificationPolicyId | `string` | ✔️ | | `null` when `queueId` is specified for a job |
-| queueId | `string` | ✔️ | | `null` when `classificationPolicyId` is specified for a job |
-| priority | `int` | ✔️ | | Null when `classificationPolicyId` is specified. Non-null value if there is a direct queue assignment. |
-| labels | `Dictionary<string, object>` | ✔️ | | Based on user input |
+| classificationPolicyId | `string` | ✔️ | | `null` when `queueId` is specified for a job
+| queueId | `string` | ✔️ | | `null` when `classificationPolicyId` is specified for a job
+| priority | `int` | ✔️ | | Null when `classificationPolicyId` is specified. Non-null value in case of direct queue assignment.
+| labels | `Dictionary<string, object>` | ✔️ | | Based on user input
+| tags | `Dictionary<string, object>` | ✔️ | | Based on user input
+| requestedWorkerSelectors | `List<WorkerSelector>` | ✔️ | | Based on user input
### Microsoft.Communication.RouterJobClassified
dotnet run
"Locale": "en-us", "Segment": "Enterprise", "Token": "FooToken"
- }
+ },
+ "tags": {
+ "Locale": "en-us",
+ "Segment": "Enterprise",
+ "Token": "FooToken"
+ },
+ "attachedWorkerSelectors": [
+ {
+ "key": "string",
+ "labelOperator": "equal",
+ "value": 5,
+ "ttl": "P3Y6M4DT12H30M5S"
+ }
+ ]
}, "eventType": "Microsoft.Communication.RouterJobClassified", "dataVersion": "1.0", "metadataVersion": "1",
- "eventTime": "2021-06-23T02:43:30Z"
+ "eventTime": "2022-02-17T00:55:25.1736293Z"
} ```
dotnet run
| jobId| `string` | ❌ | | channelReference | `string` | ❌ | |channelId | `string` | ❌ |
-| classificationPolicyId | `string` | ✔️ | | `null` when `queueId` is specified for a job (direct queue assignment) |
-| queueId | `string` | ✔️ | | `null` when `classificationPolicyId` is specified for a job |
-| priority | `int` | ❌ |
-| labels | `Dictionary<string, object>` | ✔️ | | Based on user input |
+| classificationPolicyId | `string` | ❌ | |
+| queueId | `string` | ✔️ | | `null` when `classificationPolicy` is not used for queue selection
+| priority | `int` | ✔️ | | `null` when `classificationPolicy` is not used for applying priority on job
+| labels | `Dictionary<string, object>` | ✔️ | | Based on user input
+| tags | `Dictionary<string, object>` | ✔️ | | Based on user input
+| attachedWorkerSelectors | `List<WorkerSelector>` | ✔️ | | List of worker selectors attached by a classification policy
-### Microsoft.Communication.RouterJobLabelsUpdated
+### Microsoft.Communication.RouterJobQueued
[Back to Event Catalog](#events-catalog)
dotnet run
{ "id": "b6d8687a-5a1a-42ae-b8b5-ff7ec338c872", "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "job/{job-id}/channel/{channel-id}",
+ "subject": "job/{job-id}/channel/{channel-id}/queue/{queue-id}",
"data": { "jobId": "7f1df17b-570b-4ae5-9cf5-fe6ff64cc712", "channelReference": "test-abc",
- "jobStatus": "Queued",
"channelId": "FooVoiceChannelId",
- "classificationPolicyId": "test-policy",
"queueId": "625fec06-ab81-4e60-b780-f364ed96ade1",
- "priority": 5,
- "labelsAddedOrChanged": {
- "English": "5",
- "Office": "7"
- },
+ "priority": 1,
"labels": { "Locale": "en-us", "Segment": "Enterprise",
- "Token": "FooToken",
- "English": "5",
- "Office": "7"
- }
+ "Token": "FooToken"
+ },
+ "tags": {
+ "Locale": "en-us",
+ "Segment": "Enterprise",
+ "Token": "FooToken"
+ },
+ "requestedWorkerSelectors": [
+ {
+ "key": "string",
+ "labelOperator": "equal",
+ "value": 5,
+ "ttl": "P3Y6M4DT12H30M5S"
+ }
+ ],
+ "attachedWorkerSelectors": [
+ {
+ "key": "string",
+ "labelOperator": "equal",
+ "value": 5,
+ "ttl": "P3Y6M4DT12H30M5S"
+ }
+ ]
},
- "eventType": "Microsoft.Communication.RouterJobLabelsUpdated",
+ "eventType": "Microsoft.Communication.RouterJobQueued",
"dataVersion": "1.0", "metadataVersion": "1",
- "eventTime": "2021-06-23T02:43:30Z"
+ "eventTime": "2022-02-17T00:55:25.1736293Z"
} ```
dotnet run
| Attribute | Type | Nullable |Description | Notes | |: |:--:|:-:|-|-| | jobId| `string` | ❌ |
-| channelReference | `string` | ❌ |
-| jobStatus| `enum` | ❌ | Possible values <ul> <li>PendingClassification</li> <li>Queued</li> <li>Assigned</li> <li>Completed</li> <li>Closed</li> <li>Canceled</li> <li>ClassificationFailed</li> </ul> |
+| channelReference | `string` | ✔️ |
|channelId | `string` | ❌ |
-| classificationPolicyId | `string` | ✔️ | | `null` when `queueId` is specified for a job |
-| queueId | `string` | ✔️ | | `null` when `classificationPolicyId` is specified for a job |
+| queueId | `string` | ❌ | |
| priority | `int` | ❌ |
-| labelsAddedOrChanged | `Dictionary<string, object>` | ✔️ | | Labels added or changed based on user input. |
-| labels | `Dictionary<string, object>` | ✔️ | | Complete set of labels associated with job. |
+| labels | `Dictionary<string, object>` | ✔️ | | Based on user input
+| tags | `Dictionary<string, object>` | ✔️ | | Based on user input
+| requestedWorkerSelectors | `List<WorkerSelector>` | ✔️ | | Based on user input while creating a job
+| attachedWorkerSelectors | `List<WorkerSelector>` | ✔️ | | List of worker selectors attached by a classification policy
### Microsoft.Communication.RouterJobClassificationFailed
dotnet run
"Locale": "en-us", "Segment": "Enterprise", "Token": "FooToken"
+ },
+ "tags": {
+ "Locale": "en-us",
+ "Segment": "Enterprise",
+ "Token": "FooToken"
} }, "eventType": "Microsoft.Communication.RouterJobClassificationFailed", "dataVersion": "1.0", "metadataVersion": "1",
- "eventTime": "2021-06-23T02:43:30Z"
+ "eventTime": "2022-02-17T00:55:25.1736293Z"
} ```
dotnet run
| channelReference | `string` | ❌ | |channelId | `string` | ❌ | | classificationPolicyId | `string` | ❌ | |
-| labels | `Dictionary<string, object>` | ✔️ | | Based on user input |
+| labels | `Dictionary<string, object>` | ✔️ | | Based on user input
+| tags | `Dictionary<string, object>` | ✔️ | | Based on user input
### Microsoft.Communication.RouterJobCompleted
dotnet run
"Locale": "en-us", "Segment": "Enterprise", "Token": "FooToken"
- }
- "workerId": ""
+ },
+ "tags": {
+ "Locale": "en-us",
+ "Segment": "Enterprise",
+ "Token": "FooToken"
+ },
+ "workerId": "e3a3f2f9-3582-4bfe-9c5a-aa57831a0f88"
}, "eventType": "Microsoft.Communication.RouterJobCompleted", "dataVersion": "1.0", "metadataVersion": "1",
- "eventTime": "2021-06-23T02:43:30Z"
+ "eventTime": "2022-02-17T00:55:25.1736293Z"
} ```
dotnet run
| channelReference | `string` | ❌ | |channelId | `string` | ❌ | | queueId | `string` | ❌ | |
+| labels | `Dictionary<string, object>` | ✔️ | | Based on user input
+| tags | `Dictionary<string, object>` | ✔️ | | Based on user input
| assignmentId| `string` | ❌ | |
-| labels | `Dictionary<string, object>` | ✔️ | | Based on user input |
| workerId | `string` | ❌ | | ### Microsoft.Communication.RouterJobClosed
dotnet run
"Locale": "en-us", "Segment": "Enterprise", "Token": "FooToken"
+ },
+ "tags": {
+ "Locale": "en-us",
+ "Segment": "Enterprise",
+ "Token": "FooToken"
} }, "eventType": "Microsoft.Communication.RouterJobClosed", "dataVersion": "1.0", "metadataVersion": "1",
- "eventTime": "2021-06-23T02:43:30Z"
+ "eventTime": "2022-02-17T00:55:25.1736293Z"
} ```
dotnet run
| channelReference | `string` | ❌ | |channelId | `string` | ❌ | | queueId | `string` | ❌ | |
-| dispositionCode| `string` | ✔️ | | Based on user input |
+| labels | `Dictionary<string, object>` | ✔️ | | Based on user input
+| tags | `Dictionary<string, object>` | ✔️ | | Based on user input
+| dispositionCode| `string` | ✔️ | | Based on user input
| workerId | `string` | ❌ | | | assignmentId | `string` | ❌ | |
-| labels | `Dictionary<string, object>` | ✔️ | | Based on user input |
### Microsoft.Communication.RouterJobCancelled
dotnet run
"Segment": "Enterprise", "Token": "FooToken" },
+ "tags": {
+ "Locale": "en-us",
+ "Segment": "Enterprise",
+ "Token": "FooToken"
+ },
"queueId": "" }, "eventType": "Microsoft.Communication.RouterJobCancelled", "dataVersion": "1.0", "metadataVersion": "1",
- "eventTime": "2021-06-23T02:43:30Z"
+ "eventTime": "2022-02-17T00:55:25.1736293Z"
} ```
dotnet run
| Attribute | Type | Nullable |Description | Notes | |: |:--:|:-:|-|-|
-| note| `string` | ✔️ | | Based on user input |
+| note| `string` | ✔️ | | Based on user input
| dispositionCode| `string` | ❌ | | jobId| `string` | ❌ | | channelReference | `string` | ❌ | |channelId | `string` | ❌ |
-| queueId | `string` | ✔️ | | Non-null when job is canceled after successful classification. |
-| labels | `Dictionary<string, object>` | ✔️ | | Based on user input |
+| labels | `Dictionary<string, object>` | ✔️ | | Based on user input
+| tags | `Dictionary<string, object>` | ✔️ | | Based on user input
+| queueId | `string` | ✔️ | |
### Microsoft.Communication.RouterJobExceptionTriggered
dotnet run
"Locale": "en-us", "Segment": "Enterprise", "Token": "FooToken"
+ },
+ "tags": {
+ "Locale": "en-us",
+ "Segment": "Enterprise",
+ "Token": "FooToken"
} }, "eventType": "Microsoft.Communication.RouterJobExceptionTriggered", "dataVersion": "1.0", "metadataVersion": "1",
- "eventTime": "2021-06-23T02:43:31Z"
+ "eventTime": "2022-02-17T00:55:25.1736293Z"
} ```
dotnet run
| jobId| `string` | ❌ | | channelReference | `string` | ❌ | | channelId | `string` | ❌ |
-| labels | `Dictionary<string, object>` | ✔️ | | Based on user input |
+| labels | `Dictionary<string, object>` | ✔️ | | Based on user input
+| tags | `Dictionary<string, object>` | ✔️ | | Based on user input
-### Microsoft.Communication.RouterJobExceptionCleared
+### Microsoft.Communication.RouterJobWorkerSelectorsExpired
[Back to Event Catalog](#events-catalog) ```json {
- "id": "1027db4a-17fe-4a7f-ae67-276c3120a29f",
+ "id": "b6d8687a-5a1a-42ae-b8b5-ff7ec338c872",
"topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
- "subject": "job/{job-id}/channel/{channel-id}/exceptionrule/{rulekey}",
+ "subject": "job/{job-id}/channel/{channel-id}/queue/{queue-id}",
"data": {
- "ruleKey": "r100",
"jobId": "7f1df17b-570b-4ae5-9cf5-fe6ff64cc712", "channelReference": "test-abc", "channelId": "FooVoiceChannelId",
+ "queueId": "625fec06-ab81-4e60-b780-f364ed96ade1",
"labels": { "Locale": "en-us", "Segment": "Enterprise", "Token": "FooToken"
- }
+ },
+ "tags": {
+ "Locale": "en-us",
+ "Segment": "Enterprise",
+ "Token": "FooToken"
+ },
+ "requestedWorkerSelectorsExpired": [
+ {
+ "key": "string",
+ "labelOperator": "equal",
+ "value": 5,
+ "ttl": "P3Y6M4DT12H30M5S"
+ }
+ ],
+ "attachedWorkerSelectorsExpired": [
+ {
+ "key": "string",
+ "labelOperator": "equal",
+ "value": 5,
+ "ttl": "P3Y6M4DT12H30M5S"
+ }
+ ]
},
- "eventType": "Microsoft.Communication.RouterJobExceptionCleared",
+ "eventType": "Microsoft.Communication.RouterJobWorkerSelectorsExpired",
"dataVersion": "1.0", "metadataVersion": "1",
- "eventTime": "2021-06-23T02:43:31Z"
+ "eventTime": "2022-02-17T00:55:25.1736293Z"
} ```
dotnet run
| Attribute | Type | Nullable |Description | Notes | |: |:--:|:-:|-|-|
-| ruleKey | `string` | ❌ | |
| jobId| `string` | ❌ |
-| channelReference | `string` | ❌ |
+| channelReference | `string` | ✔️ |
+| queueId | `string` | ❌ | |
| channelId | `string` | ❌ |
-| labels | `Dictionary<string, object>` | ✔️ | | Based on user input |
+| labels | `Dictionary<string, object>` | ✔️ | | Based on user input
+| tags | `Dictionary<string, object>` | ✔️ | | Based on user input
+| requestedWorkerSelectorsExpired | `List<WorkerSelector>` | ✔️ | | Based on user input while creating a job
+| attachedWorkerSelectorsExpired | `List<WorkerSelector>` | ✔️ | | List of worker selectors attached by a classification policy
## Worker Events
dotnet run
"Locale": "en-us", "Segment": "Enterprise", "Token": "FooToken"
+ },
+ "jobTags": {
+ "Locale": "en-us",
+ "Segment": "Enterprise",
+ "Token": "FooToken"
} }, "eventType": "Microsoft.Communication.RouterWorkerOfferIssued", "dataVersion": "1.0", "metadataVersion": "1",
- "eventTime": "2021-06-23T02:43:31Z"
+ "eventTime": "2022-02-17T00:55:25.1736293Z"
} ```
dotnet run
| offerTimeUtc | `DateTimeOffset` | ❌ | | expiryTimeUtc| `DateTimeOffset` | ❌ | | jobPriority| `int` | ❌ |
-| jobLabels | `Dictionary<string, object>` | ✔️ | | Based on user input |
+| jobLabels | `Dictionary<string, object>` | ✔️ | | Based on user input
+| jobTags | `Dictionary<string, object>` | ✔️ | | Based on user input
### Microsoft.Communication.RouterWorkerOfferAccepted
dotnet run
"Segment": "Enterprise", "Token": "FooToken" },
+ "jobTags": {
+ "Locale": "en-us",
+ "Segment": "Enterprise",
+ "Token": "FooToken"
+ },
"channelReference": "test-abc", "channelId": "FooVoiceChannelId", "queueId": "625fec06-ab81-4e60-b780-f364ed96ade1",
dotnet run
"eventType": "Microsoft.Communication.RouterWorkerOfferAccepted", "dataVersion": "1.0", "metadataVersion": "1",
- "eventTime": "2021-06-23T02:43:31Z"
+ "eventTime": "2022-02-17T00:55:25.1736293Z"
} ```
dotnet run
| workerId | `string` | ❌ | | jobId| `string` | ❌ | | jobPriority| `int` | ❌ |
-| jobLabels | `Dictionary<string, object>` | ✔️ | | Based on user input |
+| jobLabels | `Dictionary<string, object>` | ✔️ | | Based on user input
+| jobTags | `Dictionary<string, object>` | ✔️ | | Based on user input
| channelReference | `string` | ❌ | |channelId | `string` | ❌ | | queueId | `string` | ❌ |
dotnet run
"eventType": "Microsoft.Communication.RouterWorkerOfferDeclined", "dataVersion": "1.0", "metadataVersion": "1",
- "eventTime": "2021-06-23T02:43:31Z"
+ "eventTime": "2022-02-17T00:55:25.1736293Z"
} ```
dotnet run
"eventType": "Microsoft.Communication.RouterWorkerOfferRevoked", "dataVersion": "1.0", "metadataVersion": "1",
- "eventTime": "2021-06-23T02:43:31Z"
+ "eventTime": "2022-02-17T00:55:25.1736293Z"
} ```
dotnet run
"eventType": "Microsoft.Communication.RouterWorkerOfferExpired", "dataVersion": "1.0", "metadataVersion": "1",
- "eventTime": "2021-06-23T02:43:31Z"
+ "eventTime": "2022-02-17T00:55:25.1736293Z"
} ```
dotnet run
| Attribute | Type | Nullable |Description | Notes | |: |:--:|:-:|-|-|
-| offerId | `string` | ❌ |
| workerId | `string` | ❌ |
+| offerId | `string` | ❌ |
| jobId| `string` | ❌ | | channelReference | `string` | ❌ | |channelId | `string` | ❌ |
dotnet run
"eventType": "Microsoft.Communication.RouterWorkerRegistered", "dataVersion": "1.0", "metadataVersion": "1",
- "eventTime": "2021-06-23T02:43:31Z"
+ "eventTime": "2022-02-17T00:55:25.1736293Z"
} ```
dotnet run
"eventType": "Microsoft.Communication.RouterWorkerDeregistered", "dataVersion": "1.0", "metadataVersion": "1",
- "eventTime": "2021-06-23T02:43:31Z"
+ "eventTime": "2022-02-17T00:55:25.1736293Z"
} ```
+**Attribute list**
+| Attribute | Type | Nullable | Description | Notes |
container-apps Manage Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/manage-secrets.md
+
+ Title: Manage secrets in Azure Container Apps Preview
+description: Learn to store and consume sensitive configuration values in Azure Container Apps.
++++ Last updated : 11/02/2021++++
+# Manage secrets in Azure Container Apps Preview
+
+Azure Container Apps allows your application to securely store sensitive configuration values. Once defined at the application level, secured values are available to containers, inside scale rules, and via Dapr.
+
+- Secrets are scoped to an application, outside of any specific revision of an application.
+- Adding, removing, or changing secrets does not generate new revisions.
+- Each application revision can reference one or more secrets.
+- Multiple revisions can reference the same secret(s).
+
+When a secret is updated or deleted, you can respond to changes in one of two ways:
+
+ 1. Deploy a new revision.
+ 2. Restart an existing revision.
+
+An updated or removed secret does not automatically restart a revision.
+
+- Before you delete a secret, deploy a new revision that no longer references the old secret.
+- If you change a secret value, you need to restart the revision to consume the new value.
+
+## Defining secrets
+
+# [ARM template](#tab/arm-template)
+
+Secrets are defined at the application level in the `resources.properties.configuration.secrets` section.
+
+```json
+"resources": [
+{
+ ...
+ "properties": {
+ "configuration": {
+ "secrets": [
+ {
+ "name": "queue-connection-string",
+ "value": "<MY-CONNECTION-STRING-VALUE>"
+ }],
+ }
+ }
+}
+```
+
+Here, a connection string to a queue storage account is declared in the `secrets` array. To use this configuration, replace `<MY-CONNECTION-STRING-VALUE>` with the value of your connection string.
+
+# [Azure CLI](#tab/azure-cli)
+
+Secrets are defined using the `--secrets` parameter.
+
+- The parameter accepts a comma-delimited set of name/value pairs.
+- Each pair is delimited by an equals sign (`=`).
+
+```bash
+az containerapp create \
+ --resource-group "my-resource-group" \
+ --name queuereader \
+ --environment "my-environment-name" \
+ --image demos/queuereader:v1 \
+ --secrets "queue-connection-string=$CONNECTION_STRING"
+```
+
+Here, a connection string to a queue storage account is declared in the `--secrets` parameter. The value for `queue-connection-string` comes from an environment variable named `$CONNECTION_STRING`.
+
+# [PowerShell](#tab/powershell)
+
+Secrets are defined using the `--secrets` parameter.
+
+- The parameter accepts a comma-delimited set of name/value pairs.
+- Each pair is delimited by an equals sign (`=`).
+
+```azurecli
+az containerapp create `
+ --resource-group "my-resource-group" `
+ --name queuereader `
+ --environment "my-environment-name" `
+ --image demos/queuereader:v1 `
+ --secrets "queue-connection-string=$CONNECTION_STRING"
+```
+
+Here, a connection string to a queue storage account is declared in the `--secrets` parameter. The value for `queue-connection-string` comes from an environment variable named `$CONNECTION_STRING`.
+++
+## Using secrets
+
+Application secrets are referenced via the `secretref` property. Secret values are mapped to application-level secrets where the `secretref` value matches the secret name declared at the application level.
+
+## Example
+
+The following example shows an application that declares a connection string at the application level; the secret is then referenced throughout the configuration via `secretref`.
+
+# [ARM template](#tab/arm-template)
+
+In this example, the application connection string is declared as `queue-connection-string` and becomes available elsewhere in the configuration sections.
++
+Here, the environment variable named `connection-string` gets its value from the application-level `queue-connection-string` secret. The Azure Queue Storage scale rule's authorization configuration also uses the `queue-connection-string` secret when it establishes a connection.
+
+To avoid committing secret values to source control with your ARM template, pass secret values as ARM template parameters.
+
+# [Azure CLI](#tab/azure-cli)
+
+In this example, you create an application with a secret that's referenced in an environment variable using the Azure CLI.
+
+```bash
+az containerapp create \
+ --resource-group "my-resource-group" \
+ --name myQueueApp \
+ --environment "my-environment-name" \
+ --image demos/myQueueApp:v1 \
+ --secrets "queue-connection-string=$CONNECTIONSTRING" \
+ --environment-variables "QueueName=myqueue,ConnectionString=secretref:queue-connection-string"
+```
+
+Here, the environment variable named `ConnectionString` gets its value from the application-level `queue-connection-string` secret by using `secretref`.
+
+# [PowerShell](#tab/powershell)
+
+In this example, you create an application with a secret that's referenced in an environment variable by using the Azure CLI from PowerShell.
+
+```azurecli
+az containerapp create `
+ --resource-group "my-resource-group" `
+ --name myQueueApp `
+ --environment "my-environment-name" `
+ --image demos/myQueueApp:v1 `
+ --secrets "queue-connection-string=$CONNECTIONSTRING" `
+ --environment-variables "QueueName=myqueue,ConnectionString=secretref:queue-connection-string"
+```
+
+Here, the environment variable named `ConnectionString` gets its value from the application-level `queue-connection-string` secret by using `secretref`.
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Containers](containers.md)
container-apps Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/monitor.md
my-container-app listening on port 80 PrimaryResult 2021-10-23T02:11:43.1
## Next steps > [!div class="nextstepaction"]
-> [Secure your container app](secure-app.md)
+> [Manage secrets](manage-secrets.md)
container-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/overview.md
With Azure Container Apps, you can:
- [**Provide an existing virtual network**](vnet-custom.md) when creating an environment for your container apps. -- [**Securely manage secrets**](secure-app.md) directly in your application.
+- [**Securely manage secrets**](manage-secrets.md) directly in your application.
- [**View application logs**](monitor.md) using Azure Log Analytics.
container-apps Revisions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/revisions.md
The following types of changes do not create a new revision:
- Changes to [traffic splitting rules](revisions-manage.md#traffic-splitting) - Turning [ingress](ingress.md) on or off-- Changes to [secret values](secure-app.md)
+- Changes to [secret values](manage-secrets.md)
- Any change outside the `template` section of the configuration While changes to secrets are an application-scope change, revisions must be [restarted](revisions.md) before a container recognizes new secret values.
container-apps Scale App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/scale-app.md
The following example shows how to create a memory scaling rule.
## Next steps > [!div class="nextstepaction"]
-> [Secure your container app](secure-app.md)
+> [Manage secrets](manage-secrets.md)
container-registry Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Container Registry description: Sample Azure Resource Graph queries for Azure Container Registry showing use of resource types and tables to access Azure Container Registry related resources and properties. Previously updated : 01/20/2022 Last updated : 02/16/2022
cosmos-db Consistency Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/consistency-levels.md
Previously updated : 09/20/2021 Last updated : 02/17/2022 # Consistency levels in Azure Cosmos DB [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-Distributed databases that rely on replication for high availability, low latency, or both, must make a fundamental tradeoff between the read consistency, availability, latency, and throughput as defined by the [PACELC theorem](https://en.wikipedia.org/wiki/PACELC_theorem). The linearizability of the strong consistency model is the gold standard of data programmability. But it adds a steep price from higher write latencies due to data having to replicate and commit across large distances. Strong consistency may also suffer from reduced availability (during failures) because data cannot replicate and commit in every region. Eventual consistency offers higher availability and better performance, but its more difficult to program applications because data may not be completely consistent across all regions.
+Distributed databases that rely on replication for high availability, low latency, or both, must make a fundamental tradeoff between the read consistency, availability, latency, and throughput as defined by the [PACELC theorem](https://en.wikipedia.org/wiki/PACELC_theorem). The linearizability of the strong consistency model is the gold standard of data programmability. But it adds a steep price from higher write latencies due to data having to replicate and commit across large distances. Strong consistency may also suffer from reduced availability (during failures) because data cannot replicate and commit in every region. Eventual consistency offers higher availability and better performance, but it's more difficult to program applications because data may not be completely consistent across all regions.
Most commercially available distributed NoSQL databases available in the market today provide only strong and eventual consistency. Azure Cosmos DB offers five well-defined levels. From strongest to weakest, the levels are: -- *Strong*-- *Bounded staleness*-- *Session*-- *Consistent prefix*-- *Eventual*
+- [*Strong*](#strong-consistency)
+- [*Bounded staleness*](#bounded-staleness-consistency)
+- [*Session*](#session-consistency)
+- [*Consistent prefix*](#consistent-prefix-consistency)
+- [*Eventual*](#eventual-consistency)
+
+For more information on the default consistency level, see [configuring the default consistency level](how-to-manage-consistency.md#configure-the-default-consistency-level) or [override the default consistency level](how-to-manage-consistency.md#override-the-default-consistency-level).
Each level provides availability and performance tradeoffs. The following image shows the different consistency levels as a spectrum.
Read consistency applies to a single read operation scoped within a logical part
You can configure the default consistency level on your Azure Cosmos account at any time. The default consistency level configured on your account applies to all Azure Cosmos databases and containers under that account. All reads and queries issued against a container or a database use the specified consistency level by default. To learn more, see how to [configure the default consistency level](how-to-manage-consistency.md#configure-the-default-consistency-level). You can also override the default consistency level for a specific request. To learn more, see how to [override the default consistency level](how-to-manage-consistency.md?#override-the-default-consistency-level). > [!TIP]
-> Overriding the default consistency level only applies to reads within the SDK client. An account configured for strong consistency by default will still write and replicate data synchronously to every region in the account. When the SDK client instance or request overrides this with Session or weaker consistency, reads will be performed using a single replica. See [Consistency levels and throughput](consistency-levels.md#consistency-levels-and-throughput) for more details.
+> Overriding the default consistency level only applies to reads within the SDK client. An account configured for strong consistency by default will still write and replicate data synchronously to every region in the account. When the SDK client instance or request overrides this with Session or weaker consistency, reads will be performed using a single replica. For more information, see [Consistency levels and throughput](consistency-levels.md#consistency-levels-and-throughput).
> [!IMPORTANT] > You must recreate any SDK instance after you change the default consistency level. You can do so by restarting the application. This ensures that the SDK uses the new default consistency level.
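
As an illustration only, the following minimal C# sketch shows a request-level consistency override with the .NET SDK v3 (`Microsoft.Azure.Cosmos`); the endpoint, key, database, container, item ID, and partition key values are placeholders, not values from this article.

```csharp
using Microsoft.Azure.Cosmos;
using System.Threading.Tasks;

class ConsistencyOverrideSample
{
    static async Task Main()
    {
        // Placeholder endpoint and key; the account-level default consistency still governs writes.
        using CosmosClient client = new CosmosClient("<account-endpoint>", "<account-key>");
        Container container = client.GetContainer("mydb", "mycontainer");

        // Override the default consistency level for this single point read.
        // A request-level override can only weaken (never strengthen) the account default.
        ItemResponse<dynamic> response = await container.ReadItemAsync<dynamic>(
            id: "item-id",
            partitionKey: new PartitionKey("partition-key-value"),
            requestOptions: new ItemRequestOptions { ConsistencyLevel = ConsistencyLevel.Eventual });
    }
}
```
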
Bounded staleness offers total global order outside of the "staleness window." W
Inside the staleness window, Bounded Staleness provides the following consistency guarantees: -- Consistency for clients in the same region for an account with single write region = Strong-- Consistency for clients in different regions for an account with single write region = Consistent Prefix-- Consistency for clients writing to a single region for an account with multiple write regions = Consistent Prefix-- Consistency for clients writing to different regions for an account with multiple write regions = Eventual
+- Consistency for clients in the same region for an account with single write region = [Strong](#strong-consistency)
+- Consistency for clients in different regions for an account with single write region = [Consistent Prefix](#consistent-prefix-consistency)
+- Consistency for clients writing to a single region for an account with multiple write regions = [Consistent Prefix](#consistent-prefix-consistency)
+- Consistency for clients writing to different regions for an account with multiple write regions = [Eventual](#eventual-consistency)
Bounded staleness is frequently chosen by globally distributed applications that expect low write latencies but require a total global order guarantee. Bounded staleness is great for applications featuring group collaboration and sharing, stock tickers, publish-subscribe/queueing, and so on. The following graphic illustrates the bounded staleness consistency with musical notes. After the data is written to the "West US 2" region, the "East US 2" and "Australia East" regions read the written value based on the configured maximum lag time or the maximum operations:
In session consistency, within a single client session reads are guaranteed to h
Clients outside of the session performing writes will see the following guarantees: -- Consistency for clients in same region for an account with single write region = Consistent Prefix-- Consistency for clients in different regions for an account with single write region = Consistent Prefix-- Consistency for clients writing to a single region for an account with multiple write regions = Consistent Prefix-- Consistency for clients writing to multiple regions for an account with multiple write regions = Eventual-- Consistency for clients using the [Azure Cosmos DB integrated cache](integrated-cache.md) = Eventual
+- Consistency for clients in same region for an account with single write region = [Consistent Prefix](#consistent-prefix-consistency)
+- Consistency for clients in different regions for an account with single write region = [Consistent Prefix](#consistent-prefix-consistency)
+- Consistency for clients writing to a single region for an account with multiple write regions = [Consistent Prefix](#consistent-prefix-consistency)
+- Consistency for clients writing to multiple regions for an account with multiple write regions = [Eventual](#eventual-consistency)
+- Consistency for clients using the [Azure Cosmos DB integrated cache](integrated-cache.md) = [Eventual](#eventual-consistency)
Session consistency is the most widely used consistency level for both single-region and globally distributed applications. It provides write latencies, availability, and read throughput comparable to that of eventual consistency, but also provides the consistency guarantees that suit the needs of applications written to operate in the context of a user. The following graphic illustrates the session consistency with musical notes. The "West US 2 writer" and the "West US 2 reader" are using the same session (Session A), so they both read the same data at the same time. The "Australia East" region is using "Session B", so it receives data later but in the same order as the writes.
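
When reads and writes come from different client instances within the same logical session, one common pattern is to carry the session token yourself. The following is a minimal, hypothetical C# sketch; the `Order` type, the container handles, and the partition key path (`/pk`) are assumptions for illustration, not part of this article.

```csharp
using Microsoft.Azure.Cosmos;
using System.Threading.Tasks;

class SessionTokenSample
{
    // Hypothetical item type; assumes the container's partition key path is /pk.
    record Order(string id, string pk, string status);

    static async Task CopySessionAcrossClients(Container writerContainer, Container readerContainer)
    {
        // Write with one client instance and capture the session token from the response headers.
        ItemResponse<Order> writeResponse = await writerContainer.CreateItemAsync(
            new Order("1", "customer-1", "created"), new PartitionKey("customer-1"));
        string sessionToken = writeResponse.Headers.Session;

        // Pass the token to a different client instance so its read honors the same session.
        ItemResponse<Order> readResponse = await readerContainer.ReadItemAsync<Order>(
            id: "1",
            partitionKey: new PartitionKey("customer-1"),
            requestOptions: new ItemRequestOptions { SessionToken = sessionToken });
    }
}
```
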
If writes were performed in the order `A, B, C`, then a client sees either `A`,
Below are the consistency guarantees for Consistent Prefix: -- Consistency for clients in same region for an account with single write region = Consistent Prefix-- Consistency for clients in different regions for an account with single write region = Consistent Prefix-- Consistency for clients writing to a single region for an account with multiple write region = Consistent Prefix-- Consistency for clients writing to multiple regions for an account with multiple write region = Eventual
+- Consistency for clients in same region for an account with single write region = [Consistent Prefix](#consistent-prefix-consistency)
+- Consistency for clients in different regions for an account with single write region = [Consistent Prefix](#consistent-prefix-consistency)
+- Consistency for clients writing to a single region for an account with multiple write region = [Consistent Prefix](#consistent-prefix-consistency)
+- Consistency for clients writing to multiple regions for an account with multiple write region = [Eventual](#eventual-consistency)
The following graphic illustrates the consistent prefix consistency with musical notes. In all the regions, the reads never see out-of-order writes:
The following graphic illustrates the consistency prefix consistency with musica
### Eventual consistency In eventual consistency, there's no ordering guarantee for reads. In the absence of any further writes, the replicas eventually converge. + Eventual consistency is the weakest form of consistency because a client may read the values that are older than the ones it had read before. Eventual consistency is ideal where the application does not require any ordering guarantees. Examples include count of Retweets, Likes, or non-threaded comments. The following graphic illustrates the eventual consistency with musical notes. :::image type="content" source="media/consistency-levels/eventual-consistency.gif" alt-text="Illustration of eventual consistency":::
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
For example, if you have 1-TB of data in two regions then:
* Restore cost is calculated as (1000 * 0.15) = $150 per restore
+## Customer-managed keys
+
+See [How do customer-managed keys affect continuous backups?](./how-to-setup-cmk.md#how-do-customer-managed-keys-affect-continuous-backups) to learn:
+
+- How to configure your Azure Cosmos DB account when using customer-managed keys in conjunction with continuous backups.
+- How customer-managed keys affect restores.
+ ## Current limitations Currently the point in time restore functionality has the following limitations: * Only Azure Cosmos DB APIs for SQL and MongoDB are supported for continuous backup. Cassandra, Table, and Gremlin APIs are not yet supported.
-* Accounts with customer-managed keys are not supported to use continuous backup.
- * Multi-regions write accounts are not supported. * Azure Synapse Link and periodic backup mode can coexist in the same database account. However, analytical store data isn't included in backups and restores. When Synapse Link is enabled, Azure Cosmos DB will continue to automatically take backups of your data in the transactional store at a scheduled backup interval.
cosmos-db How To Setup Cmk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-cmk.md
Azure Cosmos DB takes [regular and automatic backups](./online-backup-and-restor
The following conditions are necessary to successfully restore a periodic backup: - The encryption key that you used at the time of the backup is required and must be available in Azure Key Vault. This means that no revocation was made and the version of the key that was used at the time of the backup is still enabled.-- If you [used a system-assigned managed identity in the Azure Key Vault access policy](#to-use-a-system-assigned-managed-identity) of the source account, you must temporarily grant access to the Azure Cosmos DB first-party identity in that access policy as described [here](#add-access-policy) before restoring your data. Once the data is fully restored to the target account, you can remove the first-party identity from the Key Vault access policy and set your desired identity configuration.
+- If you [used a system-assigned managed identity in the Azure Key Vault access policy](#to-use-a-system-assigned-managed-identity) of the source account, you must temporarily grant access to the Azure Cosmos DB first-party identity in that access policy as described [here](#add-access-policy) before restoring your data. This is because a system-assigned managed identity is specific to an account and cannot be re-used in the target account. Once the data is fully restored to the target account, you can set your desired identity configuration and remove the first-party identity from the Key Vault access policy.
+
+### How do customer-managed keys affect continuous backups?
+
+Azure Cosmos DB gives you the option to configure [continuous backups](./continuous-backup-restore-introduction.md) on your account. With continuous backups, you can restore your data to any point in time within the past 30 days. To use continuous backups on an account where customer-managed keys are enabled, you must [use a user-assigned managed identity](#to-use-a-user-assigned-managed-identity) in the Key Vault access policy; neither the Azure Cosmos DB first-party identity nor a system-assigned managed identity is currently supported on accounts using continuous backups.
+
+The following conditions are necessary to successfully perform a point-in-time restore:
+- The encryption key that you used at the time of the backup is required and must be available in Azure Key Vault. This means that no revocation was made and the version of the key that was used at the time of the backup is still enabled.
+- You must ensure that the user-assigned managed identity originally used on the source account is still declared in the Key Vault access policy.
+
+> [!IMPORTANT]
+> If you revoke the encryption key before deleting your account, your account's backup may miss the data written up to 1 hour before the revocation was made.
### How do I revoke an encryption key?
cosmos-db Migrate Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migrate-continuous-backup.md
The following are the key reasons to migrate into continuous mode:
> > * If the account is of type SQL API or API for MongoDB. > * If the account has a single write region.
-> * If the account isn't enabled with customer managed keys(CMK).
> * If the account isn't enabled with analytical store.
+>
+> If the account is using [customer-managed keys](./how-to-setup-cmk.md), a user-assigned managed identity must be declared in the Key Vault access policy and must be set as the default identity on the account.
## Permissions
Yes.
#### Which accounts can be targeted for backup migration? Currently, SQL API and API for MongoDB accounts with a single write region that have shared, provisioned, or autoscale provisioned throughput support migration.
-Accounts enabled with analytical storage, multiple-write regions, and Customer Managed Keys(CMK) are not supported for migration.
+Accounts enabled with analytical storage and multiple-write regions are not supported for migration.
#### Does the migration take time? What is the typical time? Migration takes time, and it depends on the size of data and the number of regions in your account. You can get the migration status using Azure CLI or PowerShell commands. For large accounts with tens of terabytes of data, the migration can take up to a few days to complete.
cosmos-db Feature Support 42 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-42.md
+
+ Title: 4.2 server version supported features and syntax in Azure Cosmos DB API for MongoDB
+description: Learn about Azure Cosmos DB API for MongoDB 4.2 server version supported features and syntax. Learn about the database commands, query language support, datatypes, aggregation pipeline commands, and operators supported.
+++ Last updated : 02/23/2022++++
+# Azure Cosmos DB API for MongoDB (4.2 server version): supported features and syntax
+
+Azure Cosmos DB is Microsoft's globally distributed multi-model database service, offering [multiple database APIs](../choose-api.md). You can communicate with the Azure Cosmos DB API for MongoDB using any of the open-source MongoDB client [drivers](https://docs.mongodb.org/ecosystem/drivers). The Azure Cosmos DB API for MongoDB enables the use of existing client drivers by adhering to the MongoDB [wire protocol](https://docs.mongodb.org/manual/reference/mongodb-wire-protocol).
+
+By using the Azure Cosmos DB API for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Cosmos DB provides: [global distribution](../distribute-data-globally.md), [automatic sharding](../partitioning-overview.md), availability and latency guarantees, encryption at rest, backups, and much more.
+
+## Protocol Support
+
+The supported operators and any limitations or exceptions are listed below. Any client driver that understands these protocols should be able to connect to Azure Cosmos DB API for MongoDB. When using Azure Cosmos DB API for MongoDB accounts, the 3.6+ versions of accounts have the endpoint in the format `*.mongo.cosmos.azure.com` whereas the 3.2 version of accounts has the endpoint in the format `*.documents.azure.com`.
+
+> [!NOTE]
+> This article only lists the supported server commands, and excludes client-side wrapper functions. Client-side wrapper functions such as `deleteMany()` and `updateMany()` internally utilize the `delete()` and `update()` server commands. Functions utilizing supported server commands are compatible with the Azure Cosmos DB API for MongoDB.
+
+## Query language support
+
+Azure Cosmos DB API for MongoDB provides comprehensive support for MongoDB query language constructs. Below you can find the detailed list of currently supported operations, operators, stages, commands, and options.
+
+## Database commands
+
+Azure Cosmos DB API for MongoDB supports the following database commands:
+
+### Query and write operation commands
+
+| Command | Supported |
+|||
+| [change streams](change-streams.md) | Yes |
+| delete | Yes |
+| eval | No |
+| find | Yes |
+| findAndModify | Yes |
+| getLastError | Yes |
+| getMore | Yes |
+| getPrevError | No |
+| insert | Yes |
+| parallelCollectionScan | No |
+| resetError | No |
+| update | Yes |
+
+### Transaction commands
+> [!NOTE]
+> Multi-document transactions are only supported within a single non-sharded collection. Cross-collection and cross-shard multi-document transactions are not yet supported in the API for MongoDB.
+
+| Command | Supported |
+|||
+| abortTransaction | Yes |
+| commitTransaction | Yes |
+
+### Authentication commands
+
+| Command | Supported |
+|||
+| authenticate | Yes |
+| getnonce | Yes |
+| logout | Yes |
+
+### Administration commands
+
+| Command | Supported |
+|||
+| cloneCollectionAsCapped | No |
+| collMod | No |
+| connectionStatus | No |
+| convertToCapped | No |
+| copydb | No |
+| create | Yes |
+| createIndexes | Yes |
+| currentOp | Yes |
+| drop | Yes |
+| dropDatabase | Yes |
+| dropIndexes | Yes |
+| filemd5 | Yes |
+| killCursors | Yes |
+| killOp | No |
+| listCollections | Yes |
+| listDatabases | Yes |
+| listIndexes | Yes |
+| reIndex | Yes |
+| renameCollection | No |
+
+### Diagnostics commands
+
+| Command | Supported |
+|||
+| buildInfo | Yes |
+| collStats | Yes |
+| connPoolStats | No |
+| connectionStatus | No |
+| dataSize | No |
+| dbHash | No |
+| dbStats | Yes |
+| explain | Yes |
+| features | No |
+| hostInfo | Yes |
+| listDatabases | Yes |
+| listCommands | No |
+| profiler | No |
+| serverStatus | No |
+| top | No |
+| whatsmyuri | Yes |
+
+## <a name="aggregation-pipeline"></a>Aggregation pipeline
+
+### Aggregation commands
+
+| Command | Supported |
+|||
+| aggregate | Yes |
+| count | Yes |
+| distinct | Yes |
+| mapReduce | No |
+
+### Aggregation stages
+
+| Command | Supported |
+|||
+| $addFields | Yes |
+| $bucket | No |
+| $bucketAuto | No |
+| $changeStream | Yes |
+| $collStats | No |
+| $count | Yes |
+| $currentOp | No |
+| $facet | Yes |
+| $geoNear | Yes |
+| $graphLookup | Yes |
+| $group | Yes |
+| $indexStats | No |
+| $limit | Yes |
+| $listLocalSessions | No |
+| $listSessions | No |
+| $lookup | Partial |
+| $match | Yes |
+| $merge | Yes |
+| $out | Yes |
+| $planCacheStats | Yes |
+| $project | Yes |
+| $redact | Yes |
+| $regexFind | Yes |
+| $regexFindAll | Yes |
+| $regexMatch | Yes |
+| $replaceRoot | Yes |
+| $replaceWith | Yes |
+| $sample | Yes |
+| $set | Yes |
+| $skip | Yes |
+| $sort | Yes |
+| $sortByCount | Yes |
+| $unset | Yes |
+| $unwind | Yes |
+
+> [!NOTE]
+> The `$lookup` aggregation does not yet support the [uncorrelated subqueries](https://docs.mongodb.com/manual/reference/operator/aggregation/lookup/#join-conditions-and-uncorrelated-sub-queries) feature introduced in server version 3.6. You will receive an error with a message containing `let is not supported` if you attempt to use the `$lookup` operator with `let` and `pipeline` fields.
+
+### Boolean expressions
+
+| Command | Supported |
+|||
+| $and | Yes |
+| $not | Yes |
+| $or | Yes |
+
+### Conversion expressions
+
+| Command | Supported |
+|||
+| $convert | Yes |
+| $toBool | Yes |
+| $toDate | Yes |
+| $toDecimal | Yes |
+| $toDouble | Yes |
+| $toInt | Yes |
+| $toLong | Yes |
+| $toObjectId | Yes |
+| $toString | Yes |
+
+### Set expressions
+
+| Command | Supported |
+|||
+| $setEquals | Yes |
+| $setIntersection | Yes |
+| $setUnion | Yes |
+| $setDifference | Yes |
+| $setIsSubset | Yes |
+| $anyElementTrue | Yes |
+| $allElementsTrue | Yes |
+
+### Comparison expressions
+
+> [!NOTE]
+> The API for MongoDB does not support comparison expressions with an array literal in the query.
+
+| Command | Supported |
+|||
+| $cmp | Yes |
+| $eq | Yes |
+| $gt | Yes |
+| $gte | Yes |
+| $lt | Yes |
+| $lte | Yes |
+| $ne | Yes |
+| $in | Yes |
+| $nin | Yes |
+
+### Arithmetic expressions
+
+| Command | Supported |
+|||
+| $abs | Yes |
+| $add | Yes |
+| $ceil | Yes |
+| $divide | Yes |
+| $exp | Yes |
+| $floor | Yes |
+| $ln | Yes |
+| $log | Yes |
+| $log10 | Yes |
+| $mod | Yes |
+| $multiply | Yes |
+| $pow | Yes |
+| $round | Yes |
+| $sqrt | Yes |
+| $subtract | Yes |
+| $trunc | Yes |
+
+### Trigonometry expressions
+
+| Command | Supported |
+|||
+| $acos | Yes |
+| $acosh | Yes |
+| $asin | Yes |
+| $asinh | Yes |
+| $atan | Yes |
+| $atan2 | Yes |
+| $atanh | Yes |
+| $cos | Yes |
+| $cosh | Yes |
+| $degreesToRadians | Yes |
+| $radiansToDegrees | Yes |
+| $sin | Yes |
+| $sinh | Yes |
+| $tan | Yes |
+| $tanh | Yes |
+
+### String expressions
+
+| Command | Supported |
+|||
+| $concat | Yes |
+| $indexOfBytes | Yes |
+| $indexOfCP | Yes |
+| $ltrim | Yes |
+| $rtrim | Yes |
+| $trim | Yes |
+| $split | Yes |
+| $strLenBytes | Yes |
+| $strLenCP | Yes |
+| $strcasecmp | Yes |
+| $substr | Yes |
+| $substrBytes | Yes |
+| $substrCP | Yes |
+| $toLower | Yes |
+| $toUpper | Yes |
+
+### Text search operator
+
+| Command | Supported |
+|||
+| $meta | No |
+
+### Array expressions
+
+| Command | Supported |
+|||
+| $arrayElemAt | Yes |
+| $arrayToObject | Yes |
+| $concatArrays | Yes |
+| $filter | Yes |
+| $indexOfArray | Yes |
+| $isArray | Yes |
+| $objectToArray | Yes |
+| $range | Yes |
+| $reverseArray | Yes |
+| $reduce | Yes |
+| $size | Yes |
+| $slice | Yes |
+| $zip | Yes |
+| $in | Yes |
+
+### Variable operators
+
+| Command | Supported |
+|||
+| $map | Yes |
+| $let | Yes |
+
+### System variables
+
+| Command | Supported |
+|||
+| $$CLUSTERTIME | Yes |
+| $$CURRENT | Yes |
+| $$DESCEND | Yes |
+| $$KEEP | Yes |
+| $$NOW | Yes |
+| $$PRUNE | Yes |
+| $$REMOVE | Yes |
+| $$ROOT | Yes |
+
+### Literal operator
+
+| Command | Supported |
+|||
+| $literal | Yes |
+
+### Date expressions
+
+| Command | Supported |
+|||
+| $dayOfYear | Yes |
+| $dayOfMonth | Yes |
+| $dayOfWeek | Yes |
+| $year | Yes |
+| $month | Yes |
+| $week | Yes |
+| $hour | Yes |
+| $minute | Yes |
+| $second | Yes |
+| $millisecond | Yes |
+| $dateToString | Yes |
+| $isoDayOfWeek | Yes |
+| $isoWeek | Yes |
+| $dateFromParts | Yes |
+| $dateToParts | Yes |
+| $dateFromString | Yes |
+| $isoWeekYear | Yes |
+
+### Conditional expressions
+
+| Command | Supported |
+|||
+| $cond | Yes |
+| $ifNull | Yes |
+| $switch | Yes |
+
+### Data type operator
+
+| Command | Supported |
+|||
+| $type | Yes |
+
+### Accumulator expressions
+
+| Command | Supported |
+|||
+| $sum | Yes |
+| $avg | Yes |
+| $first | Yes |
+| $last | Yes |
+| $max | Yes |
+| $min | Yes |
+| $push | Yes |
+| $addToSet | Yes |
+| $stdDevPop | Yes |
+| $stdDevSamp | Yes |
+
+### Merge operator
+
+| Command | Supported |
+|||
+| $mergeObjects | Yes |
+
+## Data types
+
+Azure Cosmos DB API for MongoDB supports documents encoded in MongoDB BSON format. The 4.2 API version enhances the internal usage of this format to improve performance and reduce costs. Documents written or updated through an endpoint running 4.2 benefit from this.
+
+In an [upgrade scenario](upgrade-mongodb-version.md), documents written prior to the upgrade to version 4.2 will not benefit from the enhanced performance until they are updated via a write operation through the 4.2 endpoint.
+
+| Command | Supported |
+|||
+| Double | Yes |
+| String | Yes |
+| Object | Yes |
+| Array | Yes |
+| Binary Data | Yes |
+| ObjectId | Yes |
+| Boolean | Yes |
+| Date | Yes |
+| Null | Yes |
+| 32-bit Integer (int) | Yes |
+| Timestamp | Yes |
+| 64-bit Integer (long) | Yes |
+| MinKey | Yes |
+| MaxKey | Yes |
+| Decimal128 | Yes |
+| Regular Expression | Yes |
+| JavaScript | Yes |
+| JavaScript (with scope)| Yes |
+| Undefined | Yes |
+
+## Indexes and index properties
+
+### Indexes
+
+| Command | Supported |
+|||
+| Single Field Index | Yes |
+| Compound Index | Yes |
+| Multikey Index | Yes |
+| Text Index | No |
+| 2dsphere | Yes |
+| 2d Index | No |
+| Hashed Index | Yes |
+
+### Index properties
+
+| Command | Supported |
+|||
+| TTL | Yes |
+| Unique | Yes |
+| Partial | No |
+| Case Insensitive | No |
+| Sparse | No |
+| Background | Yes |
+
+## Operators
+
+### Logical operators
+
+| Command | Supported |
+|||
+| $or | Yes |
+| $and | Yes |
+| $not | Yes |
+| $nor | Yes |
+
+### Element operators
+
+| Command | Supported |
+|||
+| $exists | Yes |
+| $type | Yes |
+
+### Evaluation query operators
+
+| Command | Supported |
+|||
+| $expr | No |
+| $jsonSchema | No |
+| $mod | Yes |
+| $regex | Yes |
+| $text | No (Not supported. Use $regex instead.)|
+| $where | No |
+
+In $regex queries, left-anchored expressions allow an index search. However, using the 'i' modifier (case-insensitivity) or the 'm' modifier (multiline) causes a collection scan for all expressions.
+
+When there's a need to include '$' or '|', it is best to create two (or more) regex queries. For example, given the following original query: `find({x:{$regex: /^abc$/}})`, it has to be modified as follows:
+
+`find({x:{$regex: /^abc/}, x:{$regex:/^abc$/}})`
+
+The first part will use the index to restrict the search to those documents beginning with ^abc, and the second part will match the exact entries. The bar operator '|' acts as an "or" function - the query `find({x:{$regex: /^abc|^def/}})` matches the documents in which field 'x' has values that begin with "abc" or "def". To utilize the index, it's recommended to break the query into two different queries joined by the $or operator: `find({$or: [{x: {$regex: /^abc/}}, {x: {$regex: /^def/}}]})`.
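+
+As an illustration only, the same split can be expressed from application code. The following minimal C# sketch uses the MongoDB .NET driver against a hypothetical `items` collection and field `x`; the connection string and names are placeholders:
+
+```csharp
+using MongoDB.Bson;
+using MongoDB.Driver;
+
+// Hypothetical database and collection; only the filter shape matters here.
+IMongoCollection<BsonDocument> items = new MongoClient("<connection-string>")
+    .GetDatabase("mydb")
+    .GetCollection<BsonDocument>("items");
+
+// Two left-anchored expressions joined with $or, so each one can use the index,
+// instead of a single /^abc|^def/ pattern.
+FilterDefinition<BsonDocument> filter = Builders<BsonDocument>.Filter.Or(
+    Builders<BsonDocument>.Filter.Regex("x", new BsonRegularExpression("^abc")),
+    Builders<BsonDocument>.Filter.Regex("x", new BsonRegularExpression("^def")));
+
+var matches = items.Find(filter).ToList();
+```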
+
+### Array operators
+
+| Command | Supported |
+|||
+| $all | Yes |
+| $elemMatch | Yes |
+| $size | Yes |
+
+### Comment operator
+
+| Command | Supported |
+|||
+| $comment | Yes |
+
+### Projection operators
+
+| Command | Supported |
+|||
+| $elemMatch | Yes |
+| $meta | No |
+| $slice | Yes |
+
+### Update operators
+
+#### Field update operators
+
+| Command | Supported |
+|||
+| $inc | Yes |
+| $mul | Yes |
+| $rename | Yes |
+| $setOnInsert | Yes |
+| $set | Yes |
+| $unset | Yes |
+| $min | Yes |
+| $max | Yes |
+| $currentDate | Yes |
+
+#### Array update operators
+
+| Command | Supported |
+|||
+| $ | Yes |
+| $[]| Yes |
+| $[\<identifier\>]| Yes |
+| $addToSet | Yes |
+| $pop | Yes |
+| $pullAll | Yes |
+| $pull | Yes |
+| $push | Yes |
+| $pushAll | Yes |
+
+#### Update modifiers
+
+| Command | Supported |
+|||
+| $each | Yes |
+| $slice | Yes |
+| $sort | Yes |
+| $position | Yes |
+
+#### Bitwise update operator
+
+| Command | Supported |
+|||
+| $bit | Yes |
+| $bitsAllSet | No |
+| $bitsAnySet | No |
+| $bitsAllClear | No |
+| $bitsAnyClear | No |
+
+### Geospatial operators
+
+| Operator | Supported |
+|||
+| $geoWithin | Yes |
+| $geoIntersects | Yes |
+| $near | Yes |
+| $nearSphere | Yes |
+| $geometry | Yes |
+| $minDistance | Yes |
+| $maxDistance | Yes |
+| $center | No |
+| $centerSphere | No |
+| $box | No |
+| $polygon | No |
+
+## Sort operations
+
+When using the `findOneAndUpdate` operation, sort operations on a single field are supported but sort operations on multiple fields are not supported.
+
+## Indexing
+The API for MongoDB [supports a variety of indexes](mongodb-indexing.md) to enable sorting on multiple fields, improve query performance, and enforce uniqueness.
+
+## Client-side field level encryption
+
+Client-side field level encryption is a driver feature and is compatible with the API for MongoDB. Explicit encryption, where the driver explicitly encrypts each field when it is written, is supported. Explicit decryption and automatic decryption are supported.
+
+You don't need to run mongocryptd, because it isn't required for any of the supported operations.
+
+## GridFS
+
+Azure Cosmos DB supports GridFS through any GridFS-compatible Mongo driver.
+
+## Replication
+
+Azure Cosmos DB supports automatic, native replication at the lowest layers. This logic is extended out to achieve low-latency, global replication as well. Cosmos DB does not support manual replication commands.
+
+## Retryable Writes
+
+Cosmos DB does not yet support retryable writes. Client drivers must add the 'retryWrites=false' URL parameter to their connection string. More URL parameters can be added by prefixing them with an '&'.
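+
+For illustration, a minimal C# sketch with the MongoDB .NET driver follows; the connection string is a placeholder, and the settings-based form is just an alternative way to express the same option:
+
+```csharp
+using MongoDB.Driver;
+
+// Placeholder connection string for an API for MongoDB account; note the retryWrites=false parameter.
+const string connectionString =
+    "mongodb://<account-name>:<account-key>@<account-name>.mongo.cosmos.azure.com:10255/" +
+    "?ssl=true&replicaSet=globaldb&retryWrites=false";
+
+MongoClient client = new MongoClient(connectionString);
+
+// Alternatively, the same option can be set through the client settings object.
+MongoClientSettings settings = MongoClientSettings.FromConnectionString(connectionString);
+settings.RetryWrites = false;
+MongoClient clientFromSettings = new MongoClient(settings);
+```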
+
+## Sharding
+
+Azure Cosmos DB supports automatic, server-side sharding. It manages shard creation, placement, and balancing automatically. Azure Cosmos DB does not support manual sharding commands, which means you don't have to invoke commands such as addShard, balancerStart, moveChunk etc. You only need to specify the shard key while creating the containers or querying the data.
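+
+For illustration, the following minimal C# sketch (MongoDB .NET driver) creates a sharded collection by specifying only the shard key; the database, collection, and key names are hypothetical:
+
+```csharp
+using MongoDB.Bson;
+using MongoDB.Driver;
+
+MongoClient client = new MongoClient("<connection-string>");
+IMongoDatabase database = client.GetDatabase("mydb");
+
+// Create a sharded (partitioned) collection by declaring only the shard key;
+// shard placement and balancing are handled on the server side.
+var shardCommand = new BsonDocument
+{
+    { "shardCollection", "mydb.orders" },
+    { "key", new BsonDocument { { "customerId", "hashed" } } }
+};
+database.RunCommand(new BsonDocumentCommand<BsonDocument>(shardCommand));
+```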
+
+## Sessions
+
+Azure Cosmos DB does not yet support server-side sessions commands.
+
+## Time-to-live (TTL)
+
+Azure Cosmos DB supports a time-to-live (TTL) based on the timestamp of the document. TTL can be enabled for collections by going to the [Azure portal](https://portal.azure.com).
+
+## Transactions
+
+Multi-document transactions are supported within an unsharded collection. Multi-document transactions are not supported across collections or in sharded collections. The timeout for transactions is a fixed 5 seconds.
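+
+As an illustration only, a minimal C# sketch of a multi-document transaction against a single unsharded collection with the MongoDB .NET driver might look like the following; the connection string, database, and collection names are placeholders:
+
+```csharp
+using MongoDB.Bson;
+using MongoDB.Driver;
+
+MongoClient client = new MongoClient("<connection-string>");
+IMongoCollection<BsonDocument> orders =
+    client.GetDatabase("mydb").GetCollection<BsonDocument>("orders"); // an unsharded collection
+
+using (IClientSessionHandle session = client.StartSession())
+{
+    session.StartTransaction();
+    try
+    {
+        // Both writes target the same unsharded collection, so they commit atomically.
+        orders.InsertOne(session, new BsonDocument { { "_id", "order-1" }, { "state", "created" } });
+        orders.UpdateOne(session,
+            Builders<BsonDocument>.Filter.Eq("_id", "order-1"),
+            Builders<BsonDocument>.Update.Set("state", "confirmed"));
+
+        session.CommitTransaction(); // must complete within the fixed 5-second timeout
+    }
+    catch
+    {
+        session.AbortTransaction();
+        throw;
+    }
+}
+```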
+
+## User and role management
+
+Azure Cosmos DB does not yet support users and roles. However, Cosmos DB supports Azure role-based access control (Azure RBAC) and read-write and read-only passwords/keys that can be obtained through the [Azure portal](https://portal.azure.com) (Connection String page).
+
+## Write Concern
+
+Some applications rely on a [Write Concern](https://docs.mongodb.com/manual/reference/write-concern/), which specifies the number of responses required during a write operation. Due to how Cosmos DB handles replication in the background, all writes are automatically written with quorum by default. Any write concern specified by the client code is ignored. Learn more in [Using consistency levels to maximize availability and performance](../consistency-levels.md).
+
+## Next steps
+
+- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB API for MongoDB.
+- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB API for MongoDB.
+- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB API for MongoDB.
+- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
+ - If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md).
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md).
cosmos-db Upgrade Mongodb Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/upgrade-mongodb-version.md
description: How to upgrade the MongoDB wire-protocol version for your existing
Previously updated : 08/26/2021 Last updated : 02/23/2022
This article describes how to upgrade the API version of your Azure Cosmos DB's
When upgrading to a new API version, start with development/test workloads before upgrading production workloads. It's important to upgrade your clients to a version compatible with the API version you are upgrading to before upgrading your Azure Cosmos DB API for MongoDB account. >[!Note]
-> At this moment, only qualifying accounts using the server version 3.2 can be upgraded to version 3.6 or 4.0. If your account doesn't show the upgrade option, please [file a support ticket](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
+> At this moment, only qualifying accounts using the server version 3.2 can be upgraded to version 3.6 and higher. If your account doesn't show the upgrade option, please [file a support ticket](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
+
+## Upgrading to 4.2, 4.0, or 3.6
+### Benefits of upgrading to version 4.2
+- Several major improvements to the aggregation pipeline, such as support for `$merge`, trigonometric and arithmetic expressions, and more.
+- Support for client-side field level encryption, which further secures your database by enabling individual fields to be selectively encrypted and keeping the encrypted data private from database users and hosting providers.
+
-## Upgrading to 4.0 or 3.6
### Benefits of upgrading to version 4.0
When upgrading from 3.2 to newer versions, [compound indexes](mongodb-indexing.m
:::image type="content" source="./media/upgrade-mongodb-version/upgrade-server-version.png" alt-text="Open the Features blade and upgrade your account." border="true":::
-1. Review the information displayed about the upgrade. Select `Set server version to 4.0` (or 3.6 depending upon your current version).
+1. Review the information displayed about the upgrade. Select `Set server version to 4.2` (or 4.0 or 3.6 depending upon your current version).
:::image type="content" source="./media/upgrade-mongodb-version/select-upgrade.png" alt-text="Review upgrade guidance and select upgrade." border="true":::
When upgrading from 3.2 to newer versions, [compound indexes](mongodb-indexing.m
## How to downgrade
-You may also downgrade your account from 4.0 to 3.6 via the same steps in the 'How to Upgrade' section.
+You may also downgrade your account to 4.0 or 3.6 via the same steps in the 'How to Upgrade' section.
-If you upgraded from 3.2 to (4.0 or 3.6) and wish to downgrade back to 3.2, you can simply switch back to using your previous (3.2) connection string with the host `accountname.documents.azure.com` which remains active post-upgrade running version 3.2.
+If you upgraded from 3.2 to a newer version and wish to downgrade back to 3.2, you can simply switch back to using your previous (3.2) connection string with the host `accountname.documents.azure.com`, which remains active post-upgrade running version 3.2.
## Next steps
+- Learn about the supported and unsupported [features of MongoDB version 4.2](feature-support-42.md).
- Learn about the supported and unsupported [features of MongoDB version 4.0](feature-support-40.md). - Learn about the supported and unsupported [features of MongoDB version 3.6](feature-support-36.md). - For further information check [Mongo 3.6 version features](https://devblogs.microsoft.com/cosmosdb/azure-cosmos-dbs-api-for-mongodb-now-supports-server-version-3-6/)
cosmos-db Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Cosmos DB description: Sample Azure Resource Graph queries for Azure Cosmos DB showing use of resource types and tables to access Azure Cosmos DB related resources and properties. Previously updated : 01/20/2022 Last updated : 02/16/2022
cosmos-db Migrate Dotnet V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-dotnet-v3.md
Previously updated : 02/18/2022 Last updated : 02/23/2022 ms.devlang: csharp
The following are some of the main class name changes:
|`Microsoft.Azure.Documents.Client.FeedOptions`|`Microsoft.Azure.Cosmos.QueryRequestOptions`| |`Microsoft.Azure.Documents.Client.StoredProcedure`|`Microsoft.Azure.Cosmos.StoredProcedureProperties`| |`Microsoft.Azure.Documents.Client.Trigger`|`Microsoft.Azure.Cosmos.TriggerProperties`|
+|`Microsoft.Azure.Documents.SqlQuerySpec`|`Microsoft.Azure.Cosmos.QueryDefinition`|
### Classes replaced on .NET v3 SDK
private static async Task ReadAllItems(DocumentClient client)
### Query items
+#### Changes to SqlQuerySpec (QueryDefinition in v3.0 SDK)
+
+The `SqlQuerySpec` class in SDK v2 has now been renamed to `QueryDefinition` in the SDK v3.
+
+`SqlParameterCollection` and `SqlParameter` have been removed. Parameters are now added to the `QueryDefinition` with a builder model using `QueryDefinition.WithParameter`. Users can access the parameters with `QueryDefinition.GetQueryParameters`.
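+
+As an illustration only, a minimal C# sketch of the v3 pattern might look like the following; the query text, parameter, and container are placeholders rather than code from this article:
+
+```csharp
+using Microsoft.Azure.Cosmos;
+
+class QueryMigrationSample
+{
+    public static FeedIterator<dynamic> QueryOpenItems(Container container)
+    {
+        // v2 (for comparison): new SqlQuerySpec("SELECT * FROM c WHERE c.status = @status",
+        //     new SqlParameterCollection { new SqlParameter("@status", "open") });
+
+        // v3: parameters are attached with the WithParameter builder method.
+        QueryDefinition query = new QueryDefinition("SELECT * FROM c WHERE c.status = @status")
+            .WithParameter("@status", "open");
+
+        // The declared parameters can be inspected if needed.
+        var parameters = query.GetQueryParameters();
+
+        return container.GetItemQueryIterator<dynamic>(query);
+    }
+}
+```
+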
# [.NET SDK v3](#tab/dotnet-v3)
cosmos-db Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link.md
Synapse Link enables you to run near real-time analytics over your mission-criti
* **Network isolation using private endpoints** - You can control network access to the data in the transactional and analytical stores independently. Network isolation is done using separate managed private endpoints for each store, within managed virtual networks in Azure Synapse workspaces. To learn more, see how to [Configure private endpoints for analytical store](analytical-store-private-endpoints.md) article.
-* **Data encryption with customer-managed keys** - You can seamlessly encrypt the data across transactional and analytical stores using the same customer-managed keys in an automatic and transparent manner. Azure Synapse Link only supports configuring customer-managed keys using your Azure Cosmos DB account's managed identity. You must configure your account's managed identity in your Azure Key Vault access policy before enabling Azure Synapse Link](configure-synapse-link.md#enable-synapse-link) on your account. To learn more, see how to [Configure customer-managed keys using Azure Cosmos DB accounts' managed identities](how-to-setup-cmk.md#using-managed-identity) article.
+* **Data encryption with customer-managed keys** - You can seamlessly encrypt the data across transactional and analytical stores using the same customer-managed keys in an automatic and transparent manner. Azure Synapse Link only supports configuring customer-managed keys using your Azure Cosmos DB account's managed identity. You must configure your account's managed identity in your Azure Key Vault access policy before [enabling Azure Synapse Link](configure-synapse-link.md#enable-synapse-link) on your account. To learn more, see how to [Configure customer-managed keys using Azure Cosmos DB accounts' managed identities](how-to-setup-cmk.md#using-managed-identity) article.
* **Secure key management** - Accessing the data in analytical store from Synapse Spark and Synapse serverless SQL pools requires managing Azure Cosmos DB keys within Synapse Analytics workspaces. Instead of using the Azure Cosmos DB account keys inline in Spark jobs or SQL scripts, Azure Synapse Link provides more secure capabilities:
data-catalog Data Catalog Common Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-common-scenarios.md
Title: Azure Data Catalog common scenarios description: An overview of common scenarios for Azure Data Catalog, including the registration and discovery of high-value data sources, enabling self-service business intelligence, and capturing existing knowledge about data sources and processes.--++ Previously updated : 08/01/2019 Last updated : 02/22/2022 # Azure Data Catalog common scenarios
Last updated 08/01/2019
This article presents common scenarios where Azure Data Catalog can help your organization get more value from its existing data sources. ## Scenario 1: Registration of central data sources+ Organizations often have many high-value data sources. These data sources include line-of-business, online transaction processing (OLTP) systems, data warehouses, and business intelligence/analytics databases. The number of systems, and the overlap between them, typically grows over time as business needs evolve and the business itself evolves through, for example, mergers and acquisitions. It can be difficult for organization members to know where to locate the data within these data sources. Questions like the following are all too common:
It can be difficult for organization members to know where to locate the data wi
* Who should I ask, or what is the process I should use to get access to the data warehouse?
* I don't know if these numbers are right. Who can I ask for insight on how this data is supposed to be used before I share this dashboard with my team?
-To these and other questions, Azure Data Catalog can provide answers. The central, high-value, IT-managed data sources that are used across organizations are often the logical starting point for populating the catalog. Although any user can register a data source, having the catalog kick-started with the data sources that are most likely to provide value to the largest number of users helps drive adoption and use of the system.
+To these and other questions, Azure Data Catalog can provide answers. The central, high-value, IT-managed data sources that are used across organizations are often the logical starting point for populating the catalog. Although any user can register a data source, having the catalog kick-started with the data sources that are most likely to provide value to the largest number of users helps drive adoption and use of the system.
-If you are getting started with Azure Data Catalog, identifying and registering key data sources that are used by many different teams of data consumers can be your first step to success.
+If you're getting started with Azure Data Catalog, identifying and registering key data sources that are used by many different teams of data consumers can be your first step to success.
-This scenario also presents an opportunity to annotate the high-value data sources to make them easier to understand and access. One key aspect of this effort is to include information on how users can request access to the data source. With Azure Data Catalog, you can provide the email address of the user or team that's responsible for controlling data-source access, links to existing tools or documentation, or free text that describes the access-request process. This information helps members who discover registered data sources but who do not yet have permissions to access the data to easily request access by using the processes that are defined and controlled by the data-source owners.
+This scenario also presents an opportunity to annotate the high-value data sources to make them easier to understand and access. One key aspect of this effort is to include information on how users can request access to the data source. With Azure Data Catalog, you can provide the email address of the user or team that's responsible for controlling data-source access, links to existing tools or documentation, or free text that describes the access-request process. This information helps members who discover registered data sources but who don't yet have permissions to access the data to easily request access by using the processes that are defined and controlled by the data-source owners.
## Scenario 2: Self-service business intelligence
-Although traditional corporate business-intelligence solutions continue to be an invaluable part of many organizationsΓÇÖ data landscapes, the changing pace of business has made self-service BI more and more important. By using self-service BI, information workers and analysts can create their own reports, workbooks, and dashboards without relying on a central IT team or being restricted by that IT teamΓÇÖs schedule and availability.
+
+Although traditional corporate business-intelligence solutions continue to be an invaluable part of many organizations' data landscapes, the changing pace of business has made self-service BI more important. By using self-service BI, information workers and analysts can create their own reports, workbooks, and dashboards without relying on a central IT team or being restricted by that IT team's schedule and availability.
In self-service BI scenarios, users commonly combine data from multiple sources, many of which might not have previously been used for BI and analysis. Although some of these data sources might already be known, it can be challenging to discover what to do to locate and evaluate potential data sources for a given task. Traditionally, this discovery process is a manual one: analysts use their peer network connections to identify others who work with the data being sought. After a data source is found and used, the process repeats itself again for each subsequent self-service BI effort, with multiple users performing a redundant manual process of discovery.
-With Azure Data Catalog, your organization can break this cycle of effort. After discovering a data source through traditional means, an analyst can register it to make it more easily discoverable by other users in the future. Although the analyst can add more value by annotating the registered data assets, this annotation does not need to take place at the same time as registration. Users can contribute over time, as their schedules permit, gradually adding value to the data sources registered in the catalog.
+With Azure Data Catalog, your organization can break this cycle of effort. After discovering a data source through traditional means, an analyst can register it to make it more easily discoverable by other users in the future. Although the analyst can add more value by annotating the registered data assets, this annotation doesn't need to take place at the same time as registration. Users can contribute over time, as their schedules permit, gradually adding value to the data sources registered in the catalog.
-This organic growth of the catalog content is a natural complement to the up-front registration of central data sources. Pre-populating the catalog with data that many users will need can be a motivator for initial use and discovery. Enabling users to register and annotate additional sources can be a way to keep them and other organization members engaged.
+This organic growth of the catalog content is a natural complement to the up-front registration of central data sources. Pre-populating the catalog with data that many users will need can be a motivator for initial use and discovery. Enabling users to register and annotate more sources can be a way to keep them and other organization members engaged.
It's worth noting that although this scenario focuses specifically on self-service BI, the same patterns and challenges apply to large-scale corporate BI projects as well. By using Data Catalog, your organization can improve any effort that involves a manual process of data-source discovery. ## Scenario 3: Capturing tribal knowledge+ How do you know what data you need to do your job, and where to find that data? If you've been in your job for a while, you probably just know. You've gone through a gradual learning process, and over time have learned about the data sources that are key to your day-to-day work.
data-catalog Data Catalog How To Business Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-business-glossary.md
Title: Set up the business glossary in Azure Data Catalog description: How-to article highlighting the business glossary in Azure Data Catalog for defining and using a common business vocabulary to tag registered data assets.--++ Previously updated : 08/01/2019 Last updated : 02/23/2022 # Set up the business glossary for governed tagging
The business glossary is available only in the Standard Edition of Azure Data Ca
You can access the business glossary via the **Glossary** option in the Data Catalog portal's navigation menu.
-![Data Catalog - Access the business glossary](./media/data-catalog-how-to-business-glossary/01-portal-menu.png)
Data Catalog administrators and members of the glossary administrators role can create, edit, and delete glossary terms in the business glossary. All Data Catalog users can view the term definitions and tag assets with glossary terms.
-![Data Catalog - Add a new glossary term](./media/data-catalog-how-to-business-glossary/02-new-term.png)
## Creating glossary terms
Data Catalog administrators and glossary administrators can create glossary term
By using the Data Catalog business glossary, an organization can describe its business vocabulary as a hierarchy of terms, and it can create a classification of terms that better represents its business taxonomy.
-A term must be unique at a given level of hierarchy. Duplicate names aren't allowed. There is no limit to the number of levels in a hierarchy, but a hierarchy is often more easily understood when there are three levels or fewer.
+A term must be unique at a given level of hierarchy. Duplicate names aren't allowed. There's no limit to the number of levels in a hierarchy, but a hierarchy is often more easily understood when there are three levels or fewer.
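+For example, a three-level hierarchy might run from a broad domain term such as *Finance*, to a subject area such as *Accounts Receivable*, to a specific term such as *Invoice Date* (illustrative terms, not a built-in glossary).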
The use of hierarchies in the business glossary is optional. Leaving the parent term field blank for glossary terms creates a flat (non-hierarchical) list of terms in the glossary.
The use of hierarchies in the business glossary is optional. Leaving the parent
After glossary terms have been defined within the catalog, the experience of tagging assets is optimized to search the glossary as a user types a tag. The Data Catalog portal displays a list of matching glossary terms to choose from. If the user selects a glossary term from the list, the term is added to the asset as a tag (also called a glossary tag). The user can also choose to create a new tag by typing a term that's not in the glossary (also called a user tag).
-![Data asset tagged with one user tag and two glossary tags](./media/data-catalog-how-to-business-glossary/03-tagged-asset.png)
> [!NOTE] > User tags are the only type of tag supported in the Free Edition of Data Catalog.
By using the business glossary in Azure Data Catalog, and the governed tagging i
## Next steps
-* [REST API documentation for business glossary operations](/rest/api/datacatalog/data-catalog-glossary)
+* [REST API documentation for business glossary operations](/rest/api/datacatalog/data-catalog-glossary)
data-catalog Data Catalog How To Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-connect.md
Title: How to connect to data sources in Azure Data Catalog description: How-to article highlighting how to connect to data sources discovered with Azure Data Catalog.--++ Previously updated : 08/01/2019 Last updated : 02/22/2022 # How to connect to data sources [!INCLUDE [Azure Purview redirect](../../includes/data-catalog-use-purview.md)] ## Introduction+ **Microsoft Azure Data Catalog** is a fully managed cloud service that serves as a system of registration and system of discovery for enterprise data sources. In other words, **Azure Data Catalog** is all about helping people discover, understand, and use data sources, and helping organizations to get more value from their existing data. A key aspect of this scenario is using the data – once a user discovers a data source and understands its purpose, the next step is to connect to the data source to put its data to use. ## Data source locations+ During data source registration, **Azure Data Catalog** receives metadata about the data source. This metadata includes the details of the data source’s location. The details of the location will vary from data source to data source, but it will always contain the information needed to connect. For example, the location for a SQL Server table includes the server name, database name, schema name, and table name, while the location for a SQL Server Reporting Services report includes the server name and the path to the report. Other data source types will have locations that reflect the structure and capabilities of the source system. ## Integrated client tools+ The simplest way to connect to a data source is to use the “Open in…” menu in the **Azure Data Catalog** portal. This menu displays a list of options for connecting to the selected data asset.
-When using the default tile view, this menu is available on the each tile.
+In the default tile view, this menu is available on each tile.
- ![Opening a SQL Server table in Excel from the data asset tile](./media/data-catalog-how-to-connect/data-catalog-how-to-connect1.png)
+ :::image type="content" source="./media/data-catalog-how-to-connect/data-catalog-how-to-connect1.png" alt-text="Opening a SQL Server table in Excel from the data asset tile by selecting the Open In tab.":::
-When using the list view, the menu is available in the search bar at the top of the portal window.
+In the list view, the menu is available in the search bar at the top of the portal window.
- ![Opening a SQL Server Reporting Services report in Report Manager](./media/data-catalog-how-to-connect/data-catalog-how-to-connect2.png)
## Supported Client Applications+ When using the “Open in…” menu for data sources in the Azure Data Catalog portal, the correct client application must be installed on the client computer. | Open in application | File extension / protocol | Supported application versions |
When using the “Open in…” menu for data sources in the Azure Data Catalog
| Report Manager |http:// |See [browser requirements for SQL Server Reporting Services](/sql/reporting-services/browser-support-for-reporting-services-and-power-view) | ## Your data, your tools
-The options available in the menu will depend on the type of data asset currently selected. Of course, not all possible tools will be included in the “Open in…” menu, but it is still easy to connect to the data source using any client tool. When a data asset is selected in the **Azure Data Catalog** portal, the complete location is displayed in the properties pane.
- ![Connection information for a SQL Server table](./media/data-catalog-how-to-connect/data-catalog-how-to-connect3.png)
+The options available in the menu will depend on the type of data asset currently selected. Not all possible tools will be included in the “Open in…” menu, but it's still easy to connect to the data source using any client tool. When a data asset is selected in the **Azure Data Catalog** portal, the complete location is displayed in the properties pane.
+ The connection information details will differ from data source type to data source type, but the information included in the portal will give you everything you need to connect to the data source in any client tool. Users can copy the connection details for the data sources that they have discovered using **Azure Data Catalog**, enabling them to work with the data in their tool of choice. ## Connecting and data source permissions
-Although **Azure Data Catalog** makes data sources discoverable, access to the data itself remains under the control of the data source owner or administrator. Discovering a data source in **Azure Data Catalog** does not give a user any permissions to access the data source itself.
-To make it easier for users who discover a data source but do not have permission to access its data, users can provide information in the Request Access property when annotating a data source. Information provided here, including links to the process or point of contact for gaining data source access, is presented alongside the data source location information in the portal.
+Although **Azure Data Catalog** makes data sources discoverable, access to the data remains under the control of the data source owner or administrator. Discovering a data source in **Azure Data Catalog** doesn't give a user any permissions to access the data source itself.
+
+To make it easier for users who discover a data source but don't have permission to access its data, users can provide information in the Request Access property when annotating a data source. Information provided here, including links to the process or point of contact for gaining data source access, is presented alongside the data source location information in the portal.
- ![Connection information with request access instructions provided](./media/data-catalog-how-to-connect/data-catalog-how-to-connect4.png)
## Summary+ Registering a data source with **Azure Data Catalog** makes that data discoverable by copying structural and descriptive metadata from the data source into the Catalog service. Once a data source has been registered and discovered, users can connect to the data source from the **Azure Data Catalog** portal "Open in…" menu or by using their data tools of choice. ## See also
-* [Get Started with Azure Data Catalog](data-catalog-get-started.md) tutorial for step-by-step details about how to connect to data sources.
+
+* [Get Started with Azure Data Catalog](data-catalog-get-started.md) tutorial for step-by-step details about how to connect to data sources.
data-factory Concepts Data Flow Performance Sinks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-performance-sinks.md
With Azure SQL Database, the default partitioning should work in most cases. The
### Best practice for deleting rows in sink based on missing rows in source
-Here is a video walk through of how to use data flows with exits, alter row, and sink transformations to achieve this common pattern:
+Here is a video walkthrough of how to use data flows with the exists, alter row, and sink transformations to achieve this common pattern:
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWMLr5]
When writing files, you have a choice of naming options that each have a perform
Selecting the **Default** option will write the fastest. Each partition will equate to a file with the Spark default name. This is useful if you are just reading from the folder of data.
-Setting a naming **Pattern** will rename each partition file to a more user-friendly name. This operation happens after write and is slightly slower than choosing the default. Per partition allows you to name each individual partition manually.
+Setting a naming **Pattern** will rename each partition file to a more user-friendly name. This operation happens after write and is slightly slower than choosing the default.
-If a column corresponds to how you wish to output the data, you can select **As data in column**. This reshuffles the data and can impact performance if the columns are not evenly distributed.
+**Per partition** allows you to name each individual partition manually.
+
+If a column corresponds to how you wish to output the data, you can select **Name file as column data**. This reshuffles the data and can impact performance if the columns are not evenly distributed.
+
+If a column corresponds to how you wish to generate folder names, select **Name folder as column data**.
**Output to single file** combines all the data into a single partition. This leads to long write times, especially for large datasets. This option is strongly discouraged unless there is an explicit business reason to use it.
-## CosmosDB sinks
+## Azure Cosmos DB sinks
-When writing to CosmosDB, altering throughput and batch size during data flow execution can improve performance. These changes only take effect during the data flow activity run and will return to the original collection settings after conclusion.
+When writing to Azure Cosmos DB, altering throughput and batch size during data flow execution can improve performance. These changes only take effect during the data flow activity run and will return to the original collection settings after conclusion.
**Batch size:** Usually, starting with the default batch size is sufficient. To further tune this value, calculate the rough object size of your data, and make sure that object size * batch size is less than 2MB. If it is, you can increase the batch size to get better throughput.
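For example, with documents that average roughly 1 KB, a batch size of 1,000 keeps object size * batch size at about 1 MB, comfortably under the 2-MB guideline, so the batch size could be increased (illustrative figures only; measure your own document sizes before tuning).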
-**Throughput:** Set a higher throughput setting here to allow documents to write faster to CosmosDB. Keep in mind the higher RU costs based upon a high throughput setting.
+**Throughput:** Set a higher throughput setting here to allow documents to write faster to Azure Cosmos DB. Keep in mind the higher RU costs based upon a high throughput setting.
**Write throughput budget:** Use a value which is smaller than total RUs per minute. If you have a data flow with a high number of Spark partitions, setting a budget throughput will allow more balance across those partitions.
data-factory Connector Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-rest.md
Previously updated : 01/13/2022 Last updated : 02/21/2022
This generic REST connector supports the following pagination patterns:
| Headers.*request_header* OR Headers['request_header'] | "request_header" is user-defined, which references one header name in the next HTTP request. | | EndCondition:*end_condition* | "end_condition" is user-defined, which indicates the condition that will end the pagination loop in the next HTTP request. | | MaxRequestNumber | Indicates the maximum pagination request number. Leaving it empty means there's no limit. |
-| SupportRFC5988 | RFC 5988 is supported in the pagination rules. By default, this is set to true. It will only be honored if no other pagination rules are defined.
+| SupportRFC5988 | By default, this is set to true if no pagination rule is defined. You can disable this rule by setting `supportRFC5988` to false or removing this property from the script. |
**Supported values** in pagination rules:
This generic REST connector supports the following pagination patterns:
| Headers.*response_header* OR Headers['response_header'] | "response_header" is user-defined, which references one header name in the current HTTP response, the value of which will be used to issue next request. | | A JSONPath expression starting with "$" (representing the root of the response body) | The response body should contain only one JSON object. The JSONPath expression should return a single primitive value, which will be used to issue next request. |
-**Example:**
+>[!NOTE]
+> The pagination rules in mapping data flows differ from those in copy activity in the following aspects:
+>1. Range is not supported in mapping data flows.
+>2. `['']` isn't supported in mapping data flows. Instead, use `{}` to escape special characters. For example, `body.{@odata.nextLink}`, whose JSON node `@odata.nextLink` contains the special character `.`.
+>3. The end condition is supported in mapping data flows, but the condition syntax differs from that in copy activity. `body` is used to indicate the response body instead of `$`, and `header` is used to indicate the response header instead of `headers`. Here are two examples showing this difference:
+> - Example 1:
+> Copy activity: **"EndCondition:$.data": "Empty"**
+> Mapping data flows: **"EndCondition:body.data": "Empty"**
+> - Example 2:
+> Copy activity: **"EndCondition:headers.complete": "Exist"**
+> Mapping data flows: **"EndCondition:header.complete": "Exist"**
+
+### Pagination rules examples
+
+This section provides a list of examples for pagination rules settings.
+
+#### Example 1: Variables in QueryParameters
+
+This example provides the configuration steps to send multiple requests whose variables are in QueryParameters.
+
+**Multiple requests:**
+```
+baseUrl/api/now/table/incident?sysparm_limit=1000&sysparm_offset=0,
+baseUrl/api/now/table/incident?sysparm_limit=1000&sysparm_offset=1000,
+......
+baseUrl/api/now/table/incident?sysparm_limit=1000&sysparm_offset=10000
+```
+
+*Step 1*: Input `sysparm_offset={offset}` either in **Base URL** or **Relative URL** as shown in the following screenshots:
+
+
+or
+
+
+*Step 2*: Set **Pagination rules** as either option 1 or option 2:
+
+- Option1: **"QueryParameters.{offset}" : "RANGE:0:10000:1000"**
+
+- Option2: **"AbsoluteUrl.{offset}" : "RANGE:0:10000:1000"**
++
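+Expressed in the copy activity source JSON, option 2 would look roughly like the following sketch (surrounding properties are abbreviated and the sink type is a placeholder):
+
+```json
+"typeProperties": {
+    "source": {
+        "type": "RestSource",
+        "paginationRules": {
+            "AbsoluteUrl.{offset}": "RANGE:0:10000:1000"
+        }
+    },
+    "sink": {
+        "type": "<sink type>"
+    }
+}
+```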
+#### Example 2: Variables in AbsoluteUrl
+
+This example provides the configuration steps to send multiple requests whose variables are in AbsoluteUrl.
+
+**Multiple requests:**
+```
+BaseUrl/api/now/table/t1
+BaseUrl/api/now/table/t2
+......
+BaseUrl/api/now/table/t100
+```
+
+*Step 1*: Input `{id}` either in **Base URL** in the linked service configuration page or **Relative URL** in the dataset connection pane.
+
+
+or
++
+*Step 2*: Set **Pagination rules** as **"AbsoluteUrl.{id}" :"RANGE:1:100:1"**.
+
+#### Example 3: Variables in Headers
+
+This example provides the configuration steps to send multiple requests whose variables are in Headers.
+
+**Multiple requests:**<br/>
+RequestUrl: *https://example/table*<br/>
+Request 1: `Header(id->0)`<br/>
+Request 2: `Header(id->10)`<br/>
+......<br/>
+Request 100: `Header(id->100)`<br/>
+
+*Step 1*: Input `{id}` in **Additional headers**.
+
+*Step 2*: Set **Pagination rules** as **"Headers.{id}" : "RARNGE:0:100:10"**.
++
+#### Example 4: Variables are in AbsoluteUrl/QueryParameters/Headers, the end variable is not pre-defined, and the end condition is based on the response
+
+This example provides the configuration steps to send multiple requests whose variables are in AbsoluteUrl/QueryParameters/Headers, but the end variable is not defined. Different end condition rule settings for different responses are shown in Examples 4.1-4.6.
+
+**Multiple requests:**
+
+```
+Request 1: baseUrl/api/now/table/incident?sysparm_limit=1000&sysparm_offset=0,
+Request 2: baseUrl/api/now/table/incident?sysparm_limit=1000&sysparm_offset=1000,
+Request 3: baseUrl/api/now/table/incident?sysparm_limit=1000&sysparm_offset=2000,
+......
+```
-Facebook Graph API returns response in the following structure, in which case next page's URL is represented in ***paging.next***:
+Two responses are encountered in this example:<br/>
+
+Response 1:
+
+```json
+{
+ Data: [
+ {key1: val1, key2: val2
+ },
+ {key1: val3, key2: val4
+ }
+ ]
+}
+```
+
+Response 2:
+
+```json
+{
+ Data: [
+ {key1: val5, key2: val6
+ },
+ {key1: val7, key2: val8
+ }
+ ]
+}
+```
+
+*Step 1*: Set the range in **Pagination rules** as in [Example 1](#example-1-variables-in-queryparameters) and leave the end of the range empty: **"AbsoluteUrl.{offset}": "RANGE:0::1000"**.
+
+*Step 2*: Set different end condition rules according to the structure of the last response. See the following examples:
+
+- **Example 4.1: The pagination ends when the value of the specific node in response is empty**
+
+ The REST API returns the last response in the following structure:
+
+ ```json
+ {
+ Data: []
+ }
+ ```
+ Set the end condition rule as **"EndCondition:$.data": "Empty"** to end the pagination when the value of the specific node in response is empty.
+
+ :::image type="content" source="media/connector-rest/pagination-rule-example-4-1.png" alt-text="Screenshot showing the EndCondition setting for Example 4.1.":::
+
+- **Example 4.2: The pagination ends when the value of the specific node in response does not exist**
+
+ The REST API returns the last response in the following structure:
+
+ ```json
+ {}
+ ```
+    Set the end condition rule as **"EndCondition:$.data": "NonExist"** to end the pagination when the value of the specific node in response does not exist.
+
+ :::image type="content" source="media/connector-rest/pagination-rule-example-4-2.png" alt-text="Screenshot showing the EndCondition setting for Example 4.2.":::
+
+- **Example 4.3: The pagination ends when the value of the specific node in response exists**
+
+ The REST API returns the last response in the following structure:
+
+ ```json
+ {
+ Data: [
+ {key1: val991, key2: val992
+ },
+ {key1: val993, key2: val994
+ }
+ ],
+ Complete: true
+ }
+ ```
+ Set the end condition rule as **"EndCondition:$.Complete": "Exist"** to end the pagination when the value of the specific node in response exists.
+
+ :::image type="content" source="media/connector-rest/pagination-rule-example-4-3.png" alt-text="Screenshot showing the EndCondition setting for Example 4.3.":::
+
+- **Example 4.4: The pagination ends when the value of the specific node in response is a user-defined const value**
+
+ The REST API returns the response in the following structure:
+ ```json
+ {
+ Data: [
+ {key1: val1, key2: val2
+ },
+ {key1: val3, key2: val4
+ }
+ ],
+ Complete: false
+ }
+ ```
+ ......
+
+ And the last response is in the following structure:
+
+ ```json
+ {
+ Data: [
+ {key1: val991, key2: val992
+ },
+ {key1: val993, key2: val994
+ }
+ ],
+ Complete: true
+ }
+ ```
+ Set the end condition rule as **"EndCondition:$.Complete": "Const:true"** to end the pagination when the value of the specific node in response is a user-defined const value.
+
+ :::image type="content" source="media/connector-rest/pagination-rule-example-4-4.png" alt-text="Screenshot showing the EndCondition setting for Example 4.4.":::
+
+- **Example 4.5: The pagination ends when the value of the header key in response equals a user-defined const value**
+
+ The header keys in REST API responses are shown in the structure below:
+
+ Response header 1: `header(Complete->0)`<br/>
+ ......<br/>
+ Last Response header: `header(Complete->1)`<br/>
+
+    Set the end condition rule as **"EndCondition:headers.Complete": "Const:1"** to end the pagination when the value of the header key in response is equal to the user-defined const value.
+
+ :::image type="content" source="media/connector-rest/pagination-rule-example-4-5.png" alt-text="Screenshot showing the EndCondition setting for Example 4.5.":::
+
+- **Example 4.6: The pagination ends when the key exists in the response header**
+
+ The header keys in REST API responses are shown in the structure below:
+
+ Response header 1: `header()`<br/>
+ ......<br/>
+ Last Response header: `header(CompleteTime->20220920)`<br/>
+
+ Set the end condition rule as **"EndCondition:headers.CompleteTime": "Exist"** to end the pagination when the key exists in the response header.
+
+ :::image type="content" source="media/connector-rest/pagination-rule-example-4-6.png" alt-text="Screenshot showing the EndCondition setting for Example 4.6.":::
+
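+Putting *Step 1* together with Example 4.1, the copy activity pagination rules would look roughly like the following sketch (swap the end condition for whichever of Examples 4.1-4.6 matches your API):
+
+```json
+"paginationRules": {
+    "AbsoluteUrl.{offset}": "RANGE:0::1000",
+    "EndCondition:$.data": "Empty"
+}
+```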
+#### Example 5: Set end condition to avoid endless requests when range rule is not defined
+
+This example provides the configuration steps to send multiple requests when the range rule is not used. The end condition can be set by referring to Examples 4.1-4.6 to avoid endless requests. The REST API returns a response in the following structure, in which case the next page's URL is represented in ***paging.next***.
```json {
Facebook Graph API returns response in the following structure, in which case ne
"next": "https://graph.facebook.com/me/albums?limit=25&after=MTAxNTExOTQ1MjAwNzI5NDE=" } }
+...
```-
-The corresponding REST copy activity source configuration especially the `paginationRules` is as follows:
+The last response is:
```json
-"typeProperties": {
- "source": {
- "type": "RestSource",
- "paginationRules": {
- "AbsoluteUrl": "$.paging.next"
+{
+ "data": [],
+ "paging": {
+ "cursors": {
+ "after": "MTAxNTExOTQ1MjAwNzI5NDE=",
+ "before": "NDMyNzQyODI3OTQw"
},
- ...
- },
- "sink": {
- "type": "<sink type>"
+ "previous": "https://graph.facebook.com/me/albums?limit=25&before=NDMyNzQyODI3OTQw",
+ "next": "Same with Last Request URL"
} } ```
-**Example: Pagination rules**
+*Step 1*: Set **Pagination rules** as **"AbsoluteUrl": "$.paging.next"**.
+
+*Step 2*: If `next` in the last response is always the same as the last request URL and not empty, endless requests will be sent. The end condition can be used to avoid this. Therefore, set the end condition rule by referring to Examples 4.1-4.6.
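+Combined, the pagination rules for this example would look roughly like the following sketch (the end condition assumes the empty `data` array shown in the last response; adjust it per Examples 4.1-4.6 if your API signals completion differently):
+
+```json
+"paginationRules": {
+    "AbsoluteUrl": "$.paging.next",
+    "EndCondition:$.data": "Empty"
+}
+```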
-If you want to send multiple sequence requests with one variable in a range, you can define a variable such as `{offset}`, `{id}` in AbsoluteUrl, Headers, QueryParameters, and define the range rule in pagination rules. See the following examples of pagination rules:
+#### Example 6: Set the max request number to avoid endless requests
-- **Example 1**
+Set **MaxRequestNumber** to avoid endless requests as shown in the following screenshot:
- You have multiple requests:
-
- ```
- baseUrl/api/now/table/incident?sysparm_limit=1000&sysparm_offset=0,
-
- baseUrl/api/now/table/incident?sysparm_limit=1000&sysparm_offset=1000,
-
- ......
-
- baseUrl/api/now/table/incident?sysparm_limit=1000&sysparm_offset=10000
- ```
- You need to specify the range pagination:
-
- `AbosoluteUrl = baseUrl/api/now/table/incident?sysparm_limit=1000&sysparm_offset={offset}`
- The pagination rule is: `QueryParameter.{offset} = RANGE:0:10000:1000`
+#### Example 7: The RFC 5988 pagination rule is supported by default
-- **Example 2**
+The backend will automatically get the next URL based on the RFC 5988 style links in the header.
- You have multiple requests:
- ```
- baseUrl/api/now/table/t1
-
- baseUrl/api/now/table/t2
-
- ......
+> [!TIP]
+> If you don't want to enable this default pagination rule, you can set `supportRFC5988` to `false` or just delete it in the script.
+>
+> :::image type="content" source="media/connector-rest/pagination-rule-example-7-disable-rfc5988.png" alt-text="Screenshot showing how to disable RFC 5988 setting for Example 7.":::
+
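+For reference, disabling the rule in the copy activity source JSON would look roughly like the following sketch (the value is written as a string here, matching the other pagination rule values in this article; confirm the exact form in your pipeline's JSON):
+
+```json
+"paginationRules": {
+    "supportRFC5988": "false"
+}
+```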
+#### Example 8: The next request URL is from the response body when using pagination in mapping data flows
+
+This example shows how to set the pagination rule and the end condition rule in mapping data flows when the next request URL is from the response body.
+
+The response schema is shown below:
++
+The pagination rules should be set as the following screenshot:
++
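+In rule form, the setting shown in the screenshot is, in effect, **"AbsoluteUrl" : "body.{@odata.nextLink}"** (a sketch that follows the mapping data flow syntax described in the note earlier in this section).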
+By default, the pagination will stop when **body.{@odata.nextLink}** is null or empty.
+
+But if the value of **@odata.nextLink** in the last response body is equal to the last request URL, it will lead to an endless loop. To avoid this condition, define end condition rules.
+
+- If **Value** in the last response is **Empty**, then the end condition rule can be set as below:
+
+ :::image type="content" source="media/connector-rest/pagination-rule-example-8-end-condition-1.png" alt-text="Screenshot showing setting the end condition rule when the last response is empty.":::
- baseUrl/api/now/table/t100
- ```
- You need to specify the range pagination:
+- If a value of true for the complete key in the response header indicates the end of pagination, then the end condition rule can be set as below:
+
+ :::image type="content" source="media/connector-rest/pagination-rule-example-8-end-condition-2.png" alt-text="Screenshot showing setting the end condition rule when the complete key in the response header equals to true indicates the end of pagination.":::
+
+#### Example 9: The response format is XML and the next request URL is from the response body when using pagination in mapping data flows
+
+This example shows how to set the pagination rule in mapping data flows when the response format is XML and the next request URL is from the response body. As shown in the following screenshot, the first URL is *https://\<user\>.dfs.core.windows.net/bugfix/test/movie_1.xml*
++++
+The response schema is shown below:
++
+The pagination rule syntax is the same as in Example 8 and should be set as below in this example:
- `AbosoluteUrl = baseUrl/api/now/table/t{id}`
- The pagination rule is: `AbsoluteUrl.{id} = RANGE:1:100:1`
## Use OAuth This section describes how to use a solution template to copy data from REST connector into Azure Data Lake Storage in JSON format using OAuth.
data-factory Connector Teamdesk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-teamdesk.md
Previously updated : 02/17/2022 Last updated : 02/23/2022 # Transform data in TeamDesk (Preview) using Azure Data Factory or Synapse Analytics
Use the following steps to create a TeamDesk linked service in the Azure portal
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory U I.":::
# [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse U I.":::
2. Search for TeamDesk (Preview) and select the TeamDesk (Preview) connector.
data-factory Scenario Ssis Migration Ssisdb Mi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scenario-ssis-migration-ssisdb-mi.md
When migrating database workloads from a SQL Server instance to Azure SQL Manage
This article focuses on the migration of SQL Server Integration Service (SSIS) packages stored in SSIS catalog (SSISDB) and SQL Server Agent jobs that schedule SSIS package executions.
-## Migrate SSIS catalog (SSISDB)
+## Migrate packages in SSIS catalog (SSISDB)
-SSISDB migration can be done using DMS, as described in the article:
+Database Migration Service can migrate SSIS packages stored in SSISDB, as described in the article:
[Migrate SSIS packages to SQL Managed Instance](../dms/how-to-migrate-ssis-packages-managed-instance.md). ## SSIS jobs to SQL Managed Instance agent
Since a migration tool for SSIS jobs is not yet available, they have to be migra
## Next steps - [Connect to SSISDB in Azure](/sql/integration-services/lift-shift/ssis-azure-connect-to-catalog-database)-- [Run SSIS packages deployed in Azure](/sql/integration-services/lift-shift/ssis-azure-run-packages)
+- [Run SSIS packages deployed in Azure](/sql/integration-services/lift-shift/ssis-azure-run-packages)
databox-online Azure Stack Edge Deploy Check Network Readiness https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-deploy-check-network-readiness.md
+
+ Title: Check network readiness for Azure Stack Edge using Azure Stack Network Readiness Checker
+description: Pre-qualify your network before deploying Azure Stack Edge Pro GPU, Pro R, or Mini R device using Azure Stack Network Readiness Checker.
++++++ Last updated : 02/22/2022++
+# Customer intent: As an IT admin, I want to save time and avoid Support calls during deployment of Azure Stack Edge devices by verifying network settings in advance.
++
+# Check network readiness for Azure Stack Edge devices
++
+This article describes how to check whether your network is ready for deployment of Azure Stack Edge devices.
+
+You'll use the Azure Stack Network Readiness Checker, a PowerShell tool that runs a series of tests to check mandatory and optional settings on the network where you deploy your Azure Stack Edge devices. The tool returns Pass/Fail status for each test and saves a log file and report file with more detail.
+
+You can run the tool from any computer on the network where you'll deploy the Azure Stack Edge devices. The tool works with PowerShell 5.1, which is built into Windows.
+
+## About the tool
+
+The Azure Stack Network Readiness Checker can check whether a network meets the following prerequisites:
+
+- The Domain Name System (DNS) server is available and functioning.
+
+- The Network Time Protocol (NTP) server is available and functioning.
+
+- Azure endpoints are available and respond on HTTPS, with or without a proxy server.
+
+- The Windows Update server - either the customer-provided Windows Server Update Services (WSUS) server or the public Windows Update server - is available and functioning.
+
+- The network path has a Maximum Transmission Unit (MTU) of at least 1,500 bytes, as required by the Azure Stack Edge service.
+
+- There are no overlapping IP addresses for Edge Compute.
+
+- DNS resource records for Azure Stack Edge can be resolved.
+
+#### Report file
+
+The tool saves a report, `AzsReadinessCheckerReport.json`, with detailed diagnostics that are collected during each test. This information can be helpful if you need to [contact Microsoft Support](azure-stack-edge-contact-microsoft-support.md).
+
+For example, the report provides:
+
+- A list of network adapters on the machine used to run the tests, with the driver version, MAC address, and connection state for each network adapter.
+
+- IP configuration of the machine used to run the tests.
+
+- Detailed DNS response properties that the DNS server returned for each test.
+
+- Detailed HTTP response for each test of a URL.
+
+- Network route trace for each test.
+
+## Prerequisites
+
+Before you begin, complete the following tasks:
+
+- Review network requirements in the [Deployment checklist for your Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-deploy-checklist.md).
+
+- Make sure you have access to a client computer that is running on the network where you'll deploy your Azure Stack Edge devices.
+
+- Install the Azure Stack Network Readiness Checker tool in PowerShell by following the steps in [Install Network Readiness Checker](#install-network-readiness-checker), below.
++
+## Install Network Readiness Checker
+
+To install the Azure Stack Network Readiness Checker on the client computer, do these steps:
+
+1. Open PowerShell on the client computer. If you need to install PowerShell, see [Installing PowerShell on Windows](/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.2&preserve-view=true).
+
+1. In a browser, go to [Microsoft.AzureStack.ReadinessChecker](https://www.powershellgallery.com/packages/Microsoft.AzureStack.ReadinessChecker/1.2100.1780.756) in the PowerShell Gallery. Version 1.2100.1780.756 of the Microsoft.AzureStack.ReadinessChecker module is displayed.
+
+1. On the **Install Module** tab, select the Copy icon to copy the Install-Module command that installs that version of the Microsoft.AzureStack.ReadinessChecker module.
+
+ ![Screenshot showing the download page for the Azure Stack Edge Network Readiness Checker tool. The Install Module tab and the Copy icon are highlighted.](./media/azure-stack-edge-deploy-check-network-readiness/network-readiness-checker-install-tool.png)
+
+1. Paste in the command at the PowerShell command prompt, and press **Enter**.
+
+1. Press **Y** (Yes) or **A** (Yes to All) at the following prompt to install the module.
+
+ ```powershell
+ Untrusted repository
+ You are installing the modules from an untrusted repository. If you trust this repository, change its InstallationPolicy value by running the Set-PSRepository cmdlet. Are you sure you want to install the modules from 'PSGallery'?
+ [Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "N"):
+ ```
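+The copied command typically has the following form (a sketch; use the exact version pin shown on the gallery page):
+
+```powershell
+# Install the Azure Stack Network Readiness Checker module from the PowerShell Gallery
+Install-Module -Name Microsoft.AzureStack.ReadinessChecker -RequiredVersion 1.2100.1780.756
+```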
++
+## Run a network readiness check
+
+When you run the Azure Stack Network Readiness Checker tool, you'll need to provide network and device information from the [Deployment checklist for your Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-deploy-checklist.md).
+
+To run a network readiness check, do these steps:
+
+1. Open PowerShell on a client computer running on the network where you'll deploy the Azure Stack Edge device.
+
+1. Run a network readiness check by entering the following command:
+
+ ```powershell
+ Invoke-AzsNetworkValidation -DnsServer <string[]> -DeviceFqdn <string> [-TimeServer <string[]>] `
+    [-Proxy <uri>] [-ProxyCredential <pscredential>] [-WindowsUpdateServer <uri[]>] [-ComputeIPs <string>] [-CustomUrl <url[]>] `
+ [-AzureEnvironment {AzureCloud | AzureChinaCloud | AzureGermanCloud | AzureUSGovernment | CustomCloud}] `
+ [-SkipTests {LinkLayer | IPConfig | DnsServer | TimeServer | PathMtu | DuplicateIP | AzureEndpoint | WindowsUpdateServer | DnsRegistration}] `
+ [-OutputPath <string>]
+ ```
+
+ To get meaningful Network Readiness Checker results that find key issues in your network setup, you need to include all of the following parameters that apply to your environment.
+
+ |Parameter|Description|
+ ||--|
+ |`-DnsServer`|IP addresses of the DNS servers (for example, your primary and secondary DNS servers).|
+ |`-DeviceFqdn`|Fully qualified domain name (FQDN) that you plan to use for the Azure Stack Edge device.|
+ |`-TimeServer`|FQDN of one or more Network Time Protocol (NTP) servers. (Recommended)|
+ |`-Proxy`|URI for the proxy server, if you're using a proxy server. (Optional)|
+ |`-ProxyCredential`|[PSCredential object](/powershell/module/microsoft.powershell.security/get-credential) containing the username and password used on the proxy server. (Required if proxy server requires user authentication)|
+ |`-WindowsUpdateServer`|URIs for one or more Windows Server Update Services (WSUS) servers. (Optional)|
+ |`-ComputeIPs`|The Compute IP range to be used by Kubernetes. Specify the Start IP and End IP separated by a hyphen.|
+ |`-CustomUrl`|Lists other URLs that you want to test HTTP access to. (Optional)|
+ |`-AzureEnvironment`|Indicates the Azure environment. Required if the device is deployed to an environment other than the Azure public cloud (Azure Cloud).|
+ |`-SkipTests`|Can be used to exclude tests. (Optional)<br>Separate test names with a comma.|
+ |`-OutputPath`|Tells where to store the log file and report from the tests. (Optional)<br>If you don't use this path, the files are stored in the following path: `C:\Users\<username>\AppData\Local\Temp\AzsReadinessChecker\`<br>Each run of the Network Readiness Checker overwrites the existing report.|
+
+## Sample output
+
+The following samples are the output from successful and unsuccessful runs of the Azure Stack Network Readiness Checker tool.
+
+### Sample output: Successful test
+
+The following sample is the output from a successful run of the Network Readiness Checker tool with these parameters:
+
+```powershell
+Invoke-AzsNetworkValidation -DnsServer '10.50.10.50', '10.50.50.50' -DeviceFqdn 'aseclient.contoso.com' -TimeServer 'pool.ntp.org' -Proxy 'http://proxy.contoso.com:3128/' -SkipTests DuplicateIP -WindowsUpdateServer 'http://ase-prod.contoso.com' -OutputPath C:\ase-network-tests
+```
+
+The tool returned this output:
+
+```powershell
+PS C:\Users\Administrator> Invoke-AzsNetworkValidation -DnsServer '10.50.10.50', '10.50.50.50' -DeviceFqdn 'aseclient.contoso.com' -TimeServer 'pool.ntp.org' -Proxy 'http://proxy.contoso.com:3128/' -SkipTests DuplicateIP -WindowsUpdateServer 'http://ase-prod.contoso.com' -OutputPath C:\ase-network-tests
+
+Invoke-AzsNetworkValidation v1.2100.1396.426 started.
+The following tests will be executed: LinkLayer, IPConfig, DnsServer, PathMtu, TimeServer, AzureEndpoint, WindowsUpdateServer, DnsRegistration, Proxy
+Validating input parameters
+Validating Azure Stack Edge Network Readiness
+ Link Layer: OK
+ IP Configuration: OK
+ Using network adapter name 'vEthernet (corp-1g-Static)', description 'Hyper-V Virtual Ethernet Adapter'
+ DNS Server 10.50.10.50: OK
+ DNS Server 10.50.50.50: OK
+ Network Path MTU: OK
+ Time Server pool.ntp.org: OK
+ Proxy Server 10.57.48.80: OK
+ Azure ARM Endpoint: OK
+ Azure Graph Endpoint: OK
+ Azure Login Endpoint: OK
+ Azure ManagementService Endpoint: OK
+ Azure AseService Endpoint: OK
+ Azure AseServiceBus Endpoint: OK
+ Azure AseStorageAccount Endpoint: OK
+ Windows Update Server ase-prod.contoso.com port 80: OK
+ DNS Registration for aseclient.contoso.com: OK
+ DNS Registration for login.aseclient.contoso.com: OK
+ DNS Registration for management.aseclient.contoso.com: OK
+ DNS Registration for *.blob.aseclient.contoso.com: OK
+ DNS Registration for compute.aseclient.contoso.com: OK
+
+Log location (contains PII): C:\ase-network-tests\AzsReadinessChecker.log
+Report location (contains PII): C:\ase-network-tests\AzsReadinessCheckerReport.json
+Invoke-AzsNetworkValidation Completed
+```
+
+### Sample output: Failed test
+
+If a test fails, the Network Readiness Checker returns information to help you resolve the issue, as shown in the sample output below.
+
+The following sample is the output from this command:
+
+```powershell
+Invoke-AzsNetworkValidation -DnsServer '10.50.10.50' -TimeServer 'time.windows.com' -DeviceFqdn aseclient.contoso.com -ComputeIPs 10.10.52.1-10.10.52.20 -CustomUrl 'http://www.nytimes.com','http://fakename.fakeurl.com'
+```
+
+The tool returned this output:
+
+```powershell
+PS C:\Users\Administrator> Invoke-AzsNetworkValidation -DnsServer '10.50.10.50' -TimeServer 'time.windows.com' -DeviceFqdn aseclient.contoso.com -ComputeIPs 10.10.52.1-10.10.52.20 -CustomUrl 'http://www.nytimes.com','http://fakename.fakeurl.com'
+
+Invoke-AzsNetworkValidation v1.2100.1396.426 started.
+Validating input parameters
+The following tests will be executed: LinkLayer, IPConfig, DnsServer, PathMtu, TimeServer, AzureEndpoint, WindowsUpdateServer, DuplicateIP, DnsRegistration, CustomUrl
+Validating Azure Stack Edge Network Readiness
+ Link Layer: OK
+ IP Configuration: OK
+ DNS Server 10.50.10.50: OK
+ Network Path MTU: OK
+ Time Server time.windows.com: OK
+ Azure ARM Endpoint: OK
+ Azure Graph Endpoint: OK
+ Azure Login Endpoint: OK
+ Azure ManagementService Endpoint: OK
+ Azure AseService Endpoint: OK
+ Azure AseServiceBus Endpoint: OK
+ Azure AseStorageAccount Endpoint: OK
+ URL http://www.nytimes.com/: OK
+ URL http://fakename.fakeurl.com/: Fail
+ Windows Update Server windowsupdate.microsoft.com port 80: OK
+ Windows Update Server update.microsoft.com port 80: OK
+ Windows Update Server update.microsoft.com port 443: OK
+ Windows Update Server download.windowsupdate.com port 80: OK
+ Windows Update Server download.microsoft.com port 443: OK
+ Windows Update Server go.microsoft.com port 80: OK
+ Duplicate IP: Warning
+ DNS Registration for aseclient.contoso.com: OK
+ DNS Registration for login.aseclient.contoso.com: Fail
+ DNS Registration for management.aseclient.contoso.com: Fail
+ DNS Registration for *.blob.aseclient.contoso.com: Fail
+ DNS Registration for compute.aseclient.contoso.com: Fail
+Details:
+[-] URL http://fakename.fakeurl.com/: fakename.fakeurl.com : DNS name does not exist
+[-] Duplicate IP: Some IP addresses allocated to Azure Stack may be active on the network. Check the output log for the detailed list.
+[-] DNS Registration for login.aseclient.contoso.com: login.aseclient.contoso.com : DNS name does not exist
+[-] DNS Registration for management.aseclient.contoso.com: management.aseclient.contoso.com : DNS name does not exist
+[-] DNS Registration for *.blob.aseclient.contoso.com: testname.aseclient.contoso.com : DNS name does not exist
+[-] DNS Registration for compute.aseclient.contoso.com: compute.aseclient.contoso.com : DNS name does not exist
+Additional help URL http://aka.ms/azsnrc
+
+Log location (contains PII): C:\Users\[*redacted*]\AppData\Local\Temp\AzsReadinessChecker\AzsReadinessChecker.log
+Report location (contains PII): C:\Users\[*redacted*]\AppData\Local\Temp\AzsReadinessChecker\AzsReadinessCheckerReport.json
+Invoke-AzsNetworkValidation Completed
+```
+
+## Review log and report
+
+For more information, you can review the log and report. By default, both files are saved in the following location:
+
+- Log: `C:\Users\<username>\AppData\Local\Temp\AzsReadinessChecker\AzsReadinessChecker.log`
+- Report: `C:\Users\<username>\AppData\Local\Temp\AzsReadinessChecker\AzsReadinessCheckerReport.json`
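+If you prefer to inspect the report programmatically, a minimal sketch (assuming the default output location) is:
+
+```powershell
+# Load the readiness report from the default output folder and list its top-level properties
+$reportPath = Join-Path $env:TEMP 'AzsReadinessChecker\AzsReadinessCheckerReport.json'
+Get-Content $reportPath -Raw | ConvertFrom-Json | Format-List
+```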
+
+## Next steps
+
+- Learn how to connect to your Azure Stack Edge device: [Pro GPU device](azure-stack-edge-gpu-deploy-connect.md), [Pro R device](azure-stack-edge-pro-r-deploy-connect.md), [Mini R device](azure-stack-edge-mini-r-deploy-connect.md).
+- Review a deployment checklist for your device: [Pro GPU checklist](azure-stack-edge-gpu-deploy-checklist.md), [Pro R checklist](azure-stack-edge-pro-r-deploy-checklist.md), [Mini R checklist](azure-stack-edge-mini-r-deploy-checklist.md).
+- [Contact Microsoft Support](azure-stack-edge-contact-microsoft-support.md).
databox-online Azure Stack Edge Gpu Deploy Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-checklist.md
Previously updated : 01/28/2022 Last updated : 02/23/2022 zone_pivot_groups: azure-stack-edge-device-deployment
Use the following checklist to ensure you have this information after you've p
| Stage | Parameter | Details | |--|-|-|
-| Device management | <li>Azure subscription</li><li>Resource providers registered</li><li>Azure Storage account</li>|<li>Enabled for Azure Stack Edge, owner or contributor access.</li><li>In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.EdgeOrder` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads.</li><li>Need access credentials</li> |
-| Device installation | Power cables in the package. <br>For US, an SVE 18/3 cable rated for 125 V and 15 Amps with a NEMA 5-15P to C13 (input to output) connector is shipped. | For more information, see the list of [Supported power cords by country](azure-stack-edge-technical-specifications-power-cords-regional.md) |
-| | <li>At least one 1-GbE RJ-45 network cable for Port 1 </li><li> At least one 25/10-GbE SFP+ copper cable for Port 3, Port 4, Port 5, or Port 6</li>| Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards from Cavium, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/)<br>For a full list of supported cables and modules for 25 GbE and 10 GbE from Mellanox, see [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).|
-| First-time device connection | <li>Laptop whose IPv4 settings can be changed. This laptop connects to Port 1 via a switch or a USB to Ethernet adaptor. </li><!--<li> A minimum of 1 GbE switch must be used for the device once the initial setup is complete. The local web UI will not be accessible if the connected switch is not at least 1 Gbe.</li>-->| |
+| Device management | <ul><li>Azure subscription</li><li>Resource providers registered</li><li>Azure Storage account</li></ul>|<ul><li>Enabled for Azure Stack Edge, owner or contributor access.</li><li>In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.EdgeOrder` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads.</li><li>Need access credentials.</li></ul> |
+| Device installation | Power cables in the package. <br>For US, an SVE 18/3 cable rated for 125 V and 15 Amps with a NEMA 5-15P to C13 (input to output) connector is shipped. | For more information, see the list of [Supported power cords by country](azure-stack-edge-technical-specifications-power-cords-regional.md). |
+| | <ul><li>At least one 1-GbE RJ-45 network cable for Port 1 </li><li>At least one 25/10-GbE SFP+ copper cable for Port 3, Port 4, Port 5, or Port 6</li></ul>| Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards from Cavium, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/).<br>For a full list of supported cables and modules for 25 GbE and 10 GbE from Mellanox, see [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).|
+| Network readiness | Check to see how ready your network is for the deployment of an Azure Stack Edge device. |[Use the Azure Stack Network Readiness Checker](azure-stack-edge-deploy-check-network-readiness.md) to test all needed connections. |
+| First-time device connection | Laptop whose IPv4 settings can be changed. <!--<li> A minimum of 1 GbE switch must be used for the device once the initial setup is complete. The local web UI will not be accessible if the connected switch is not at least 1 Gbe.</li>-->| This laptop connects to Port 1 via a switch or a USB to Ethernet adaptor. |
| Device sign-in | Device administrator password, between 8 and 16 characters, including three of the following character types: uppercase, lowercase, numeric, and special characters. | Default password is *Password1*, which expires at first sign-in. |
-| Network settings | Device comes with 2 x 1-GbE, 4 x 25-GbE network ports. <li>Port 1 is used for initial configuration only. One or more data ports can be connected and configured. </li><li> At least one data network interface from among Port 2 - Port 6 needs to be connected to the Internet (with connectivity to Azure).</li><li> DHCP and static IPv4 configuration supported. | Static IPv4 configuration requires IP, DNS server, and default gateway. |
-| Advanced networking settings | <li>Require 2 free, static, contiguous IPs for Kubernetes nodes, and one static IP for IoT Edge service.</li><li>Require one additional IP for each extra service or module that you'll deploy.</li>| Only static IPv4 configuration is supported.|
-| (Optional) Web proxy settings | <li>Web proxy server IP/FQDN, port </li><li>Web proxy username, password</li> | |
+| Network settings | Device comes with 2 x 1-GbE, 4 x 25-GbE network ports. <ul><li>Port 1 is used for initial configuration only. One or more data ports can be connected and configured. </li><li>At least one data network interface from among Port 2 - Port 6 needs to be connected to the Internet (with connectivity to Azure).</li><li>DHCP and static IPv4 configuration supported.</li></ul> | Static IPv4 configuration requires IP, DNS server, and default gateway. |
+| Advanced networking settings | <ul><li>Require 2 free, static, contiguous IPs for Kubernetes nodes, and one static IP for IoT Edge service.</li><li>Require one additional IP for each extra service or module that you'll deploy.</li></ul>| Only static IPv4 configuration is supported.|
+| (Optional) Web proxy settings | <ul><li>Web proxy server IP/FQDN, port </li><li>Web proxy username, password</li></ul> | |
| Firewall and port settings | If using firewall, make sure the [listed URLs patterns and ports](azure-stack-edge-system-requirements.md#networking-port-requirements) are allowed for device IPs. | | | (Recommended) Time settings | Configure time zone, primary NTP server, secondary NTP server. | Configure primary and secondary NTP server on local network.<br>If local server isnΓÇÖt available, public NTP servers can be configured. |
-| (Optional) Update server settings | <li>Require update server IP address on local network, path to WSUS server. </li> | By default, public windows update server is used.|
-| Device settings | <li>Device fully qualified domain name (FQDN) </li><li>DNS domain</li> | |
-| (Optional) Certificates | To test non-production workloads, use [Generate certificates option](azure-stack-edge-gpu-deploy-configure-certificates.md#generate-device-certificates) <br><br> If you bring your own certificates including the signing chain(s), [Add certificates](azure-stack-edge-gpu-deploy-configure-certificates.md#bring-your-own-certificates) in appropriate format.| Configure certificates only if you change the device name and/or DNS domain. |
+| (Optional) Update server settings | Require update server IP address on local network, path to WSUS server. | By default, the public Windows Update server is used.|
+| Device settings | <ul><li>Device fully qualified domain name (FQDN) </li><li>DNS domain</li></ul> | |
+| (Optional) Certificates | To test non-production workloads, use [Generate certificates option](azure-stack-edge-gpu-deploy-configure-certificates.md#generate-device-certificates). <br><br> If you bring your own certificates including the signing chain(s), [Add certificates](azure-stack-edge-gpu-deploy-configure-certificates.md#bring-your-own-certificates) in appropriate format.| Configure certificates only if you change the device name and/or DNS domain. |
| Activation | Require activation key from the Azure Stack Edge resource. | Once generated, the key expires in three days. |
Use the following checklist to ensure you have this information after you've p
| Stage | Parameter | Details | |--|-|-|
-| Device management | <li>Azure subscription</li><li>Resource providers registered</li><li>Azure Storage account</li>|<li>Enabled for Azure Stack Edge, owner or contributor access.</li><li>In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.EdgeOrder` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads.</li><li>Need access credentials</li> |
-| Device installation | Four power cables for the two device nodes in the package. <br>For US, an SVE 18/3 cable rated for 125 V and 15 Amps with a NEMA 5-15P to C13 (input to output) connector is shipped. | For more information, see the list of [Supported power cords by country](azure-stack-edge-technical-specifications-power-cords-regional.md) |
-| | <li>At least two 1-GbE RJ-45 network cables for Port 1 on the two device nodes </li><li> You would need two 1-GbE RJ-45 network cables to connect Port 2 on each device node to the internet. Depending on the network topology you wish to deploy, you also need SFP+ copper cables to connect Port 3 and Port 4 across the device nodes and also from device nodes to the switches. See the [Supported network topologies](azure-stack-edge-gpu-clustering-overview.md#supported-networking-topologies). </li>| Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards from Cavium, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/)<br>For a full list of supported cables and modules for 25 GbE and 10 GbE from Mellanox, see [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).|
-| First-time device connection | <li>Laptop whose IPv4 settings can be changed. This laptop connects to Port 1 via a switch or a USB to Ethernet adaptor. </li><!--<li> A minimum of 1 GbE switch must be used for the device once the initial setup is complete. The local web UI will not be accessible if the connected switch is not at least 1 Gbe.</li>-->| |
+| Device management | <ul><li>Azure subscription</li><li>Resource providers registered</li><li>Azure Storage account</li></ul>|<ul><li>Enabled for Azure Stack Edge, owner or contributor access.</li><li>In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.EdgeOrder` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads.</li><li>Need access credentials.</li></ul> |
+| Device installation | Four power cables for the two device nodes in the package. <br>For US, an SVE 18/3 cable rated for 125 V and 15 Amps with a NEMA 5-15P to C13 (input to output) connector is shipped. | For more information, see the list of [Supported power cords by country](azure-stack-edge-technical-specifications-power-cords-regional.md). |
+| | <ul><li>At least two 1-GbE RJ-45 network cables for Port 1 on the two device nodes </li><li>You would need two 1-GbE RJ-45 network cables to connect Port 2 on each device node to the internet. Depending on the network topology you wish to deploy, you also need SFP+ copper cables to connect Port 3 and Port 4 across the device nodes and also from device nodes to the switches. See the [Supported network topologies](azure-stack-edge-gpu-clustering-overview.md#supported-networking-topologies).</li></ul> | Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards from Cavium, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/).<br>For a full list of supported cables and modules for 25 GbE and 10 GbE from Mellanox, see [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).|
+| First-time device connection | Laptop whose IPv4 settings can be changed.<!--<li> A minimum of 1 GbE switch must be used for the device once the initial setup is complete. The local web UI will not be accessible if the connected switch is not at least 1 Gbe.</li>-->|This laptop connects to Port 1 via a switch or a USB to Ethernet adaptor. |
| Device sign-in | Device administrator password, between 8 and 16 characters, including three of the following character types: uppercase, lowercase, numeric, and special characters. | Default password is *Password1*, which expires at first sign-in. A policy-check sketch follows this table. |
-| Network settings | Each device node has 2 x 1-GbE, 4 x 25-GbE network ports. <li>Port 1 is used for initial configuration only. </li><li>Port 2 must be connected to the Internet (with connectivity to Azure). Port 3 and Port 4 must be configured and connected across the two device nodes in accordance with the network topology you intend to deploy. You can choose from one of the three [Supported network topologies](azure-stack-edge-gpu-clustering-overview.md#supported-networking-topologies). </li><li> DHCP and static IPv4 configuration supported. | Static IPv4 configuration requires IP, DNS server, and default gateway. |
-| Advanced networking settings | <li>Require 2 free, static, contiguous IPs for Kubernetes nodes, and one static IP for IoT Edge service.</li><li>Require one additional IP for each extra service or module that you'll deploy.</li>| Only static IPv4 configuration is supported.|
-| (Optional) Web proxy settings | <li>Web proxy server IP/FQDN, port </li><li>Web proxy username, password</li> | |
+| Network settings | Each device node has 2 x 1-GbE, 4 x 25-GbE network ports. <ul><li>Port 1 is used for initial configuration only.</li><li>Port 2 must be connected to the Internet (with connectivity to Azure). Port 3 and Port 4 must be configured and connected across the two device nodes in accordance with the network topology you intend to deploy. You can choose from one of the three [Supported network topologies](azure-stack-edge-gpu-clustering-overview.md#supported-networking-topologies).</li><li>DHCP and static IPv4 configuration supported.</li></ul> | Static IPv4 configuration requires IP, DNS server, and default gateway. |
+| Advanced networking settings | <ul><li>Require 2 free, static, contiguous IPs for Kubernetes nodes, and one static IP for IoT Edge service.</li><li>Require one additional IP for each extra service or module that you'll deploy.</li></ul>| Only static IPv4 configuration is supported.|
+| (Optional) Web proxy settings | <ul><li>Web proxy server IP/FQDN, port.</li><li>Web proxy username, password</li></ul> | |
| Firewall and port settings | If using a firewall, make sure the [listed URL patterns and ports](azure-stack-edge-system-requirements.md#networking-port-requirements) are allowed for device IPs. | | | (Recommended) Time settings | Configure time zone, primary NTP server, secondary NTP server. | Configure primary and secondary NTP server on local network.<br>If local server isn't available, public NTP servers can be configured. |
-| (Optional) Update server settings | <li>Require update server IP address on local network, path to WSUS server. </li> | By default, public windows update server is used.|
-| Device settings | <li>Device fully qualified domain name (FQDN) </li><li>DNS domain</li> | |
-| (Optional) Certificates | To test non-production workloads, use [Generate certificates option](azure-stack-edge-gpu-deploy-configure-certificates.md#generate-device-certificates) <br><br> If you bring your own certificates including the signing chain(s), [Add certificates](azure-stack-edge-gpu-deploy-configure-certificates.md#bring-your-own-certificates) in appropriate format.| Configure certificates only if you change the device name and/or DNS domain. |
+| (Optional) Update server settings | Require update server IP address on local network, path to WSUS server. | By default, public Windows update server is used.|
+| Device settings | <ul><li>Device fully qualified domain name (FQDN) </li><li>DNS domain</li></ul> | |
+| (Optional) Certificates | To test non-production workloads, use [Generate certificates option](azure-stack-edge-gpu-deploy-configure-certificates.md#generate-device-certificates). <br><br> If you bring your own certificates including the signing chain(s), [Add certificates](azure-stack-edge-gpu-deploy-configure-certificates.md#bring-your-own-certificates) in appropriate format.| Configure certificates only if you change the device name and/or DNS domain. |
| Activation | Require activation key from the Azure Stack Edge resource. | Once generated, the key expires in three days. |
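
The **Device management** row above registers the `Microsoft.EdgeOrder` and (for IoT workloads) `Microsoft.Devices` resource providers through the Azure portal. If you'd rather script that step, the following minimal sketch uses the Azure SDK for Python; it assumes the `azure-identity` and `azure-mgmt-resource` packages are installed and that the placeholder subscription ID is replaced with your own.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Placeholder subscription ID - replace with the subscription that is
# enabled for Azure Stack Edge (owner or contributor access required).
subscription_id = "00000000-0000-0000-0000-000000000000"

client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# Register the resource providers named in the checklist above.
# Registration is asynchronous and can take a few minutes to complete.
for namespace in ["Microsoft.EdgeOrder", "Microsoft.Devices"]:
    client.providers.register(namespace)
    print(namespace, client.providers.get(namespace).registration_state)
```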
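
The **Device sign-in** row describes the device administrator password policy: 8 to 16 characters, with at least three of the four character types. As an informal illustration of that rule (not an official validator), a quick check might look like this:

```python
import re

def meets_password_policy(password: str) -> bool:
    """Check 8-16 characters and at least three of the four character types."""
    if not 8 <= len(password) <= 16:
        return False
    character_types = [
        re.search(r"[A-Z]", password),        # uppercase
        re.search(r"[a-z]", password),        # lowercase
        re.search(r"[0-9]", password),        # numeric
        re.search(r"[^A-Za-z0-9]", password)  # special
    ]
    return sum(1 for match in character_types if match) >= 3

print(meets_password_policy("Password1!"))  # True: upper, lower, numeric, special
print(meets_password_policy("password"))    # False: only one character type
```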
Use the following checklist to ensure you have this information after you've p
## Next steps -
-Prepare to deploy your [Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-deploy-prep.md).
--
+- Prepare to deploy your [Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-deploy-prep.md).
+- Use the [Azure Stack Edge Network Readiness Tool](azure-stack-edge-deploy-check-network-readiness.md) to verify your network settings.
databox-online Azure Stack Edge Gpu Deploy Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-connect.md
Previously updated : 11/07/2021 Last updated : 02/22/2022 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to connect to Azure Stack Edge Pro GPU so I can use it to transfer data to Azure.
In this tutorial, you learn about:
Before you configure and set up your Azure Stack Edge Pro GPU device, make sure that: * You've installed the physical device as detailed in [Install Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-install.md).
+* You've run the Azure Stack Network Readiness Checker tool to verify that your network meets Azure Stack Edge requirements. For instructions, see [Check network readiness for Azure Stack Edge devices](azure-stack-edge-deploy-check-network-readiness.md).
## Connect to the local web UI setup
databox-online Azure Stack Edge Gpu Deploy Prep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-prep.md
Previously updated : 01/28/2022 Last updated : 02/23/2022 # Customer intent: As an IT admin, I need to understand how to prepare the portal to deploy Azure Stack Edge Pro GPU so I can use it to compute at the edge and to transfer data to Azure.
Before you begin, make sure that:
Before you deploy a physical device, make sure that:
+- You've [run the Azure Stack Network Readiness Checker tool](azure-stack-edge-deploy-check-network-readiness.md) to check network readiness for your Azure Stack Edge device. You can use the tool to check whether your firewall rules are blocking access to any essential URLs for the service and verify custom URLs, among other tests. For more information, see [Check network readiness for your Azure Stack Edge device](azure-stack-edge-deploy-check-network-readiness.md).
- You've reviewed the safety information that was included in the shipment package. - To rackmount the device in a standard 19-inch rack in your datacenter, make sure to have:
databox-online Azure Stack Edge Mini R Deploy Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-mini-r-deploy-checklist.md
Previously updated : 02/24/2021 Last updated : 02/23/2022 # Deployment checklist for your Azure Stack Edge Mini R device
Use the following checklist to ensure you have this information after you have p
| Stage | Parameter | Details | |--|-|-|
-| Device management | <li>Azure subscription</li><li>Resource providers registered</li><li>Azure Storage account</li>|<li>Enabled for Azure Stack Edge Mini R/Data Box Gateway, owner or contributor access.</li><li>In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.DataBoxEdge` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads.</li><li>Need access credentials</li> |
-| Device installation | Power cables in the package. <br>For US, an SVE 18/3 cable rated for 125 V and 15 Amps with a NEMA 5-15P to C13 (input to output) connector is shipped. | For more information, see the list of [Supported power cords by country](azure-stack-edge-technical-specifications-power-cords-regional.md) |
-| | <li>At least 1 X 1-GbE RJ-45 network cable for Port 1 </li><li> At least 1 X 25-GbE SFP+ copper cable for Port 3, Port 4, Port 5, or Port 6</li>| Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/) and [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).|
-| First-time device connection | <li>Laptop whose IPv4 settings can be changed. This laptop connects to Port 1 via a switch or a USB to Ethernet adaptor. </li><!--<li> A minimum of 1 GbE switch must be used for the device once the initial setup is complete. The local web UI will not be accessible if the connected switch is not at least 1 Gbe.</li>-->| |
+| Device management | <ul><li>Azure subscription</li><li>Resource providers registered</li><li>Azure Storage account</li></ul>|<ul><li>Enabled for Azure Stack Edge Mini R/Data Box Gateway, owner or contributor access.</li><li>In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.DataBoxEdge` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads.</li><li>Need access credentials.</li></ul> |
+| Device installation | Power cables in the package. <br>For US, an SVE 18/3 cable rated for 125 V and 15 Amps with a NEMA 5-15P to C13 (input to output) connector is shipped. | For more information, see the list of [Supported power cords by country](azure-stack-edge-technical-specifications-power-cords-regional.md). |
+| | <ul><li>At least 1 X 1-GbE RJ-45 network cable for Port 1 </li><li>At least 1 X 25-GbE SFP+ copper cable for Port 3, Port 4, Port 5, or Port 6</li></ul>| Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/) and [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).|
+| Network readiness | Check to see how ready your network is for the deployment of an Azure Stack Edge device. | [Use the Azure Stack Network Readiness Checker](azure-stack-edge-deploy-check-network-readiness.md) to test all needed connections. |
+| First-time device connection | Laptop whose IPv4 settings can be changed. This laptop connects to Port 1 via a switch or a USB to Ethernet adaptor.<!--<li> A minimum of 1 GbE switch must be used for the device once the initial setup is complete. The local web UI will not be accessible if the connected switch is not at least 1 Gbe.</li>-->| |
| Device sign-in | Device administrator password, between 8 and 16 characters, including three of the following character types: uppercase, lowercase, numeric, and special characters. | Default password is *Password1*, which expires at first sign-in. |
-| Network settings | Device comes with 2 x 1-GbE, 4 x 25-GbE network ports. <li>Port 1 is used to configure management settings only. One or more data ports can be connected and configured. </li><li> At least one data network interface from among Port 2 - Port 6 needs to be connected to the Internet (with connectivity to Azure).</li><li> DHCP and static IPv4 configuration supported. | Static IPv4 configuration requires IP, DNS server, and default gateway. |
-| Compute network settings | <li>Require 2 free, static, contiguous IPs for Kubernetes nodes, and 1 static IP for IoT Edge service.</li><li>Require 1 additional IP for each extra service or module that you'll deploy.</li>| Only static IPv4 configuration is supported.|
-| (Optional) Web proxy settings | <li>Web proxy server IP/FQDN, port </li><li>Web proxy username, password</li> | |
+| Network settings | Device comes with 2 x 1-GbE, 4 x 25-GbE network ports. <ul><li>Port 1 is used to configure management settings only. One or more data ports can be connected and configured. </li><li> At least one data network interface from among Port 2 - Port 6 needs to be connected to the Internet (with connectivity to Azure).</li><li> DHCP and static IPv4 configuration supported.</li></ul> | Static IPv4 configuration requires IP, DNS server, and default gateway. |
+| Compute network settings | <ul><li>Require 2 free, static, contiguous IPs for Kubernetes nodes, and 1 static IP for IoT Edge service.</li><li>Require 1 additional IP for each extra service or module that you'll deploy.</li></ul>| Only static IPv4 configuration is supported.|
+| (Optional) Web proxy settings | <ul><li>Web proxy server IP/FQDN, port </li><li>Web proxy username, password</li></ul> | |
| Firewall and port settings | If using a firewall, make sure the [listed URL patterns and ports](azure-stack-edge-system-requirements.md#networking-port-requirements) are allowed for device IPs. | | | (Recommended) Time settings | Configure time zone, primary NTP server, secondary NTP server. | Configure primary and secondary NTP server on local network.<br>If local server is not available, public NTP servers can be configured. |
-| (Optional) Update server settings | <li>Require update server IP address on local network, path to WSUS server. </li> | By default, public windows update server is used.|
-| Device settings | <li>Device fully qualified domain name (FQDN) </li><li>DNS domain</li> | |
-| (Optional) Certificates | To test non-production workloads, use [Generate certificates option](azure-stack-edge-gpu-deploy-configure-certificates.md#generate-device-certificates) <br><br> If you bring your own certificates including the signing chain(s), [Add certificates](azure-stack-edge-gpu-deploy-configure-certificates.md#bring-your-own-certificates) in appropriate format.| Configure certificates only if you change the device name and/or DNS domain. |
+| (Optional) Update server settings | Require update server IP address on local network, path to WSUS server. | By default, public Windows update server is used.|
+| Device settings | <ul><li>Device fully qualified domain name (FQDN) </li><li>DNS domain</li></ul> | |
+| (Optional) Certificates | To test non-production workloads, use [Generate certificates option](azure-stack-edge-gpu-deploy-configure-certificates.md#generate-device-certificates). <br><br> If you bring your own certificates including the signing chain(s), [Add certificates](azure-stack-edge-gpu-deploy-configure-certificates.md#bring-your-own-certificates) in appropriate format.| Configure certificates only if you change the device name and/or DNS domain. |
| Activation | Require activation key from the Azure Stack Edge resource. | Once generated, the key expires in 3 days. | <!--
Use the following checklist to ensure you have this information after you have p
## Next steps
-Prepare to deploy your [Azure Stack Edge Mini R device](azure-stack-edge-gpu-deploy-prep.md).
+- Prepare to deploy your [Azure Stack Edge Mini R device](azure-stack-edge-gpu-deploy-prep.md).
+- Use the [Azure Stack Edge Network Readiness Tool](azure-stack-edge-deploy-check-network-readiness.md) to verify your network settings.
databox-online Azure Stack Edge Mini R Deploy Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-mini-r-deploy-connect.md
Previously updated : 10/20/2020 Last updated : 02/22/2022 # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Mini R so I can use it to transfer data to Azure.
In this tutorial, you learn about:
Before you configure and set up your Azure Stack Edge device, make sure that: * You've installed the physical device as detailed in [Install Azure Stack Edge](azure-stack-edge-mini-r-deploy-install.md).
+* You've run the Azure Stack Network Readiness Checker tool to verify that your network meets Azure Stack Edge requirements. For instructions, see [Check network readiness for Azure Stack Edge devices](azure-stack-edge-deploy-check-network-readiness.md).
## Connect to the local web UI setup
databox-online Azure Stack Edge Mini R Deploy Prep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-mini-r-deploy-prep.md
Previously updated : 12/20/2021 Last updated : 02/23/2022 # Customer intent: As an IT admin, I need to understand how to prepare the portal to deploy Azure Stack Edge Mini R device so I can use it to transfer data to Azure.
Following are the configuration prerequisites for your Azure Stack Edge resource
Before you deploy a physical device, make sure that:
+- You've [run the Azure Stack Network Readiness Checker tool](azure-stack-edge-deploy-check-network-readiness.md) to check network readiness for your Azure Stack Edge device. You can use the tool to check whether your firewall rules are blocking access to any essential URLs for the service and verify custom URLs, among other tests. For more information, see [Check network readiness for your Azure Stack Edge device](azure-stack-edge-deploy-check-network-readiness.md).
+ - You've reviewed the safety information for this device at [Safety guidelines for your Azure Stack Edge device](azure-stack-edge-mini-r-safety.md). [!INCLUDE [Azure Stack Edge device prerequisites](../../includes/azure-stack-edge-gateway-device-prerequisites.md)]
databox-online Azure Stack Edge Pro R Deploy Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-r-deploy-checklist.md
Previously updated : 02/24/2021 Last updated : 02/23/2022 # Deployment checklist for your Azure Stack Edge Pro R device
Use the following checklist to ensure you have this information after you have p
| Stage | Parameter | Details | |--|-|-|
-| Device management | <li>Azure subscription</li><li>Resource providers registered</li><li>Azure Storage account</li>|<li>Enabled for Azure Stack Edge Pro/Data Box Gateway, owner or contributor access.</li><li>In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.DataBoxEdge` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads.</li><li>Need access credentials</li> |
-| Device installation | Power cables in the package. <br>For US, an SVE 18/3 cable rated for 125 V and 15 Amps with a NEMA 5-15P to C13 (input to output) connector is shipped. | For more information, see the list of [Supported power cords by country](azure-stack-edge-technical-specifications-power-cords-regional.md) |
-| | <li>At least 1 X 1-GbE RJ-45 network cable for Port 1 </li><li> At least 1 X 25-GbE SFP+ copper cable for Port 3, Port 4</li>| Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/) and [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).|
-| First-time device connection | <li>Laptop whose IPv4 settings can be changed. This laptop connects to Port 1 via a switch or a USB to Ethernet adaptor. </li><!--<li> A minimum of 1 GbE switch must be used for the device once the initial setup is complete. The local web UI will not be accessible if the connected switch is not at least 1 Gbe.</li>-->| |
+| Device management | <ul><li>Azure subscription</li><li>Resource providers registered</li><li>Azure Storage account</li></ul>|<ul><li>Enabled for Azure Stack Edge Pro/Data Box Gateway, owner or contributor access.</li><li>In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.DataBoxEdge` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads.</li><li>Need access credentials.</li></ul> |
+| Device installation | Power cables in the package. <br>For US, an SVE 18/3 cable rated for 125 V and 15 Amps with a NEMA 5-15P to C13 (input to output) connector is shipped. | For more information, see the list of [Supported power cords by country](azure-stack-edge-technical-specifications-power-cords-regional.md). |
+| | <ul><li>At least 1 X 1-GbE RJ-45 network cable for Port 1 </li><li> At least 1 X 25-GbE SFP+ copper cable for Port 3, Port 4</li></ul>| Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/) and [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).|
+| Network readiness | Check to see how ready your network is for the deployment of an Azure Stack Edge device. | [Use the Azure Stack Network Readiness Checker](azure-stack-edge-deploy-check-network-readiness.md) to test all needed connections. |
+| First-time device connection | Laptop whose IPv4 settings can be changed. This laptop connects to Port 1 via a switch or a USB to Ethernet adaptor. <!--<li> A minimum of 1 GbE switch must be used for the device once the initial setup is complete. The local web UI will not be accessible if the connected switch is not at least 1 Gbe.</li>-->| |
| Device sign-in | Device administrator password, between 8 and 16 characters, including three of the following character types: uppercase, lowercase, numeric, and special characters. | Default password is *Password1*, which expires at first sign-in. |
-| Network settings | Device comes with 2 x 1-GbE, 4 x 25-GbE network ports. <li>Port 1 is used to configure management settings only. One or more data ports can be connected and configured. </li><li> At least one data network interface from among Port 2 - Port 6 needs to be connected to the Internet (with connectivity to Azure).</li><li> DHCP and static IPv4 configuration supported. | Static IPv4 configuration requires IP, DNS server, and default gateway. |
-| Compute network settings | <li>Require 2 free, static, contiguous IPs for Kubernetes nodes, and 1 static IP for IoT Edge service.</li><li>Require one additional IP for each extra service or module that you'll deploy.</li>| Only static IPv4 configuration is supported.|
-| (Optional) Web proxy settings | <li>Web proxy server IP/FQDN, port </li><li>Web proxy username, password</li> | |
+| Network settings | Device comes with 2 x 1-GbE, 4 x 25-GbE network ports. <ul><li>Port 1 is used to configure management settings only. One or more data ports can be connected and configured. </li><li> At least one data network interface from among Port 2 - Port 6 needs to be connected to the Internet (with connectivity to Azure).</li><li> DHCP and static IPv4 configuration supported.</li></ul> | Static IPv4 configuration requires IP, DNS server, and default gateway. |
+| Compute network settings | <ul><li>Require 2 free, static, contiguous IPs for Kubernetes nodes, and 1 static IP for IoT Edge service.</li><li>Require one additional IP for each extra service or module that you'll deploy.</li></ul>| Only static IPv4 configuration is supported.|
+| (Optional) Web proxy settings | <ul><li>Web proxy server IP/FQDN, port </li><li>Web proxy username, password</li></ul> | |
| Firewall and port settings | If using a firewall, make sure the [listed URL patterns and ports](azure-stack-edge-system-requirements.md#networking-port-requirements) are allowed for device IPs. | | | (Recommended) Time settings | Configure time zone, primary NTP server, secondary NTP server. | Configure primary and secondary NTP server on local network.<br>If local server is not available, public NTP servers can be configured. |
-| (Optional) Update server settings | <li>Require update server IP address on local network, path to WSUS server. </li> | By default, public windows update server is used.|
-| Device settings | <li>Device fully qualified domain name (FQDN) </li><li>DNS domain</li> | |
+| (Optional) Update server settings | Require update server IP address on local network, path to WSUS server. | By default, public Windows update server is used.|
+| Device settings | <ul><li>Device fully qualified domain name (FQDN) </li><li>DNS domain</li></ul> | |
| (Optional) Certificates | If you bring your own certificates including the signing chain(s), [Add certificates](azure-stack-edge-pro-r-deploy-configure-certificates-vpn-encryption.md#bring-your-own-certificates) in appropriate format.| Configure certificates only if you change device name and/or DNS domain. | | VPN | <!--Need VPN certificate, VPN gateway, firewall setup in Azure, passphrase and region info VPN scripts. --> | |
-| Encryption-at-rest | Recommend using automatically generated encryption key. |If using your own key, need a 32 character long Base-64 encoded key. |
+| Encryption-at-rest | Recommend using automatically generated encryption key. | If using your own key, you need a 32-character Base-64 encoded key. A key-generation sketch follows the Next steps list below. |
| Activation | Require activation key from the Azure Stack Edge Pro/ Data Box Gateway resource. | Once generated, the key expires in 3 days. | <!--
Use the following checklist to ensure you have this information after you have p
## Next steps
-Prepare to deploy your [Azure Stack Edge Pro device](azure-stack-edge-pro-r-deploy-prep.md).
+- Prepare to deploy your [Azure Stack Edge Pro device](azure-stack-edge-pro-r-deploy-prep.md).
+- Use the [Azure Stack Edge Network Readiness Tool](azure-stack-edge-deploy-check-network-readiness.md) to verify your network settings.
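
Related to the **Encryption-at-rest** row in the checklist above: if you supply your own key, it must be a 32-character Base-64 encoded string. The sketch below shows one way to generate such a value locally; 24 random bytes encode to exactly 32 Base64 characters (24 × 4 / 3 = 32).

```python
import base64
import os

# 24 random bytes -> exactly 32 Base64 characters, with no padding.
key = base64.b64encode(os.urandom(24)).decode("ascii")

assert len(key) == 32
print(key)
```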
databox-online Azure Stack Edge Pro R Deploy Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-r-deploy-connect.md
Previously updated : 10/15/2020 Last updated : 02/22/2022 # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro R so I can use it to transfer data to Azure.
In this tutorial, you learn about:
Before you configure and set up your Azure Stack Edge Pro R device, make sure that: * You've installed the physical device as detailed in [Install Azure Stack Edge Pro R](azure-stack-edge-pro-r-deploy-install.md).
+* You've run the Azure Stack Network Readiness Checker tool to verify that your network meets Azure Stack Edge requirements. For instructions, see [Check network readiness for Azure Stack Edge devices](azure-stack-edge-deploy-check-network-readiness.md).
## Connect to the local web UI setup
databox-online Azure Stack Edge Pro R Deploy Prep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-r-deploy-prep.md
Previously updated : 12/20/2021 Last updated : 02/23/2022 # Customer intent: As an IT admin, I need to understand how to prepare the portal to deploy Azure Stack Edge Pro R so I can use it to transfer data to Azure.
Following are the configuration prerequisites for your Azure Stack Edge resource
Before you deploy a physical device, make sure that:
+- You've [run the Azure Stack Network Readiness Checker tool](azure-stack-edge-deploy-check-network-readiness.md) to check network readiness for your Azure Stack Edge device. You can use the tool to check whether your firewall rules are blocking access to any essential URLs for the service and verify custom URLs, among other tests. For more information, see [Check network readiness for your Azure Stack Edge device](azure-stack-edge-deploy-check-network-readiness.md).
+ - You've reviewed the safety information for this device at: [Safety guidelines for your Azure Stack Edge device](azure-stack-edge-pro-r-safety.md). [!INCLUDE [Azure Stack Edge device prerequisites](../../includes/azure-stack-edge-gateway-device-prerequisites.md)]
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Title: Connect your GCP project to Microsoft Defender for Cloud description: Monitoring your GCP resources from Microsoft Defender for Cloud Previously updated : 02/22/2022 Last updated : 02/23/2022 zone_pivot_groups: connect-gcp-accounts
If you have any existing connectors created with the classic cloud connectors ex
:::image type="content" source="media/quickstart-onboard-gcp/classic-connectors-experience.png" alt-text="Switching back to the classic cloud connectors experience in Defender for Cloud.":::
-1. For each connector, select the three dot button **…** at the end of the row, and select **Delete**.
+1. For each connector, select the three dot button at the end of the row, and select **Delete**.
## Connect your GCP projects
defender-for-cloud Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Microsoft Defender for Cloud description: Sample Azure Resource Graph queries for Microsoft Defender for Cloud showing use of resource types and tables to access Microsoft Defender for Cloud related resources and properties. Previously updated : 01/20/2022 Last updated : 02/16/2022
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
After you acquire your on-premises management console appliance:
**To install and set up**:
-1. Go to [Defender for IoT: Getting Started](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started) in the Azure portal].
+1. Go to [Defender for IoT: Getting Started](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started) in the Azure portal.
1. Select the **On-premises management console** tab.
Onboard a sensor by registering it with Microsoft Defender for IoT and downloadi
- **Locally managed sensors**: Information that sensors detect is displayed in the sensor console. If you're working in an air-gapped network and want a unified view of all information detected by multiple locally managed sensors, work with the on-premises management console.
-1. Select a site to associate your sensor to within an IoT Hub. The IoT Hub will serve as a gateway between this sensor and Microsoft Defender for IoT. Define the display name, and zone. You can also add descriptive tags. The display name, zone, and tags are descriptive entries on the [Sites and Sensors page](how-to-manage-sensors-on-the-cloud.md#view-onboarded-sensors).
+1. Select a site to associate your sensor to within an IoT Hub. The IoT Hub will serve as a gateway between this sensor and Microsoft Defender for IoT. Define the display name, and zone. You can also add descriptive tags. The display name, zone, and tags are descriptive entries on the [Sites and Sensors page](how-to-manage-sensors-on-the-cloud.md#manage-on-boarded-sensors).
1. Select **Register**.
defender-for-iot How To Create Data Mining Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-data-mining-queries.md
The following predefined reports are available. These queries are generated in r
- **Programming commands**: Devices that send industrial programming. - **Remote access**: Devices that communicate through remote session protocols. - **Internet activity**: Devices that are connected to the internet.-
+- **CVEs**: A list of devices detected with known vulnerabilities within the last 24 hours.
- **Excluded CVEs**: A list of all the CVEs that were manually excluded. To achieve more accurate results in VA reports and attack vectors, you can customize the CVE list manually by including and excluding CVEs. - **Nonactive devices**: Devices that have not communicated for the past seven days. - **Active devices**: Active network devices within the last 24 hours.
-Find these reports in Analyze** > **Data Mining*. Reports are available for users with Administrator and Security Analyst permissions. Read only users can't access these reports.
+Find these reports in **Analyze** > **Data Mining**. Reports are available for users with Administrator and Security Analyst permissions. Read only users can't access these reports.
## Create a report
+To create a data-mining report:
-1. In Defender for IoT, **Data mining**.
-1. Select **Create report**.
-1. In the **Create new report** dialog, specify a report name and optional description.
-1. In **Choose category**, select the type of report you want to create. You can choose all, standard categories (generic) or specific settings.
-1. In **Order by**, order the report by category or activity.
-1. If you want to filter report results, you can specify a time range (minutes, days, and hours), and IP or MAC address, port, or device group (as defined in the device map).
-4. Select **Save**. Report results open on the **Data Mining** page.
+1. Select **Data Mining** from the side menu. Predefined suggested reports appear automatically.
+
+1. Select **Create report** and then enter the following values:
+
+ - **Name** / **Description**. Enter a meaningful name for your report and an optional description.
+ - **Send to CM**. Toggle this option on to send your report to your on-premises management console.
+ - **Choose category**. Select the categories to include in your report.
+ - **Order by**. Select to sort your data by category or by activity.
+ - **Filter by**. Define a filter for your report, using dates, IP address, MAC address, port, or device group.
+
+1. Select **Save** to save your report and display results on the **Data Mining** page.
Reports are dynamically updated each time you open them. For example: - If you create a report for firmware versions on devices on June 1 and open the report again on June 10, this report will be updated with information that's accurate for June 10.
The on-premises management console lets you generate reports for each sensor tha
- **Programming Commands**: Presents a list of devices that sent programming commands within the last 24 hours. - **Remote Access**: Presents a list of devices that remote sources accessed within the last 24 hours. - When you choose the sensor from the on-premises management console, all the custom reports configured on that sensor appear in the list of reports. For each sensor, you can generate a default report or a custom report configured on that sensor. To generate a report:
defender-for-iot How To Manage Cloud Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-cloud-alerts.md
Defender for IoT alerts lets you enhance the security and operation of your netw
- Protocol and operational anomalies - Suspected malware traffic Alerts triggered by Defender for IoT are displayed on the Alerts page in the Azure portal. Use the Alerts page to:
Alert details triggered by these sensors and aggregated in the Alerts page:
## Alert types and messages
-You can view alert messages you may receive. Reviewing alert types and messages ahead of time will help you plan remediation and integration with playbooks.
-[Alert types and descriptions](alert-engine-messages.md#alert-types-and-descriptions).
+You can view alert messages you may receive. Reviewing alert types and messages ahead of time will help you plan remediation and integration with playbooks.
+For more information, see [Alert types and descriptions](alert-engine-messages.md#alert-types-and-descriptions).
## View alerts
This section describes the information available in the Alerts table.
|--|--| | **Severity**| A predefined alert severity assigned by the sensor. The severity can be updated. See [Manage alert status and severity](#manage-alert-status-and-severity) for details. | **Name** | The alert title.
- | **Site** | The site associated with the sensor. This site name is defined when you register a sensor with Microsoft Defender for IoT on the Azure portal. The name can be viewed in the Sites and Sensors page on the portal. See [View onboarded sensors](how-to-manage-sensors-on-the-cloud.md#view-onboarded-sensors) for information on registered sensors.
+ | **Site** | The site associated with the sensor. This site name is defined when you register a sensor with Microsoft Defender for IoT on the Azure portal. The name can be viewed in the Sites and Sensors page on the portal. See [View onboarded sensors](how-to-manage-sensors-on-the-cloud.md#manage-on-boarded-sensors) for information on registered sensors.
| **Engine** | The sensor engine that detected the Operational Technology (OT) traffic. To learn more about engines, see [Detection engines](how-to-control-what-traffic-is-monitored.md#detection-engines). For device builders, the term micro-agent will be displayed. | **Detection time** | The first time the alert was detected. The alert traffic may occur several times after the first detection. If the alert Status is **New**, the detection time won't change. If the alert is Closed and the traffic is seen again, a new detection time will be displayed. | **Status** | The alert status: New, Active, Closed
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
The console will display restore failures.
## Update a standalone sensor version
-The following procedure describes how to update a standalone sensor by using the sensor console. The update process takes about 30 minutes.
+The following procedure describes how to update a standalone sensor by using the sensor console.
-1. Go to the [Azure portal](https://portal.azure.com/).
+1. In the Azure portal, go to **Defender for IoT** > **Getting started** > **Updates**.
-2. Go to Defender for IoT.
-
-3. Go to the **Updates** page.
+1. From the **Sensors** section, select **Download** for the sensor update, and save your `<legacy/upstream>-sensor-secured-patcher-<version number>.tar` file locally. For example:
:::image type="content" source="media/how-to-manage-individual-sensors/updates-page.png" alt-text="Screenshot of the Updates page of Defender for IoT.":::
-4. Select **Download** from the **Sensors** section and save the file.
-
-5. In the sensor console's sidebar, select **System Settings**.
+1. On your sensor console, select **System Settings** > **Sensor management** > **Software Update**.
-6. On the **Version Update** pane, select **Update**.
+1. On the **Software Update** pane on the right, select **Upload file**, and then navigate to and select your downloaded `legacy-sensor-secured-patcher-<Version number>.tar` file.
:::image type="content" source="media/how-to-manage-individual-sensors/upgrade-pane-v2.png" alt-text="Screenshot of the update pane.":::
-7. Select the file that you downloaded from the Defender for IoT **Updates** page.
+ The update process starts, and may take about 30 minutes. During your upgrade, the system is rebooted twice.
-8. The update process starts, during which time the system is rebooted twice. After the first reboot (before the completion of the update process), the system opens with the sign-in window. After you sign in, the upgrade version appears at the lower left of the sidebar.
+ Sign in when prompted, and then return to the **System Settings** > **Sensor management** > **Software Update** pane to confirm that the new version is listed.
:::image type="content" source="media/how-to-manage-individual-sensors/defender-for-iot-version.png" alt-text="Screenshot of the upgrade version that appears after you sign in.":::
+If you're upgrading from version 10.5.x to version 22.x, make sure to reactivate your sensor. For more information, see [Reactivate a sensor for upgrades to version 22.x from a legacy version](how-to-manage-sensors-on-the-cloud.md#reactivate-a-sensor-for-upgrades-to-version-22x-from-a-legacy-version).
+
+After upgrading to version 22.1.x, the new upgrade log can be found at the following path, accessed via SSH and the *cyberx_host* user: `/opt/sensor/logs/legacy-upgrade.log`.
+ ## Forward sensor failure alerts You can forward alerts to third parties to provide details about:
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md
This article describes how to onboard, view, and manage sensors with [Defender f
## Onboard sensors
-You onboard a sensor by registering it with Microsoft Defender for IoT and downloading a sensor activation file.
+Onboard a sensor by registering it with Microsoft Defender for IoT and downloading a sensor activation file.
-### Register the sensor
+**Prerequisites**: Make sure that you've set up your sensor and configured your SPAN port or TAP. For more information, see [Defender for IoT installation](how-to-install-software.md).
-**To register:**
+**To onboard your sensor to Defender for IoT**:
-1. Go to the [Defender for IoT: Getting started page](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started) in the Azure portal.
+1. In the Azure portal, navigate to **Defender for IoT** > **Getting started** and select **Set up OT/ICS Security**. Alternately, from the Defender for IoT **Sites and sensors** page, select **Onboard OT sensor**.
-1. Select **Onboard sensor**.
+1. By default, on the **Set up OT/ICS Security** page, **Step 1: Did you set up a sensor?** and **Step 2: Configure SPAN port or TAP** of the wizard are collapsed. If you haven't completed these steps, do so before continuing.
- :::image type="content" source="media/how-to-manage-sensors-on-the-cloud/onboard-a-sensor.png" alt-text="Select onboard sensor to start the onboarding process for your sensor.":::
+1. In **Step 3: Register this sensor with Microsoft Defender for IoT** enter or select the following values for your sensor:
-1. Create a sensor name.
+   1. In the **Sensor name** field, enter a meaningful name for your sensor. We recommend including your sensor's IP address as part of the name, or using another easily identifiable name, so that you can match the registration name in the Azure portal with the IP address of the sensor shown in the sensor console.
- We recommend that you include the IP address of the sensor you installed as part of the name, or use an easily identifiable name. This ensures easier tracking and consistent naming between the registration name in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started) and the IP of the deployed sensor displayed in the sensor console.
+ 1. In the **Subscription** field, select your Azure subscription.
-1. Associate the sensor with an Azure subscription.
+ 1. Toggle on the **Cloud connected** option to have your sensor connected to other Azure services, such as Microsoft Sentinel, and to push [threat intelligence packages](how-to-work-with-threat-intelligence-packages.md) from Defender for IoT to your sensors.
- :::image type="content" source="media/how-to-manage-sensors-on-the-cloud/name-subscription.png" alt-text="Enter a meaningful name, and connect your sensor to a subscription.":::
+ 1. In the **Sensor version** field, select which software version is installed on your sensor machine. We recommend that you select **22.X and above** to get all of the latest features and enhancements.
-1. Choose a sensor connection mode by using the **Cloud connected** toggle. If the toggle is on, the sensor is cloud connected. If the toggle is off, the sensor is locally managed.
+ If you haven't yet upgraded to version 22.x, see [Update a standalone sensor version](how-to-manage-individual-sensors.md#update-a-standalone-sensor-version) and [Reactivate a sensor for upgrades to version 22.x](#reactivate-a-sensor-for-upgrades-to-version-22x-from-a-legacy-version).
- - **Cloud-connected sensors**: Information that the sensor detects is displayed in the sensor console. Alert information is delivered through an IoT hub and can be shared with other Azure services, such as Microsoft Sentinel. In addition, threat intelligence packages can be pushed from Defender for IoT to sensors. Conversely when, the sensor is not cloud connected, you must download threat intelligence packages and then upload them to your enterprise sensors. To allow Defender for IoT to push packages to sensors, enable the **Automatic Threat Intelligence Updates** toggle. For more information, see [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md).
-
- For cloud connected sensors, the name defined during onboarding is the name that appears in the sensor console. You can't change this name from the console directly. For locally managed sensors, the name applied during onboarding will be stored in Azure but can be updated in the sensor console.
+ 1. In the **Site** section, select the **Resource name** and enter the **Display name** for your site. Add any tags as needed to help you identify your sensor.
- - **Locally managed sensors**: Information that sensors detect is displayed in the sensor console. If you're working in an air-gapped network and want a unified view of all information detected by multiple locally managed sensors, work with the on-premises management console.
+ 1. In the **Zone** field, select a zone from the menu, or select **Create Zone** to create a new one.
-1. Select a site to associate your sensor to within an IoT Hub. The IoT Hub will serve as a gateway between this sensor and Microsoft Defender for IoT. Define the display name, and zone. You can also add descriptive tags. The display name, zone, and tags are descriptive entries on the [Sites and Sensors page](#view-onboarded-sensors).
+1. Select **Register**.
-1. Select **Register**.
+A success message appears and your activation file is automatically downloaded. Your sensor is now shown under the configured site on the Defender for IoT **Sites and sensors** page.
-### Download the sensor activation file
+However, until you activate your sensor, the sensor's status will show as **Pending Activation**.
-After registering a sensor you will be able to download an activation file. The sensor activation file contains instructions about the management mode of the sensor. You download a unique activation file for each sensor that you deploy. A user who signs in to the sensor console for the first time uploads the activation file to the sensor.
+Make the downloaded activation file accessible to the sensor console admin so that they can activate the sensor. For more information, see [Upload new activation files](how-to-manage-individual-sensors.md#upload-new-activation-files).
-**To download an activation file:**
+## Manage on-boarded sensors
-1. On the **Onboard Sensor** page, select **Register**
+Sensors that you've on-boarded to Defender for IoT are listed on the Defender for IoT **Sites and sensors** page. This page supports the following management tasks:
-1. Select **download activation file**.
+- **Export sensor data**. To export a CSV file with details about all sensors listed, select **Export** at the top of the page.
-1. Make the file accessible to the user who's signing in to the sensor console for the first time.
+- **Edit sensor details**. To edit a sensor zone, or to toggle on/off the **Automatic Threat Intelligence Update** option, select the **...** options menu at the right of a sensor row > **Edit**.
-## View onboarded sensors
+ Make your changes as needed and select **Save**.
-To view important operational information about onboarded sensors:
+- **Delete a sensor**. Delete sensors if you're no longer working with them. Select the **...** options menu at the right of a sensor row > **Delete sensor**.
-1. Go to [Defender for IoT: Getting started](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started), in the Azure portal.
+- **Download an activation file**. You'll need to download a new activation file for your sensor if you want to [reactivate the sensor](#reactivate-a-sensor). Select the **...** options menu at the right of a sensor row > **Download activation file**.
-1. Select **Sites and Sensors**. The page shows how many sensors were onboarded, the number of sensors that are cloud connected and locally managed, as well as:
+- **Prepare to update to 22.X**. Use this option specifically when upgrading sensors to version 22.x. For more information, see [below](#reactivate-a-sensor-for-upgrades-to-version-22x-from-a-legacy-version).
- :::image type="content" source="media/how-to-manage-sensors-on-the-cloud/sites-and-sensors.png" alt-text="Select the sites and sensors page to view all of the associated sensors.":::
+## Reactivate a sensor
- - The sensor name assigned during onboarding.
- - The connection type (cloud connected or locally managed).
- - The zone associated with the sensor.
- - The sensor version installed.
- - The sensor connection status to the cloud.
- - The last time the sensor was detected connecting to the cloud.
-
-## Manage onboarded sensors
-
-Use the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started) for management tasks related to sensors.
-
-Onboarded sensors can be viewed on the **Sites and Sensors** page. You can also edit sensor information from this page.
-
-### Export sensor details
-
-To export onboarded sensor information, select the **Export** icon on the top of the **Sites and Sensors** page.
--
-### Edit sensor zone details
-
-Use the **Sites and Sensors** edit options to edit the sensor name and zone.
-
-**To edit:**
-
-1. Select the **ellipsis** (**...**) for the sensor you want to edit.
-
-1. Select **Edit**.
-
-1. Update the sensor zone, or create a new zone.
+You may need to reactivate your sensor because you want to:
-### Delete a sensor
+- **Work in cloud-connected mode instead of locally managed mode**: After reactivation, existing sensor detections are displayed in the sensor console, and newly detected alert information is delivered through Defender for IoT in the Azure portal. This information can be shared with other Azure services, such as Microsoft Sentinel.
-If you delete a cloud-connected sensor, information won't be sent to the IoT hub. Delete locally connected sensors when you're no longer working with them.
+- **Work in locally managed mode instead of cloud-connected mode**: After reactivation, sensor detection information is displayed only in the sensor console.
-**To delete a sensor:**
+- **Associate the sensor to a new site**: To do this, re-register the sensor with new site definitions and use the new activation file to activate.
-1. Select the ellipsis (**...**) for the sensor you want to delete.
+In such cases, do the following:
-1. Select **delete sensor**.
+1. [Delete your existing sensor](#manage-on-boarded-sensors).
+1. [Onboard your sensor](#onboard-sensors), registering it again with any new settings.
+1. [Upload your new activation file](how-to-manage-individual-sensors.md#upload-new-activation-files).
-### Reactivate a sensor
+### Reactivate a sensor for upgrades to version 22.x from a legacy version
-You may need to reactivate your sensor because you want to:
+This procedure describes how to reactivate a sensor specifically when upgrading to version 22.x from version 10.5.x.
-- **Work in cloud-connected mode instead of locally managed mode**: After reactivation, sensor detections are displayed in the sensor and newly detected alert information is delivered through the IoT hub. This information can be shared with other Azure services, such as Microsoft Sentinel.
+**To reactivate your sensor after a legacy upgrade**:
-- **Work in locally managed mode instead of cloud-connected mode**: After reactivation, sensor detection information is displayed only in the sensor.
+1. Make sure that your sensor is fully upgraded. For more information, see [Update a standalone sensor version](how-to-manage-individual-sensors.md#update-a-standalone-sensor-version).
-- **Associate the sensor to a new IoT hub**: To do this, re-register the sensor with a new hub, and then download a new activation file.
+1. In Defender for IoT, select **Sites and sensors** on the left.
-**To reactivate a sensor:**
+1. Select the site where you want to update your sensor, and then navigate to the sensor you want to update.
-1. Go to **Sites and Sensors** page in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started).
+1. Expand the row for your sensor, select the **...** options menu on the right of the row, and then select **Prepare to update to 22.x**.
-1. Select the sensor for which you want to upload a new activation file.
+ :::image type="content" source="media/how-to-manage-sensors-on-the-cloud/prepare-to-update.png" alt-text="Screenshot of the Prepare to update option." lightbox="media/how-to-manage-sensors-on-the-cloud/prepare-to-update.png":::
-1. Select the **ellipsis** (**...**), and then select **delete sensor**.
+1. In the **Prepare to update sensor to version 22.X** message, select **Let's go**.
- :::image type="content" source="media/how-to-manage-sensors-on-the-cloud/delete-a-sensor.png" alt-text="Select the ellipsis and then delete sensor.":::
+1. When the new activation file is ready, download it and verify that the sensor status has switched to **Pending activation**.
-1. [Onboard the sensor](#onboard-sensors) again in the new mode, or with a new IoT hub by selecting **Onboard a sensor** from the Getting Started page.
+1. Use your newly downloaded activation file to activate your upgraded sensor.
-1. Download the activation file.
+ 1. On your sensor, select **System settings > Sensor management > Subscription & Mode Activation**.
-1. Sign in to the Defender for IoT sensor console.
+ 1. In the **Subscription & Mode Activation** pane that appears on the right, select **Select file**, and then browse to and select your new activation file.
-1. In the sensor console, select **System settings** > **Sensor management** > **Subscription & Activation Mode**.
+1. In Defender for IoT on the Azure portal, monitor your sensor's activation status. When the sensor is fully activated:
-1. Select **Select file** choose the file you saved from the Onboard sensor page.
+ - The sensor's **Overview** page shows an activation status of **Valid**.
+ - In the Azure portal, on the **Sites and sensors** page, the sensor is listed as **OT cloud connected** and with the updated sensor version.
-1. Select **Activate**.
+Your legacy sensors will continue to appear in the **Sites and sensors** page until you delete them. For more information, see [above](#manage-on-boarded-sensors).
## Next steps
-[Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md)
+[View and manage alerts on the Defender for IoT portal (Preview)](how-to-manage-cloud-alerts.md)
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
This section describes how to onboard a subscription.
1. In the Pricing page, select **Subscribe**. 1. In the **Onboard subscription** pane, select a subscription and the number of committed devices from the drop-down menu.
- :::image type="content" source="media/how-to-manage-subscriptions/onboard-subscription.png" alt-text="select your subscription and the number of committed devices.":::
+ :::image type="content" source="media/how-to-manage-subscriptions/onboard-subscription.png" alt-text="select your subscription and the number of committed devices." lightbox="media/how-to-manage-subscriptions/onboard-subscription.png":::
1. Select **Subscribe**. 1. Confirm your subscription.
You may need to update your subscription with more committed devices, or more fe
**To update a subscription:** 1. Go to [Defender for IoT: Getting started](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started) in the Azure portal. 1. Select **Onboard subscription**.
-1. Select the subscription, and then select the three dots. (...).
+1. Select the subscription, and then select the three dots (...).
1. Select **Edit**. 1. Update the committed devices and select **Save**. 2. In the confirmation dialog box that opens, select **Confirm.**
You will need to upload a new activation file to your on-premises management con
You may need to offboard a subscription, for example if you need to work with a new payment entity. Subscription offboarding takes effect one hour after confirming the offboard. Your upcoming monthly bill will reflect this change.
-Remove all sensors that are associated with the subscription prior to offboarding. For more information on how to delete a sensor, see [Delete a sensor](how-to-manage-sensors-on-the-cloud.md#delete-a-sensor).
+Remove all sensors that are associated with the subscription prior to offboarding. For more information on how to delete a sensor, see [Delete a sensor](how-to-manage-sensors-on-the-cloud.md#manage-on-boarded-sensors).
**To offboard a subscription:** 1. Go to [Defender for IoT: Getting started](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started) in the Azure portal.
-1. Select the subscription, and then select the three dots. (...).
+1. Select the subscription, and then select the three dots (...).
1. Select **Offboard subscription**.
defender-for-iot How To Set Up Your Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-your-network.md
Title: Set up your network description: Learn about solution architecture, network preparation, prerequisites, and other information needed to ensure that you successfully set up your network to work with Microsoft Defender for IoT appliances. Previously updated : 12/19/2021 Last updated : 02/22/2022
Verify that your organizational security policy allows access to the following:
#### Sensor access to Azure portal
-| Protocol | Transport | In/Out | Port | Used | Purpose | Source | Destination |
-|--|--|--|--|--|--|--|--|
-| HTTPS / Websocket | TCP | Out | 443 | Gives the sensor access to the Azure portal. (Optional) Access can be granted through a proxy. | Access to Azure portal | Sensor | *.azure-devices.net, *.blob.core.windows.net, *.servicebus.windows.net |
+| Protocol | Transport | In/Out | Port | Purpose | Source | Destination |
+|--|--|--|--|--|--|--|
+| HTTPS | TCP | Out | 443 | Access to Azure portal | Sensor | `*.azure-devices.net`<br> `*.blob.core.windows.net`<br> `*.servicebus.windows.net` |
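As a quick sanity check that this outbound rule is in place, you can test TCP connectivity to the required endpoints from a machine on the same network segment as the sensor. The following is a minimal sketch using generic Bash tooling; the hostnames are placeholders for the resource-specific names under each wildcard domain listed above.

```bash
# Minimal outbound connectivity check for the endpoints listed above.
# Replace the placeholder hostnames with the concrete endpoints used by your
# deployment; the wildcard domains resolve to resource-specific host names.
for host in "<your-hub>.azure-devices.net" "<your-account>.blob.core.windows.net" "<your-namespace>.servicebus.windows.net"; do
  if timeout 5 bash -c "cat < /dev/null > /dev/tcp/${host}/443" 2>/dev/null; then
    echo "OK   ${host}:443 is reachable"
  else
    echo "FAIL ${host}:443 is blocked or could not be resolved"
  fi
done
```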
#### Sensor access to the on-premises management console
The following diagram is a general abstraction of a multilayer, multitenant netw
Typically, NTA sensors are deployed in layers 0 to 3 of the OSI model. #### Example: Ring topology
An overview of the industrial network diagram will allow you to define the prope
## Next steps
-[About the Defender for IoT installation](how-to-install-software.md)
+[About the Defender for IoT installation](how-to-install-software.md)
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Title: What's new in Microsoft Defender for IoT description: This article lets you know what's new in the latest release of Defender for IoT. Previously updated : 12/19/2021 Last updated : 02/15/2022 # What's new in Microsoft Defender for IoT? [!INCLUDE [Banner for top of topics](../includes/banner.md)]
-This article lists new features and feature enhancements for Defender for IoT.
+This article lists new features and feature enhancements for Defender for IoT in February 2022.
-Noted features are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Versioning and support for Defender for IoT
Listed below are the support, breaking change policies for Microsoft Defender fo
### Servicing information and timelines
-Microsoft plans to release updates for Defender for IoT no less than once a quarter. Each General Availability (GA) version of the Defender for IoT sensor and on-premises management console is supported for nine months after release. Fixes and new functionality will be applied to the current GA version that is currently supported and will not be applied to older GA versions.
-
-The Defender for IoT sensor and on-premises management console update packages include new functionality and security patches. Urgent, high-risk security updates will be applied to minor releases occurring during the quarter.
-
-*Making changes to packages manually might have detrimental effects on the sensor and on-premises management console. Microsoft will be unable to provide support for your deployment if this happen.*
-
+Each General Availability (GA) version of the Defender for IoT sensor and on-premises management console is supported for nine months after release. Fixes and new functionality will be applied to the current GA version that is currently supported and won't be applied to older GA versions.
+The Defender for IoT sensor and on-premises management console update packages include new functionality and security patches. Urgent, high-risk security updates will be applied to minor releases occurring during the quarter.
+*Making changes to packages manually might have detrimental effects on the sensor and on-premises management console. In such cases, Microsoft is unable to provide support for your deployment.*
### Versions and support dates

| Version | Date released | End support date |
|--|--|--|
+| 22.1 | 02/2022 | 10/2022 |
| 10.0 | 01/2021 | 10/2021 |
| 10.3 | 04/2021 | 01/2022 |
| 10.5.2 | 10/2021 | 07/2022 |
| 10.5.3 | 10/2021 | 07/2022 |
| 10.5.4 | 12/2021 | 09/2022 |
+## February 2022
+
+- [Sensor redesign and unified Microsoft product experience](#sensor-redesign-and-unified-microsoft-product-experience)
+- [Enhanced sensor Overview page](#enhanced-sensor-overview-page)
+- [New support diagnostics log](#new-support-diagnostics-log)
+- [Alert updates](#alert-updates)
+- [New sensor installation wizard](#new-sensor-installation-wizard)
+- [Containerized sensor installation](#containerized-sensor-installation)
+- [Upgrade to version 22.1](#upgrade-to-version-221)
+- [New connectivity model and firewall requirements](#new-connectivity-model-and-firewall-requirements)
+- [Protocol improvements](#protocol-improvements)
+- [Modified, replaced, or removed options and configurations](#modified-replaced-or-removed-options-and-configurations)
+
+### Sensor redesign and unified Microsoft product experience
+
+The Defender for IoT sensor console has been redesigned to create a unified Microsoft Azure experience and enhance and simplify workflows.
+
+These features are now Generally Available (GA). Updates include the general look and feel, drill-down panes, search and action options, and more. For example:
+
+**Simplified workflows include**:
+
+- The **Device inventory** page now includes detailed device pages. Select a device in the table and then select **View full details** on the right.
+
+ :::image type="content" source="media/release-notes/device-inventory-details.png" alt-text="Screenshot of the View full details button." lightbox="media/release-notes/device-inventory-details.png":::
+
+- Properties updated from the sensor's inventory are now automatically updated in the cloud device inventory.
+
+- The device details pages, accessed either from the **Device map** or **Device inventory** pages, are shown as read-only. To modify device properties, select **Edit properties** on the bottom-left.
+
+- The **Data mining** page now includes reporting functionality. While the **Reports** page was removed, users with read-only access can view updates on the **Data mining** page without the ability to modify reports or settings.
+
+ For admin users creating new reports, you can now toggle on a **Send to CM** option to send the report to a central management console as well. For more information, see [Create a report](how-to-create-data-mining-queries.md#create-a-report).
+
+- The **System settings** area has been reorganized into sections for *Basic* settings, *Network monitoring*, *Sensor management*, *Integrations*, and *Import settings*.
+
+- The sensor online help now links to key articles in the Microsoft Defender for IoT documentation.
+
+**Defender for IoT maps now include**:
+
+- A new **Map View** is now shown for alerts and on the device details pages, showing where in your environment the alert or device is found.
+
+- Right-click a device on the map to view contextual information about the device, including related alerts, event timeline data, and connected devices.
+
+- To collapse IT networks on the map, make sure that the **Toggle IT Networks Grouping** option is enabled. This option is now only available from the map.
+
+- The **Simplified Map View** option has been removed.
+
+We've also implemented global readiness and accessibility features to comply with Microsoft standards. In the on-premises sensor console, these updates include both high contrast and regular screen display themes and localization for over 15 languages.
+
+For example:
++
+Access global readiness and accessibility options from the **Settings** icon at the top-right corner of your screen:
++
+### Enhanced sensor Overview page
+
+The Defender for IoT sensor portal's **Dashboard** page has been renamed as **Overview**, and now includes data that better highlights system deployment details, critical network monitoring health, top alerts, and important trends and statistics.
++
+The Overview page also now serves as a *black box* to view your overall sensor status in case your outbound connections, such as to the Azure portal, go down.
+
+Create more dashboards using the **Trends & Statistics** page, located under the **Analyze** menu on the left.
+
+### New support diagnostics log
+
+Now you can get a summary of the log and system information that gets added to your support tickets. In the **Backup and Restore** dialog, select **Support Ticket Diagnostics**.
++
+### Alert updates
+
+**In the Azure portal**:
+
+Alerts are now available in Defender for IoT in the Azure portal. Work with alerts to enhance the security and operation of your IoT/OT network.
+
+The new **Alerts** page is currently in Public Preview, and provides:
+
+- An aggregated, real-time view of threats detected by network sensors.
+- Remediation steps for devices and network processes.
+- Streaming alerts to Microsoft Sentinel to empower your SOC team.
+- Alert storage for 90 days from the time they're first detected.
+- Tools to investigate source and destination activity, alert severity and status, MITRE ATT&CK information, and contextual information about the alert.
+
+For example:
++
+**On the sensor console**:
+
+On the sensor console, the **Alerts** page now shows details for alerts detected by sensors that are configured with a cloud-connection to Defender for IoT on Azure. Users working with alerts in both Azure and on-premises should understand how alerts are managed between the Azure portal and the on-premises components.
++
+Other alert updates include:
+
+- **Access contextual data** for each alert, such as events that occurred around the same time, or a map of connected devices. Maps of connected devices are available for sensor console alerts only.
+
+- **Alert statuses** are updated, and for example now include a *Closed* status instead of *Acknowledged*.
+
+- **Alert storage** for 90 days from the time that they're first detected.
+
+- The **Backup Activity with Antivirus Signatures Alert**. This new alert warning is triggered for traffic detected between a source device and destination backup server, which is often legitimate backup activity. Critical or major malware alerts are no longer triggered for such activity.
+
+- **During upgrades**, sensor console alerts that are currently archived are deleted. Pinned alerts are no longer supported, so pins are removed for sensor console alerts as relevant.
+
+### Custom alert updates
+
+The sensor console's **Custom alert rules** page now provides:
+
+- Hit count information in the **Custom alert rules** table, with at-a-glance details about the number of alerts triggered in the last week for each rule you've created.
+
+- The ability to schedule custom alert rules to run outside of regular working hours.
+
+- The ability to alert on any field that can be extracted from a protocol using the DPI engine.
+
+- Complete protocol support when creating custom rules, and support for an extensive range of related protocol variables.
+
+    :::image type="content" source="media/how-to-manage-sensors-on-the-cloud/protocol-support-custom-alerts.png" alt-text="Screenshot of the updated Custom alerts dialog." lightbox="media/how-to-manage-sensors-on-the-cloud/protocol-support-custom-alerts.png":::
+
+### New sensor installation wizard
+
+Previously, you needed to use separate dialogs to upload a sensor activation file, verify your sensor network configuration, and configure your SSL/TLS certificates.
+
+Now, when installing a new sensor or a new sensor version, our installation wizard provides a streamlined interface to do all these tasks from a single location.
+
+For more information, see [Defender for IoT installation](how-to-install-software.md).
+
+### Containerized sensor installation
+
+The Defender for IoT sensor software installation is now containerized. With the now-containerized sensor, you can use the *cyberx_host* user to investigate issues with other containers or the operating system, or to send files via FTP.
+
+This *cyberx_host* user is available by default and connects to the host machine. If needed, you can recover the password for the *cyberx_host* user from the **Sites and sensors** page in Defender for IoT.
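For example, a minimal host-level troubleshooting session might look like the following sketch. The sensor address is a placeholder, and the diagnostic commands shown are ordinary Linux tools rather than Defender for IoT-specific commands; adjust them to whatever you're investigating.

```bash
# Connect to the sensor host as the cyberx_host user (recover the password
# from the Sites and sensors page if needed, as described above).
ssh cyberx_host@<sensor-ip-address>

# Once connected, standard Linux tooling is available for host-level checks,
# for example recent system log entries and disk usage:
journalctl --since "1 hour ago" --no-pager | tail -n 50
df -h
```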
+
+As part of the containerized sensor, the following CLI commands have been modified:
+
+|Legacy name |Replacement |
+|--|--|
+|`cyberx-xsense-reconfigure-interfaces` |`sudo dpkg-reconfigure iot-sensor` |
+|`cyberx-xsense-reload-interfaces` | `sudo dpkg-reconfigure iot-sensor` |
+|`cyberx-xsense-reconfigure-hostname` | `sudo dpkg-reconfigure iot-sensor` |
+| `cyberx-xsense-system-remount-disks` |`sudo dpkg-reconfigure iot-sensor` |
+| | |
+
+The `sudo cyberx-xsense-limit-interface-I eth0 -l value` CLI command was removed. This command was used to limit the interface bandwidth that the sensor uses for day-to-day procedures, and is no longer supported.
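For example, where you previously reconfigured the monitoring interfaces with one of the legacy commands, you now rerun the sensor's interactive configuration with the single replacement command from the table above:

```bash
# Legacy command (no longer available): cyberx-xsense-reconfigure-interfaces
# Replacement - reruns the interactive sensor configuration:
sudo dpkg-reconfigure iot-sensor
```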
+
+For more information, see [Defender for IoT installation](how-to-install-software.md) and [Work with Defender for IoT CLI commands](references-work-with-defender-for-iot-cli-commands.md).
+
+### Upgrade to version 22.1
+
+Upgrade your sensor versions directly to 22.1. Make sure that you've downloaded the update and upgraded your sensor machine, and then reactivate your sensor from the Azure portal using the new activation file.
+
+For more information, see:
+
+- [Update a standalone sensor version](how-to-manage-individual-sensors.md#update-a-standalone-sensor-version)
+- [Reactivate a sensor for upgrades to version 22.x from a legacy version](how-to-manage-sensors-on-the-cloud.md#reactivate-a-sensor-for-upgrades-to-version-22x-from-a-legacy-version)
+
+After you've upgraded to version 22.1.x, the new upgrade log can be found at the following path, accessed via SSH and the *cyberx_host* user: `/opt/sensor/logs/legacy-upgrade.log`.
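For example, to review the tail of that log after connecting over SSH as the *cyberx_host* user (the sensor address is a placeholder):

```bash
# Show the last lines of the legacy upgrade log on a sensor upgraded to 22.1.x.
ssh cyberx_host@<sensor-ip-address> 'tail -n 100 /opt/sensor/logs/legacy-upgrade.log'
```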
+
+### New connectivity model and firewall requirements
+
+With this version, users are only required to install sensors and connect to Defender for IoT on the Azure portal. Defender for IoT no longer requires you to install, pay for, or manage an IoT Hub.
+
+This new connectivity model requires that you open a new firewall rule. For more information, see [Sensor access to Azure portal](how-to-set-up-your-network.md#sensor-access-to-azure-portal).
+
+### Protocol improvements
+
+This version of Defender for IoT provides improved support for:
+
+- Profinet DCP
+- Honeywell
+- Windows endpoint detection
+
+### Modified, replaced, or removed options and configurations
+
+The following Defender for IoT options and configurations have been moved, removed, and/or replaced:
+
+- Reports previously found on the **Reports** page are now shown on the **Data Mining** page instead. You can also continue to view data mining information directly from the on-premises management console.
+
+- Changing a locally managed sensor name is now supported only by onboarding the sensor to the Azure portal again with the new name. Sensor names can no longer be changed directly from the sensor. For more information, see [Change the name of a sensor](how-to-manage-individual-sensors.md#change-the-name-of-a-sensor).
+
## December 2021
- [Enhanced integration with Microsoft Sentinel (Preview)](#enhanced-integration-with-microsoft-sentinel-preview)
Version 10.5.4 of Microsoft Defender for IoT mitigates the Apache Log4j vulnerab
Version 10.5.4 of Microsoft Defender for IoT delivers important alert enhancements:
- Alerts for certain minor events or edge-cases are now disabled.
-- For certain scenarios, similar alert are minimized in a single alert message.
+- For certain scenarios, similar alerts are minimized in a single alert message.
These changes reduce alert volume and enable more efficient targeting and analysis of security and operational events.
The alerts listed below are permanently disabled with version 10.5.4. Detection
#### Alerts disabled by default
-The alerts listed below are disabled by default with version 10.5.4. You can re-enable the alerts from the Support page of the sensor console, if required.
+The alerts listed below are disabled by default with version 10.5.4. You can re-enable the alerts from the Support page of the sensor console, if necessary.
**Anomaly engine alert** - Abnormal Number of Parameters in HTTP Header
The alerts listed below are disabled by default with version 10.5.4. You can re-
**Policy engine alerts**
-Disabling these alerts also disables monitoring of related traffic. Specifically, this traffic will not be reported in Data Mining reports.
+Disabling these alerts also disables monitoring of related traffic. Specifically, this traffic won't be reported in Data Mining reports.
- Illegal HTTP Communication alert and HTTP Connections Data Mining traffic - Unauthorized HTTP User Agent alert and HTTP User Agents Data Mining traffic
Disabling these alerts also disables monitoring of related traffic. Specifically
**Unauthorized Database Operation alert**
Previously, this alert covered DDL and DML alerting and Data Mining reporting. Now:
- DDL traffic: alerting and monitoring are supported.
-- DML traffic: Monitoring is supported. Alerting is not supported.
+- DML traffic: Monitoring is supported. Alerting isn't supported.
**New Asset Detected alert**
-This alert is disabled for new devices detected in IT subnets. The New Asset Detected alert is still triggered for new devices discovered in OT subnets. OT subnets are detected automatically and can be updated by users if required.
+This alert is disabled for new devices detected in IT subnets. The New Asset Detected alert is still triggered for new devices discovered in OT subnets. OT subnets are detected automatically and can be updated by users if necessary.
### Minimized alerting
The following feature enhancements are available with version 10.5.3 of Microsof
- As part of our automated maintenance, archived alerts that are over 90 days old will now be automatically deleted.
-- A number of enhancements have been made to the exporting of alert metadata based on customer feedback.
+- Many enhancements have been made to the exporting of alert metadata based on customer feedback.
## October 2021
The following feature enhancements are available with version 10.5.2 of Microsof
Users can now view PLC operating mode states, changes, and risks. The PLC Operating mode consists of the PLC logical Run state and the physical Key state, if a physical key switch exists on the PLC.
-This new capability helps improve security by detecting *unsecure* PLCs, and as a result prevents malicious attacks such as PLC Program Downloads. The 2017 Triton attack on a petrochemical plant illustrates the impact of such risks.
+This new capability helps improve security by detecting *unsecure* PLCs, and as a result prevents malicious attacks such as PLC Program Downloads. The 2017 Triton attack on a petrochemical plant illustrates the effects of such risks.
This information also provides operational engineers with critical visibility into the operational mode of enterprise PLCs. #### What is an unsecure mode?
Unicode characters are now supported when working with sensor certificate passph
### Work with automatic threat Intelligence updates (Public Preview)
-New threat intelligence packages can now be automatically pushed to cloud connected sensors as they are released by Microsoft Defender for IoT. This is in addition to downloading threat intelligence packages and then uploading them to sensors.
+New threat intelligence packages can now be automatically pushed to cloud connected sensors as they're released by Microsoft Defender for IoT. This is in addition to downloading threat intelligence packages and then uploading them to sensors.
Working with automatic updates helps reduce operational efforts and ensure greater security. Enable automatic updating by onboarding your cloud connected sensor on the Defender for IoT portal with the **Automatic Threat Intelligence Updates** toggle turned on.
-If you would like to take a more conservative approach to updating your threat intelligence data, you can manually push packages from the Microsoft Defender for IoT portal to cloud connected sensors only when you feel it is required.
+If you would like to take a more conservative approach to updating your threat intelligence data, you can manually push packages from the Microsoft Defender for IoT portal to cloud connected sensors only when you feel it's required.
This gives you the ability to control when a package is installed, without the need to download and then upload it to your sensors. Manually push updates to sensors from the Defender for IoT **Sites and Sensors** page. You can also review the following information about threat intelligence packages:
This feature is available on the on-premises management console with the release
Certificate and password recovery enhancements were made for this release.
#### Certificates
-
+ This version lets you: - Upload SSL certificates directly to the sensors and on-premises management consoles.-- Perform validation between the on-premises management console and connected sensors, and between a management console and a High Availability management console. Validation is based on expiration dates, root CA authenticity, and Certificate Revocation Lists. If validation fails, the session will not continue.
+- Perform validation between the on-premises management console and connected sensors, and between a management console and a High Availability management console. Validation is based on expiration dates, root CA authenticity, and Certificate Revocation Lists. If validation fails, the session won't continue.
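Before uploading a certificate, you may want to confirm locally that it isn't near expiry and that it chains to the root CA you expect. A minimal sketch using standard OpenSSL tooling, with placeholder file names:

```bash
# Show the certificate's validity window and issuer (placeholder file names).
openssl x509 -in sensor-cert.pem -noout -dates -issuer

# Verify the certificate against the CA chain you expect it to chain to.
openssl verify -CAfile ca-chain.pem sensor-cert.pem
```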
For upgrades:
-- There is no change in SSL certificate or validation functionality during the upgrade.
+- There's no change in SSL certificate or validation functionality during the upgrade.
- After upgrading, sensor and on-premises management console administrative users can replace SSL certificates, or activate SSL certificate validation from the System Settings, SSL Certificate window.
For Fresh Installations:
-- During first-time login, users are required to either use an SSL Certificate (recommended) or a locally generated self-signed certificate (not recommended)
+- During first-time sign-in, users are required to either use an SSL Certificate (recommended) or a locally generated self-signed certificate (not recommended)
- Certificate validation is turned on by default for fresh installations.
#### Password recovery
Sensor and on-premises management console Administrative users can now recover p
Following initial sign-in to the on-premises management console, users are now required to upload an activation file. The file contains the aggregate number of devices to be monitored on the organizational network. This number is referred to as the number of committed devices. Committed devices are defined during the onboarding process on the Microsoft Defender for IoT portal, where the activation file is generated. First-time users and users upgrading are required to upload the activation file.
-After initial activation, the number of devices detected on the network might exceed the number of committed devices. This event might happen, for example, if you connect more sensors to the management console. If there is a discrepancy between the number of detected devices and the number of committed devices, a warning appears in the management console. If this event occurs, you should upload a new activation file.
+After initial activation, the number of devices detected on the network might exceed the number of committed devices. This event might happen, for example, if you connect more sensors to the management console. If there's a discrepancy between the number of detected devices and the number of committed devices, a warning appears in the management console. If this event occurs, you should upload a new activation file.
#### Pricing page options
Security Reader and Security Administrator support has been added.
### Other updates #### Access group - zone permissions
-
-The on-premises management console Access Group rules will not include the option to grant access to a specific zone. There is no change in defining rules that use sites, regions, and business units. Following upgrade, Access Groups that contained rules allowing access to specific zones will be modified to allow access to its parent site, including all its zones.
+
+The on-premises management console Access Group rules won't include the option to grant access to a specific zone. There's no change in defining rules that use sites, regions, and business units. Following upgrade, Access Groups that contained rules allowing access to specific zones will be modified to allow access to its parent site, including all its zones.
#### Terminology changes
defender-for-iot Tutorial Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-onboarding.md
Before you can start using your Defender for IoT sensor, you will need to onboar
- **Locally managed sensors**: Information that sensors detect is displayed in the sensor console. If you're working in an air-gapped network and want a unified view of all information detected by multiple locally managed sensors, work with the on-premises management console.
-1. Select a site to associate your sensor to within an IoT Hub. The IoT Hub will serve as a gateway between this sensor and Microsoft Defender for IoT. Define the display name, and zone. You can also add descriptive tags. The display name, zone, and tags are descriptive entries on the [View onboarded sensors](how-to-manage-sensors-on-the-cloud.md#view-onboarded-sensors).
+1. Select a site to associate your sensor to within an IoT Hub. The IoT Hub will serve as a gateway between this sensor and Microsoft Defender for IoT. Define the display name and zone. You can also add descriptive tags. The display name, zone, and tags are descriptive entries on the [View onboarded sensors](how-to-manage-sensors-on-the-cloud.md#manage-on-boarded-sensors) page.
1. Select **Register**.
dms Ads Sku Recommend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/ads-sku-recommend.md
+
+ Title: Get right-sized Azure recommendation for your on-premises SQL Server database(s)
+description: Learn how to use the Azure SQL Migration extension in Azure Data Studio to get SKU recommendation to migrate SQL Server database(s) to the right-sized Azure SQL Managed Instance or SQL Server on Azure Virtual Machines.
++++++++ Last updated : 02/22/2022+++
+# Get right-sized Azure recommendation for your on-premises SQL Server database(s)
+
+The [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) provides a unified experience to assess, get right-sized Azure recommendations and migrate your SQL Server database(s) to Azure.
+
+Before migrating your SQL Server databases to Azure, it is important to assess them to identify any migration issues so you can remediate them and confidently migrate them to Azure. It is equally important to identify the right-sized configuration in Azure to ensure your database workload performance requirements are met at minimal cost.
+
+The Azure SQL Migration extension for Azure Data Studio provides both the assessment and SKU recommendation (right-sized Azure recommended configuration) capabilities when you are trying to select the best option to migrate your SQL Server database(s) to Azure SQL Managed Instance or SQL Server on Azure Virtual Machines. The extension provides a user-friendly interface to run the assessment and generate recommendations within a short timeframe.
+
+> [!NOTE]
+> The assessment and Azure recommendation features in the Azure SQL Migration extension for Azure Data Studio also support source SQL Server instances running on Linux.
+
+## Performance data collection and SKU recommendation
+
+With the Azure SQL Migration extension, you can get a right-sized Azure recommendation to migrate your SQL Server databases to Azure SQL Managed Instance or SQL Server on Azure Virtual Machines. The extension collects and analyzes performance data from your SQL Server instance to generate a recommended SKU for each of Azure SQL Managed Instance and SQL Server on Azure Virtual Machines that meets your databases' performance characteristics at the lowest cost.
+
+The workflow for data collection and SKU recommendation is illustrated below.
++
+1. **Performance data collection**: To start the performance data collection process in the migration wizard, select **Get Azure recommendation** and choose the option to collect performance data as shown below. Provide the folder where the collected data will be saved and select **Start**.
+ :::image type="content" source="media/ads-sku-recommend/collect-performance-data.png" alt-text="Collect performance data for SKU recommendation":::
+
+    When you start the data collection process in the migration wizard, the Azure SQL Migration extension for Azure Data Studio collects data from your SQL Server instance that includes information about the hardware configuration and aggregated SQL Server-specific performance data from system Dynamic Management Views (DMVs), such as CPU utilization, memory utilization, storage size, IO throughput, and IO latency.
+ > [!IMPORTANT]
+ > - The data collection process runs for 10 minutes to generate the first recommendation. It is important to start the data collection process when your database workload reflects usage close to your production scenarios.</br>
+ > - After the first recommendation is generated, you can continue to run the data collection process to refine recommendations especially if your usage patterns vary for an extended duration of time.
+
+1. **Save generated data files locally**: The performance data is periodically aggregated and written to your local filesystem (in the folder that you selected while starting data collection in the migration wizard). Typically, you will see a set of CSV files with the following suffixes in the folder you selected:
+ - **_CommonDbLevel_Counters.csv** : This file contains static configuration data about the database file layout and metadata.
+ - **_CommonInstanceLevel_Counters.csv** : This file contains static data about the hardware configuration of the server instance.
+ - **_PerformanceAggregated_Counters.csv** : This file contains aggregated performance data that is updated frequently.
+1. **Analyze and recommend SKU**: The SKU recommender analyzes the captured common and performance data to recommend the minimum configuration with the least cost that will meet your database's performance requirements. You can also view details about the reason behind the recommendation and source properties that were analyzed. *For SQL Server on Azure Virtual Machines, the SKU recommender also recommends the desired storage configuration for data files, log files and tempdb.*</br> The SKU recommender provides optional parameters that can be modified to refine recommendations based on your inputs about the production workload.
+ - **Scale factor**: Scale ('comfort') factor used to inflate or deflate SKU recommendation based on your understanding of the production workload. For example, if it is determined that there is a 4 vCore CPU requirement with a scale factor of 150%, then the true CPU requirement will be 6 vCores. (Default value: 100)
+ - **Percentage utilization**: Percentile of data points to be used during aggregation of the performance data. (Default: 95th Percentile)
+ - **Enable preview features**: Enabling this option will include the latest hardware generations that have significantly improved performance and scalability. These SKUs are currently in Preview and may not yet be available in all regions. (Default value: Yes)
++
+ > [!IMPORTANT]
+ > The data collection process will terminate if you close Azure Data Studio. However, the data that was collected until that point will be saved in your folder.</br>
+    > If you close Azure Data Studio while the data collection is in progress, you can either:
+    > - return to import the data files that are saved in your local folder to generate a recommendation from the collected data, or
+    > - return to start the data collection again from the migration wizard.
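If you want to spot-check the generated files before importing them, you can inspect them with standard command-line tools. A minimal sketch, assuming the files were written to the folder you selected in the wizard:

```bash
# Placeholder: the folder you selected when starting data collection.
DATA_DIR="<folder-you-selected>"

# List the generated counter files and preview the aggregated performance data.
ls "${DATA_DIR}"/*_Counters.csv
head -n 5 "${DATA_DIR}"/*_PerformanceAggregated_Counters.csv
```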
+
+### Import existing performance data
+Any existing performance data that you collected previously using the Azure SQL Migration extension or [using the console application in Data Migration Assistant](/sql/dma/dma-sku-recommend-sql-db) can be imported into the migration wizard to view the recommendation.</br>
+Simply provide the folder location where the performance data files are saved and select **Start** to instantly view the recommendation and its details.</br>
+ :::image type="content" source="media/ads-sku-recommend/import-sku-data.png" alt-text="Import performance data for SKU recommendation":::
+## Prerequisites
+
+The following prerequisites are required to get an Azure recommendation:
+* [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio)
+* [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace
+* Ensure that the logins used to connect to the source SQL Server are members of the *sysadmin* server role or have `CONTROL SERVER` permission.
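To confirm up front that the login you plan to use meets this requirement, you can run a quick check against the source instance. A minimal sketch using sqlcmd; the server name and credentials are placeholders:

```bash
# Returns 1 in either column if the login is a sysadmin or holds CONTROL SERVER.
sqlcmd -S <source-server> -U <login> -P <password> -Q "SELECT IS_SRVROLEMEMBER('sysadmin') AS is_sysadmin, HAS_PERMS_BY_NAME(NULL, NULL, 'CONTROL SERVER') AS has_control_server;"
```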
+
+## Next steps
+
+- For an overview of the architecture to migrate databases, see [Migrate databases with Azure SQL Migration extension for Azure Data Studio](migration-using-azure-data-studio.md).
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-using-azure-data-studio.md
Previously updated : 09/01/2021 Last updated : 02/22/2022 # Migrate databases with Azure SQL Migration extension for Azure Data Studio (Preview)
-The [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) enables you to use the new SQL Server assessment and migration capability in Azure Data Studio.
+The [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) enables you to assess, get Azure recommendations and migrate your SQL Server databases to Azure.
## Architecture of Azure SQL Migration extension for Azure Data Studio
Azure Database Migration Service prerequisites that are common across all suppor
- Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share). - Owner or Contributor role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account. - Owner or Contributor role for the Azure subscription.
+ > [!IMPORTANT]
+    > An Azure account is required only when configuring the migration steps; it is not required for the assessment or Azure recommendation steps in the migration wizard.
* Create a target [Azure SQL Managed Instance](../azure-sql/managed-instance/instance-create-quickstart.md) or [SQL Server on Azure Virtual Machine](../azure-sql/virtual-machines/windows/create-sql-vm-portal.md) > [!IMPORTANT]
Azure Database Migration Service prerequisites that are common across all suppor
- SSIS packages
- Server roles
- Server audit
-- Automating migrations with Azure Data Studio using PowerShell / CLI isn't supported.
- SQL Server 2014 and below are not supported.
- Migrating to Azure SQL Database isn't supported.
- Azure storage accounts secured by specific firewall rules or configured with a private endpoint are not supported for migrations.
dms Tutorial Sql Server Managed Instance Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md
Last updated 10/05/2021
You can use the Azure SQL Migration extension in Azure Data Studio to migrate the database(s) from a SQL Server instance to Azure SQL Managed Instance. For methods that may require some manual effort, see the article [SQL Server instance migration to Azure SQL Managed Instance](../azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide.md).
-In this tutorial, you migrate the **Adventureworks** database from an on-premises instance of SQL Server to Azure SQL Managed Instance by using Azure Data Studio with Azure Database Migration Service (DMS). This tutorial focuses on the offline migration mode that considers an acceptable downtime during the migration process.
+In this tutorial, you migrate the **AdventureWorks** database from an on-premises instance of SQL Server to Azure SQL Managed Instance by using Azure Data Studio with Azure Database Migration Service (DMS). This tutorial focuses on the offline migration mode, which assumes that downtime is acceptable during the migration process.
In this tutorial, you learn how to: > [!div class="checklist"] >
-> * Launch the Migrate to Azure SQL wizard in Azure Data Studio.
+> * Launch the *Migrate to Azure SQL* wizard in Azure Data Studio
> * Run an assessment of your source SQL Server database(s)
-> * Specify details of your source SQL Server, backup location and your target Azure SQL Managed Instance
+> * Collect performance data from your source SQL Server
+> * Get a recommendation of the Azure SQL Managed Instance SKU best suited for your workload
+> * Specify details of your source SQL Server, backup location, and your target Azure SQL Managed Instance
> * Create a new Azure Database Migration Service and install the self-hosted integration runtime to access source server and backups. > * Start and monitor the progress for your migration through to completion
To complete this tutorial, you need to:
- Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share). - Owner or Contributor role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account. - Owner or Contributor role for the Azure subscription (required if creating a new DMS service).
-* Create a SQL Managed Instance by following the detail in the article [Create a SQL Managed Instance in the Azure portal](../azure-sql/managed-instance/instance-create-quickstart.md).
+ > [!IMPORTANT]
+    > An Azure account is required only when configuring the migration steps; it is not required for the assessment or Azure recommendation steps in the migration wizard.
+* Create a target [Azure SQL Managed Instance](../azure-sql/managed-instance/instance-create-quickstart.md).
* Ensure that the logins used to connect the source SQL Server are members of the *sysadmin* server role or have `CONTROL SERVER` permission. * Provide an SMB network share, Azure storage account file share, or Azure storage account blob container that contains your full database backup files and subsequent transaction log backup files, which Azure Database Migration Service can use for database migration. > [!IMPORTANT] > - If your database backup files are provided in an SMB network share, [Create an Azure storage account](../storage/common/storage-account-create.md) that allows DMS service to upload the database backup files to and use for migrating databases. Make sure to create the Azure Storage Account in the same region as the Azure Database Migration Service instance is created. > - You can't use an Azure Storage account that has a private endpoint with Azure Database Migration Service. > - Azure Database Migration Service does not initiate any backups, and instead uses existing backups, which you may already have as part of your disaster recovery plan, for the migration.
- > - You should take [backups using the `WITH CHECKSUM` option](/sql/relational-databases/backup-restore/enable-or-disable-backup-checksums-during-backup-or-restore-sql-server?preserve-view=true&view=sql-server-2017).
+ > - You need to take [backups using the `WITH CHECKSUM` option](/sql/relational-databases/backup-restore/enable-or-disable-backup-checksums-during-backup-or-restore-sql-server?preserve-view=true&view=sql-server-2017).
   > - Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (i.e. full and t-log) into a single backup media is not supported.
   > - You can provide compressed backups to reduce the likelihood of experiencing potential issues associated with migrating large backups.
* Ensure that the service account running the source SQL Server instance has read and write permissions on the SMB network share that contains database backup files.
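For example, taking the full and transaction log backups with the `WITH CHECKSUM` option (and compression) might look like the following sketch; the server, credentials, share path, and database name are placeholders:

```bash
# Run the backups against the source instance with sqlcmd (placeholders throughout).
sqlcmd -S <source-server> -U <login> -P <password> <<'SQL'
BACKUP DATABASE [AdventureWorks]
  TO DISK = N'\\fileshare\backups\AdventureWorks_full.bak'
  WITH CHECKSUM, COMPRESSION, INIT;
GO
BACKUP LOG [AdventureWorks]
  TO DISK = N'\\fileshare\backups\AdventureWorks_log1.trn'
  WITH CHECKSUM, COMPRESSION;
GO
SQL
```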
To complete this tutorial, you need to:
1. On the server's home page, Select **Azure SQL Migration** extension. 1. On the Azure SQL Migration dashboard, select **Migrate to Azure SQL** to launch the migration wizard. :::image type="content" source="media/tutorial-sql-server-to-managed-instance-offline-ads/launch-migrate-to-azure-sql-wizard.png" alt-text="Launch Migrate to Azure SQL wizard":::
-1. In the first step of the migration wizard, link your Azure account if you've signed in to Azure Data Studio already or link a new Azure account.
-
-## Run database assessment and select target
+1. The first page of the wizard lets you start a new session or resume a previously saved one. Select the first option to start a new session.
+## Run database assessment, collect performance data and get Azure recommendation
1. Select the database(s) to run assessment and select **Next**. 1. Select Azure SQL Managed Instance as the target. :::image type="content" source="media/tutorial-sql-server-to-managed-instance-offline-ads/assessment-complete-target-selection.png" alt-text="Assessment confirmation"::: 1. Select on the **View/Select** button to view details of the assessment results for your database(s), select the database(s) to migrate, and select **OK**. :::image type="content" source="media/tutorial-sql-server-to-managed-instance-offline-ads/assessment-issues-details.png" alt-text="Database assessment details":::
-1. Specify your **target Azure SQL Managed Instance** by selecting your subscription, location, resource group from the corresponding drop-down lists and select **Next**.
+1. Select **Get Azure recommendation**.
+2. Select the **Collect performance data now** option, enter a path where the performance logs should be collected, and then select **Start**.
+3. Azure Data Studio collects performance data until you either stop the collection, select **Next** in the wizard, or close Azure Data Studio.
+4. After 10 minutes, a recommended configuration for your Azure SQL Managed Instance is displayed. You can also select the **Refresh recommendation** link to get the recommendation sooner.
+5. In the **Azure SQL Managed Instance** box, select **View details** for more information about your recommendation.
+6. Close the view details box and select **Next**.
## Configure migration settings
-1. Select **Online migration** as the migration mode.
+1. Specify your **Azure SQL Managed Instance** by selecting your subscription, location, and resource group from the corresponding drop-down lists, and then select **Next**.
+1. Select **Offline migration** as the migration mode.
> [!NOTE] > In the offline migration mode, the source SQL Server database is not available for read and write activity while database backups are restored on target Azure SQL Managed Instance. Application downtime needs to be considered till the migration completes. 1. Select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure storage blob container.
To complete this tutorial, you need to:
> [!NOTE] > If your database backups are provided in an on-premises network share, DMS will require you to setup self-hosted integration runtime in the next step of the wizard. Self-hosted integration runtime is required to access your source database backups, check the validity of the backup set and upload them to Azure storage account.<br/> If your database backups are already on an Azure storage blob container, you do not need to setup self-hosted integration runtime.
-1. After selecting the backup location, provide details of your source SQL Server and source backup location.
+1. If you picked the network share option, provide details of your source SQL Server, the source backup location, the target database name, and the Azure storage account that the backup files will be uploaded to.
|Field |Description | ||-|
To complete this tutorial, you need to:
|**Windows user account with read access to the network share location** |The Windows credential (username) that has read access to the network share to retrieve the backup files. | |**Password** |The Windows credential (password) that has read access to the network share to retrieve the backup files. | |**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. |
+    |**Storage account details** |The resource group and storage account where backup files will be uploaded. You do not need to create a container as DMS will automatically create a blob container in the specified storage account during the upload process. |
-1. Specify the **Azure storage account** by selecting the **Subscription**, **Location**, and **Resource Group** from the corresponding drop-down lists. This Azure storage account will be used by DMS to upload the database backups from network share. You do not need to create a container as DMS will automatically create a blob container in the specified storage account during the upload process.
+1. If you picked the second option, for backups stored in an Azure Blob Container, specify the **Target database name**, **Resource group**, **Azure storage account**, **Blob container**, and **Last backup file** from the corresponding drop-down lists.
> [!IMPORTANT] > If loopback check functionality is enabled and the source SQL Server and file share are on the same computer, then source won't be able to access the files hare using FQDN. To fix this issue, disable loopback check functionality using the instructions [here](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd)
To complete this tutorial, you need to:
> [!NOTE] > If you had previously created DMS using the Azure Portal, you cannot reuse it in the migration wizard in Azure Data Studio. Only DMS created previously using Azure Data Studio can be reused. 1. Select the **Resource group** where you have an existing DMS or need to create a new one. The **Azure Database Migration Service** dropdown will list any existing DMS in the selected resource group.
-1. To reuse an existing DMS, select it from the dropdown list and the status of the self-hosted integration runtime will be displayed at the bottom of the page.
+1. To reuse an existing DMS, select it from the dropdown list, and then select **Next** to view the summary screen. When you're ready to begin the migration, select **Start migration**.
1. To create a new DMS, select **Create new**. On the **Create Azure Database Migration Service**, screen provide the name for your DMS and select **Create**. 1. After successful creation of DMS, you'll be provided with details to set up **integration runtime**. 1. Select on **Download and install integration runtime** to open the download link in a web browser. Complete the download. Install the integration runtime on a machine that meets the pre-requisites of connecting to source SQL Server and the location containing the source backup.
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
Last updated 10/05/2021
Use the Azure SQL Migration extension in Azure Data Studio to migrate database(s) from a SQL Server instance to an [Azure SQL Managed Instance](../azure-sql/managed-instance/sql-managed-instance-paas-overview.md) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to Azure SQL Managed Instance](../azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide.md).
-In this tutorial, you migrate the **Adventureworks** database from an on-premises instance of SQL Server to Azure SQL Managed Instance with minimal downtime by using Azure Data Studio with Azure Database Migration Service (DMS).
+In this tutorial, you migrate the **AdventureWorks** database from an on-premises instance of SQL Server to Azure SQL Managed Instance with minimal downtime by using Azure Data Studio with Azure Database Migration Service (DMS). This tutorial focuses on the online migration mode where application downtime is limited to a short cutover at the end of the migration.
In this tutorial, you learn how to: > [!div class="checklist"] > > * Launch the *Migrate to Azure SQL* wizard in Azure Data Studio. > * Run an assessment of your source SQL Server database(s)
+> * Collect performance data from your source SQL Server
+> * Get a recommendation of the Azure SQL Managed Instance SKU best suited for your workload
> * Specify details of your source SQL Server, backup location and your target Azure SQL Managed Instance > * Create a new Azure Database Migration Service and install the self-hosted integration runtime to access source server and backups. > * Start and monitor the progress for your migration.
To complete this tutorial, you need to:
- Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share). - Owner or Contributor role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account. - Owner or Contributor role for the Azure subscription (required if creating a new DMS service).
+ > [!IMPORTANT]
+    > An Azure account is required only when configuring the migration steps; it is not required for the assessment or Azure recommendation steps in the migration wizard.
* Create a target [Azure SQL Managed Instance](../azure-sql/managed-instance/instance-create-quickstart.md). * Ensure that the logins used to connect the source SQL Server are members of the *sysadmin* server role or have `CONTROL SERVER` permission. * Use one of the following storage options for the full database and transaction log backup files:
To complete this tutorial, you need to:
> - If your database backup files are provided in an SMB network share, [Create an Azure storage account](../storage/common/storage-account-create.md) that allows the DMS service to upload the database backup files. Make sure to create the Azure Storage Account in the same region as the Azure Database Migration Service instance is created. > - You can't use an Azure Storage account that has a private endpoint with Azure Database Migration Service. > - Azure Database Migration Service does not initiate any backups, and instead uses existing backups, which you may already have as part of your disaster recovery plan, for the migration.
- > - You should take [backups using the `WITH CHECKSUM` option](/sql/relational-databases/backup-restore/enable-or-disable-backup-checksums-during-backup-or-restore-sql-server).
+ > - You need to take [backups using the `WITH CHECKSUM` option](/sql/relational-databases/backup-restore/enable-or-disable-backup-checksums-during-backup-or-restore-sql-server).
   > - Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (i.e. full and t-log) into a single backup media is not supported.
   > - Use compressed backups to reduce the likelihood of experiencing potential issues associated with migrating large backups.
* Ensure that the service account running the source SQL Server instance has read and write permissions on the SMB network share that contains database backup files.
To complete this tutorial, you need to:
1. On the server's home page, Select **Azure SQL Migration** extension. 1. On the Azure SQL Migration dashboard, select **Migrate to Azure SQL** to launch the migration wizard. :::image type="content" source="media/tutorial-sql-server-to-managed-instance-online-ads/launch-migrate-to-azure-sql-wizard.png" alt-text="Launch Migrate to Azure SQL wizard":::
-1. In the first step of the migration wizard, link your existing or new Azure account to Azure Data Studio.
-
-## Run database assessment and select target
+1. The first page of the wizard lets you start a new session or resume a previously saved one. Select the first option to start a new session.
+## Run database assessment, collect performance data and get Azure recommendation
1. Select the database(s) to run assessment and select **Next**. 1. Select Azure SQL Managed Instance as the target. :::image type="content" source="media/tutorial-sql-server-to-managed-instance-online-ads/assessment-complete-target-selection.png" alt-text="Assessment confirmation"::: 1. Select on the **View/Select** button to view details of the assessment results for your database(s), select the database(s) to migrate, and select **OK**. If any issues are displayed in the assessment results, they need to be remediated before proceeding with the next steps. :::image type="content" source="media/tutorial-sql-server-to-managed-instance-online-ads/assessment-issues-details.png" alt-text="Database assessment details":::
-1. Specify your **target Azure SQL Managed Instance** by selecting your subscription, location, resource group from the corresponding drop-down lists and select **Next**.
+1. Select the **Get Azure recommendation** button.
+2. Select the **Collect performance data now** option, enter a path where the performance logs will be collected, and select **Start**.
+3. Azure Data Studio now collects performance data until you either stop the collection, select **Next** in the wizard, or close Azure Data Studio.
+4. After 10 minutes, you'll see a recommended configuration for your Azure SQL Managed Instance. You can also select the **Refresh recommendation** link to get the recommendation sooner.
+5. In the **Azure SQL Managed Instance** box above, select the **View details** button for more information about your recommendation.
+6. Close the details view and select **Next**.
## Configure migration settings
+1. Specify your **Azure SQL Managed Instance** by selecting your subscription, location, and resource group from the corresponding drop-down lists, and then select **Next**.
1. Select **Online migration** as the migration mode.
> [!NOTE]
> In the online migration mode, the source SQL Server database is available for read and write activity while database backups are continuously restored on the target Azure SQL Managed Instance. Application downtime is limited to the duration of the cutover at the end of the migration.
1. Select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure storage blob container.
> [!NOTE]
> If your database backups are provided on an on-premises network share, DMS requires you to set up a self-hosted integration runtime in the next step of the wizard. A self-hosted integration runtime is required to access your source database backups, check the validity of the backup set, and upload them to the Azure storage account.<br/> If your database backups are already in an Azure storage blob container, you don't need to set up a self-hosted integration runtime.
-1. After selecting the backup location, provide details of your source SQL Server and source backup location.
+
+1. If you selected the network share option, provide details of your source SQL Server, the source backup location, the target database name, and the Azure storage account that the backup files will be uploaded to.
|Field |Description |
|---------|-------------|
To complete this tutorial, you need to:
|**Windows user account with read access to the network share location** |The Windows credential (username) that has read access to the network share to retrieve the backup files. |
|**Password** |The Windows credential (password) that has read access to the network share to retrieve the backup files. |
|**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. |
+ |**Storage account details** |The resource group and storage account where the backup files will be uploaded. You don't need to create a container, as DMS automatically creates a blob container in the specified storage account during the upload process. |
-1. Specify the **Azure storage account** by selecting the **Subscription**, **Location**, and **Resource Group** from the corresponding drop-down lists. This Azure storage account will be used by DMS to upload the database backups from network share. You don't need to create a container as DMS will automatically create a blob container in the specified storage account during the upload process.
+1. If you selected the option for backups stored in an Azure Blob container, specify the **Target database name**, **Resource group**, **Azure storage account**, **Blob container**, and **Last backup file** from the corresponding drop-down lists.
> [!IMPORTANT]
> If loopback check functionality is enabled and the source SQL Server and file share are on the same computer, the source won't be able to access the file share using its FQDN. To fix this issue, disable loopback check functionality using the instructions [here](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd)
dms Tutorial Sql Server To Virtual Machine Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-offline-ads.md
Last updated 10/05/2021
Use the Azure SQL Migration extension in Azure Data Studio to migrate the databases from a SQL Server instance (SQL Server 2016 and above) to a [SQL Server on Azure Virtual Machine](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to SQL Server on Azure Virtual Machine](../azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md).
-In this tutorial, you migrate the **Adventureworks** database from an on-premises instance of SQL Server to a SQL Server on Azure Virtual Machine with the offline migration method by using Azure Data Studio with Azure Database Migration Service.
+In this tutorial, you migrate the **AdventureWorks** database from an on-premises instance of SQL Server to a SQL Server on Azure Virtual Machine with the offline migration method by using Azure Data Studio with Azure Database Migration Service.
In this tutorial, you learn how to:
> [!div class="checklist"]
>
> * Launch the Migrate to Azure SQL wizard in Azure Data Studio.
> * Run an assessment of your source SQL Server database(s)
+> * Collect performance data from your source SQL Server
+> * Get a recommendation of the SQL Server on Azure Virtual Machine SKU best suited for your workload
> * Specify details of your source SQL Server, backup location and your target SQL Server on Azure Virtual Machine
> * Create a new Azure Database Migration Service and install the self-hosted integration runtime to access source server and backups.
> * Start and monitor the progress for your migration through to completion
To complete this tutorial, you need to:
- Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share).
- Owner or Contributor role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
- Owner or Contributor role for the Azure subscription.
+ > [!IMPORTANT]
+ > An Azure account is required only when you configure the migration steps; it isn't required for the assessment or Azure recommendation steps in the migration wizard.
* Create a target [SQL Server on Azure Virtual Machine](../azure-sql/virtual-machines/windows/create-sql-vm-portal.md).
> [!IMPORTANT]
To complete this tutorial, you need to:
:::image type="content" source="media/tutorial-sql-server-to-virtual-machine-online-ads/launch-migrate-to-azure-sql-wizard.png" alt-text="Launch Migrate to Azure SQL wizard"::: 1. In the first step of the migration wizard, link your existing or new Azure account to Azure Data Studio.
-## Run database assessment and select target
+## Run database assessment, collect performance data and get Azure recommendation
1. Select the database(s) to run the assessment on, and select **Next**.
1. Select SQL Server on Azure Virtual Machine as the target.
:::image type="content" source="media/tutorial-sql-server-to-virtual-machine-offline-ads/assessment-complete-target-selection.png" alt-text="Assessment confirmation":::
1. Select the **View/Select** button to view details of the assessment results for your database(s), select the database(s) to migrate, and select **OK**.
-1. Specify your **target SQL Server on Azure Virtual Machine** by selecting your subscription, location, resource group from the corresponding drop-down lists and select **Next**.
-
+1. Select the **Get Azure recommendation** button.
+2. Select the **Collect performance data now** option, enter a path where the performance logs will be collected, and select **Start**.
+3. Azure Data Studio now collects performance data until you either stop the collection, select **Next** in the wizard, or close Azure Data Studio.
+4. After 10 minutes, you'll see a recommended configuration for your SQL Server on Azure Virtual Machine. You can also select the **Refresh recommendation** link to get the recommendation sooner.
+5. In the **SQL Server on Azure Virtual Machine** box above, select the **View details** button for more information about your recommendation.
+6. Close the details view and select **Next**.
## Configure migration settings
-1. Select **Offline migration** as the migration mode.
+1. Specify your **target SQL Server on Azure Virtual Machine** by selecting your subscription, location, and resource group from the corresponding drop-down lists, and then select **Next**.
+2. Select **Offline migration** as the migration mode.
> [!NOTE]
> In the offline migration mode, the source SQL Server database is not available for write activity while database backup files are restored on the target Azure SQL database. Application downtime persists from the start of the migration until its completion.
-1. Select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure storage blob container.
+3. Select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure storage blob container.
> [!NOTE]
> If your database backups are provided on an on-premises network share, DMS requires you to set up a self-hosted integration runtime in the next step of the wizard. A self-hosted integration runtime is required to access your source database backups, check the validity of the backup set, and upload them to the Azure storage account.<br/> If your database backups are already in an Azure storage blob container, you don't need to set up a self-hosted integration runtime.
-1. After selecting the backup location, provide details of your source SQL Server and source backup location.
+4. After selecting the backup location, provide details of your source SQL Server and source backup location.
|Field |Description |
|---------|-------------|
To complete this tutorial, you need to:
|**Password** |The Windows credential (password) that has read access to the network share to retrieve the backup files. |
|**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. |
-1. Specify the **Azure storage account** by selecting the **Subscription**, **Location**, and **Resource Group** from the corresponding drop-down lists. This Azure storage account will be used by DMS to upload the database backups from network share. You don't need to create a container as DMS will automatically create a blob container in the specified storage account during the upload process.
+5. Specify the **Azure storage account** by selecting the **Subscription**, **Location**, and **Resource Group** from the corresponding drop-down lists. This Azure storage account will be used by DMS to upload the database backups from the network share. You don't need to create a container as DMS will automatically create a blob container in the specified storage account during the upload process.
> [!IMPORTANT]
> If loopback check functionality is enabled and the source SQL Server and file share are on the same computer, the source won't be able to access the file share using its FQDN. To fix this issue, disable loopback check functionality using the instructions [here](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd)
dms Tutorial Sql Server To Virtual Machine Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-online-ads.md
Last updated 10/05/2021
Use the Azure SQL Migration extension in Azure Data Studio to migrate the databases from a SQL Server instance (SQL Server 2016 and above) to a [SQL Server on Azure Virtual Machine](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to SQL Server on Azure Virtual Machine](../azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md).
-In this tutorial, you migrate the **Adventureworks** database from an on-premises instance of SQL Server to a SQL Server on Azure Virtual Machine with minimal downtime by using Azure Data Studio with Azure Database Migration Service.
+In this tutorial, you migrate the **AdventureWorks** database from an on-premises instance of SQL Server to a SQL Server on Azure Virtual Machine with minimal downtime by using Azure Data Studio with Azure Database Migration Service.
In this tutorial, you learn how to:
> [!div class="checklist"]
>
> * Launch the Migrate to Azure SQL wizard in Azure Data Studio.
> * Run an assessment of your source SQL Server database(s)
+> * Collect performance data from your source SQL Server
+> * Get a recommendation of the SQL Server on Azure Virtual Machine SKU best suited for your workload
> * Specify details of your source SQL Server, backup location and your target SQL Server on Azure Virtual Machine
> * Create a new Azure Database Migration Service and install the self-hosted integration runtime to access source server and backups.
> * Start and monitor the progress for your migration.
To complete this tutorial, you need to:
- Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share).
- Owner or Contributor role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
- Owner or Contributor role for the Azure subscription.
+ > [!IMPORTANT]
+ > An Azure account is required only when you configure the migration steps; it isn't required for the assessment or Azure recommendation steps in the migration wizard.
* Create a target [SQL Server on Azure Virtual Machine](../azure-sql/virtual-machines/windows/create-sql-vm-portal.md).
> [!IMPORTANT]
To complete this tutorial, you need to:
:::image type="content" source="media/tutorial-sql-server-to-virtual-machine-online-ads/launch-migrate-to-azure-sql-wizard.png" alt-text="Launch Migrate to Azure SQL wizard"::: 1. In the first step of the migration wizard, link your existing or new Azure account to Azure Data Studio.
-## Run database assessment and select target
+## Run database assessment, collect performance data and get Azure recommendation
-1. In Step 2 of the Migrate to Azure SQL wizard, select the database(s) to run assessment and select **Next**.
-1. In Step 3, check the confirmation message to migrate your database and select SQL Server on Azure Virtual Machine as the target.
- :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-online-ads/assessment-complete-target-selection.png" alt-text="Assessment confirmation":::
-1. Select on the **View/Select** button to view details of the assessment results for your database(s), select the database(s) to migrate and, select **OK**.
-1. Specify your **target SQL Server on Azure Virtual Machine** by selecting your subscription, location, resource group from the corresponding drop-down lists and select **Next**.
+1. Select the database(s) to run the assessment on, and select **Next**.
+1. Select SQL Server on Azure Virtual Machine as the target.
+ :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-offline-ads/assessment-complete-target-selection.png" alt-text="Assessment confirmation":::
+1. Select the **View/Select** button to view details of the assessment results for your database(s), select the database(s) to migrate, and select **OK**.
+1. Select the **Get Azure recommendation** button.
+2. Select the **Collect performance data now** option, enter a path where the performance logs will be collected, and select **Start**.
+3. Azure Data Studio now collects performance data until you either stop the collection, select **Next** in the wizard, or close Azure Data Studio.
+4. After 10 minutes, you'll see a recommended configuration for your SQL Server on Azure Virtual Machine. You can also select the **Refresh recommendation** link to get the recommendation sooner.
+5. In the **SQL Server on Azure Virtual Machine** box above, select the **View details** button for more information about your recommendation.
+6. Close the details view and select **Next**.
## Configure migration settings
-1. In step 4, select **Online migration** as the migration mode.
+1. Specify your **target SQL Server on Azure Virtual Machine** by selecting your subscription, location, and resource group from the corresponding drop-down lists, and then select **Next**.
+2. Select **Online migration** as the migration mode.
> [!NOTE]
> In the online migration mode, the source SQL Server database is available for read and write activity while database backups are continuously restored on the target SQL Server on Azure Virtual Machine. Application downtime is limited to the duration of the cutover at the end of the migration.
-1. In step 5, select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure storage blob container.
+3. Select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure storage blob container.
> [!NOTE]
> If your database backups are provided on an on-premises network share, DMS requires you to set up a self-hosted integration runtime in the next step of the wizard. A self-hosted integration runtime is required to access your source database backups, check the validity of the backup set, and upload them to the Azure storage account.<br/> If your database backups are already in an Azure storage blob container, you don't need to set up a self-hosted integration runtime.
-1. After selecting the backup location, provide details of your source SQL Server and source backup location.
+4. After selecting the backup location, provide details of your source SQL Server and source backup location.
|Field |Description |
|---------|-------------|
To complete this tutorial, you need to:
|**Password** |The Windows credential (password) that has read access to the network share to retrieve the backup files. |
|**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. |
-1. Specify the **Azure storage account** by selecting the **Subscription**, **Location**, and **Resource Group** from the corresponding drop-down lists. This Azure storage account will be used by DMS to upload the database backups from network share. You don't need to create a container as DMS will automatically create a blob container in the specified storage account during the upload process.
-1. Select **Next** to continue.
+5. Specify the **Azure storage account** by selecting the **Subscription**, **Location**, and **Resource Group** from the corresponding drop-down lists. This Azure storage account will be used by DMS to upload the database backups from the network share. You don't need to create a container as DMS will automatically create a blob container in the specified storage account during the upload process.
+6. Select **Next** to continue.
> [!IMPORTANT]
> If loopback check functionality is enabled and the source SQL Server and file share are on the same computer, the source won't be able to access the file share using its FQDN. To fix this issue, disable loopback check functionality using the instructions [here](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd)
event-hubs Event Hubs Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-ip-filtering.md
Title: Azure Event Hubs Firewall Rules | Microsoft Docs description: Use Firewall Rules to allow connections from specific IP addresses to Azure Event Hubs. Previously updated : 10/28/2021 Last updated : 02/23/2022 # Allow access to Azure Event Hubs namespaces from specific IP addresses or ranges
The following Resource Manager template enables adding an IP filter rule to an e
**ipMask** in the template is a single IPv4 address or a block of IP addresses in CIDR notation. For example, in CIDR notation 70.37.104.0/24 represents the 256 IPv4 addresses from 70.37.104.0 to 70.37.104.255, with 24 indicating the number of significant prefix bits for the range.
-When adding virtual network or firewalls rules, set the value of `defaultAction` to `Deny`.
+> [!NOTE]
+> The default value of the `defaultAction` property is `Allow`. When adding virtual network or firewall rules, make sure you set `defaultAction` to `Deny`.
```json
{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": {
- "eventhubNamespaceName": {
- "type": "string",
- "metadata": {
- "description": "Name of the Event Hubs namespace"
+ "eventhubNamespaceName": {
+ "type": "String"
}
- },
- "location": {
- "type": "string",
- "metadata": {
- "description": "Location for Namespace"
- }
- }
- },
- "variables": {
- "namespaceNetworkRuleSetName": "[concat(parameters('eventhubNamespaceName'), concat('/', 'default'))]",
}, "resources": [
- {
- "apiVersion": "2018-01-01-preview",
- "name": "[parameters('eventhubNamespaceName')]",
- "type": "Microsoft.EventHub/namespaces",
- "location": "[parameters('location')]",
- "sku": {
- "name": "Standard",
- "tier": "Standard"
- },
- "properties": { }
- },
- {
- "apiVersion": "2018-01-01-preview",
- "name": "[variables('namespaceNetworkRuleSetName')]",
- "type": "Microsoft.EventHub/namespaces/networkrulesets",
- "dependsOn": [
- "[concat('Microsoft.EventHub/namespaces/', parameters('eventhubNamespaceName'))]"
- ],
- "properties": {
- "virtualNetworkRules": [<YOUR EXISTING VIRTUAL NETWORK RULES>],
- "ipRules":
- [
- {
- "ipMask":"10.1.1.1",
- "action":"Allow"
+ {
+ "type": "Microsoft.EventHub/namespaces",
+ "apiVersion": "2021-11-01",
+ "name": "[parameters('eventhubNamespaceName')]",
+ "location": "East US",
+ "sku": {
+ "name": "Standard",
+ "tier": "Standard",
+ "capacity": 1
},
- {
- "ipMask":"11.0.0.0/24",
- "action":"Allow"
+ "properties": {
+ "disableLocalAuth": false,
+ "zoneRedundant": true,
+ "isAutoInflateEnabled": false,
+ "maximumThroughputUnits": 0,
+ "kafkaEnabled": true
+ }
+ },
+ {
+ "type": "Microsoft.EventHub/namespaces/networkRuleSets",
+ "apiVersion": "2021-11-01",
+ "name": "[concat(parameters('eventhubNamespaceName'), '/default')]",
+ "location": "East US",
+ "dependsOn": [
+ "[resourceId('Microsoft.EventHub/namespaces', parameters('eventhubNamespaceName'))]"
+ ],
+ "properties": {
+ "publicNetworkAccess": "Enabled",
+ "defaultAction": "Deny",
+ "virtualNetworkRules": [],
+ "ipRules": [
+ {
+ "ipMask":"10.1.1.1",
+ "action":"Allow"
+ },
+ {
+ "ipMask":"11.0.0.0/24",
+ "action":"Allow"
+ }
+ ]
}
- ],
- "trustedServiceAccessEnabled": false,
- "defaultAction": "Deny"
}
- }
- ],
- "outputs": { }
- }
+ ]
+}
+ ```
To deploy the template, follow the instructions for [Azure Resource Manager][lnk-deploy].
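If you deploy the template with a parameters file, a minimal sketch of that file could look like the following; the namespace name shown here is only a placeholder value for the `eventhubNamespaceName` parameter declared in the template above.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "eventhubNamespaceName": {
      "value": "contoso-eh-namespace"
    }
  }
}
```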
To deploy the template, follow the instructions for [Azure Resource Manager][lnk
> [!IMPORTANT]
> If there are no IP and virtual network rules, all the traffic flows into the namespace even if you set the `defaultAction` to `deny`. The namespace can be accessed over the public internet (using the access key). Specify at least one IP rule or virtual network rule for the namespace to allow traffic only from the specified IP addresses or subnet of a virtual network.
+## Default action and public network access
+
+### REST API
+
+The default value of the `defaultAction` property was `Deny` for API version **2021-01-01-preview and earlier**. However, the deny rule isn't enforced unless you set IP filters or virtual network (VNet) rules. That is, if you didn't have any IP filters or VNet rules, it's treated as `Allow`.
+
+From API version **2021-06-01-preview onwards**, the default value of the `defaultAction` property is `Allow`, to accurately reflect the service-side enforcement. If the default action is set to `Deny`, IP filters and VNet rules are enforced. If the default action is set to `Allow`, IP filters and VNet rules aren't enforced. The service remembers the rules when you turn them off and then back on again.
+
+The API version **2021-06-01-preview onwards** also introduces a new property named `publicNetworkAccess`. If it's set to `Disabled`, operations are restricted to private links only. If it's set to `Enabled`, operations are allowed over the public internet.
+
+For more information about these properties, see [Create or Update Network Rule Set](/rest/api/eventhub/preview/namespaces-network-rule-set/create-or-update-network-rule-set) and [Create or Update Private Endpoint Connections](/rest/api/eventhub/preview/private-endpoint-connections/create-or-update).
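As a rough sketch of what the newer API versions accept, the relevant part of a network rule set payload could look like the following; the IP rule value is a placeholder, and this is not a complete resource definition:

```json
{
  "properties": {
    "publicNetworkAccess": "Enabled",
    "defaultAction": "Deny",
    "virtualNetworkRules": [],
    "ipRules": [
      {
        "ipMask": "10.1.1.1",
        "action": "Allow"
      }
    ]
  }
}
```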
+
+> [!NOTE]
+> None of the above settings bypass validation of claims via SAS or Azure AD authentication. The authentication check always runs after the network checks configured by the `defaultAction`, `publicNetworkAccess`, and `privateEndpointConnections` settings are validated.
+
+### Azure portal
+
+The Azure portal always uses the latest API version to get and set properties. If you had previously configured your namespace using API version **2021-01-01-preview or earlier** with `defaultAction` set to `Deny`, and specified zero IP filters and VNet rules, the portal would have previously checked **Selected Networks** on the **Networking** page of your namespace. Now, it checks the **All networks** option.
## Next steps

For constraining access to Event Hubs to Azure virtual networks, see the following link:
event-hubs Event Hubs Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-service-endpoints.md
Title: Virtual Network service endpoints - Azure Event Hubs | Microsoft Docs description: This article provides information on how to add a Microsoft.EventHub service endpoint to a virtual network. Previously updated : 10/28/2021 Last updated : 02/23/2021 # Allow access to Azure Event Hubs namespaces from specific virtual networks
When adding virtual network or firewalls rules, set the value of `defaultAction`
"[concat('Microsoft.EventHub/namespaces/', parameters('eventhubNamespaceName'))]" ], "properties": {
+ "publicNetworkAccess": "Enabled",
+ "defaultAction": "Deny",
"virtualNetworkRules": [ {
When adding virtual network or firewalls rules, set the value of `defaultAction`
"ignoreMissingVnetServiceEndpoint": false } ],
- "ipRules":[<YOUR EXISTING IP RULES>],
- "trustedServiceAccessEnabled": false,
- "defaultAction": "Deny"
+ "ipRules":[],
+ "trustedServiceAccessEnabled": false
}
}
],
To deploy the template, follow the instructions for [Azure Resource Manager][lnk
> [!IMPORTANT]
> If there are no IP and virtual network rules, all the traffic flows into the namespace even if you set the `defaultAction` to `deny`. The namespace can be accessed over the public internet (using the access key). Specify at least one IP rule or virtual network rule for the namespace to allow traffic only from the specified IP addresses or subnet of a virtual network.
+## Default action and public network access
+
+### REST API
+
+The default value of the `defaultAction` property was `Deny` for API version **2021-01-01-preview and earlier**. However, the deny rule isn't enforced unless you set IP filters or virtual network (VNet) rules. That is, if you didn't have any IP filters or VNet rules, it's treated as `Allow`.
+
+From API version **2021-06-01-preview onwards**, the default value of the `defaultAction` property is `Allow`, to accurately reflect the service-side enforcement. If the default action is set to `Deny`, IP filters and VNet rules are enforced. If the default action is set to `Allow`, IP filters and VNet rules aren't enforced. The service remembers the rules when you turn them off and then back on again.
+
+The API version **2021-06-01-preview onwards** also introduces a new property named `publicNetworkAccess`. If it's set to `Disabled`, operations are restricted to private links only. If it's set to `Enabled`, operations are allowed over the public internet.
+
+For more information about these properties, see [Create or Update Network Rule Set](/rest/api/eventhub/preview/namespaces-network-rule-set/create-or-update-network-rule-set) and [Create or Update Private Endpoint Connections](/rest/api/eventhub/preview/private-endpoint-connections/create-or-update).
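For example, a network rule set that restricts a namespace to private endpoints only would carry properties along these lines (a minimal sketch assuming the same network rule set shape shown in the template above, not a complete resource definition):

```json
{
  "properties": {
    "publicNetworkAccess": "Disabled",
    "defaultAction": "Deny",
    "virtualNetworkRules": [],
    "ipRules": []
  }
}
```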
+
+> [!NOTE]
+> None of the above settings bypass validation of claims via SAS or Azure AD authentication. The authentication check always runs after the network checks configured by the `defaultAction`, `publicNetworkAccess`, and `privateEndpointConnections` settings are validated.
+
+### Azure portal
+
+The Azure portal always uses the latest API version to get and set properties. If you had previously configured your namespace using API version **2021-01-01-preview or earlier** with `defaultAction` set to `Deny`, and specified zero IP filters and VNet rules, the portal would have previously checked **Selected Networks** on the **Networking** page of your namespace. Now, it checks the **All networks** option.
## Next steps

For more information about virtual networks, see the following links:
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
* **Zone** refers to [pricing](https://azure.microsoft.com/pricing/details/expressroute/).
+* **ER Direct** refers to [ExpressRoute Direct](expressroute-erdirect-about.md) support at each peering location. To view the available bandwidth, see [Determine available bandwidth](expressroute-howto-erdirect.md#resources).
### Global commercial Azure

| **Location** | **Address** | **Zone** | **Local Azure regions** | **ER Direct** | **Service providers** |
| --- | --- | --- | --- | --- | --- |
-| **Abu Dhabi** | Etisalat KDC | 3 | n/a | 10G | |
-| **Amsterdam** | [Equinix AM5](https://www.equinix.com/locations/europe-colocation/netherlands-colocation/amsterdam-data-centers/am5/) | 1 | West Europe | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, Colt, Equinix, euNetworks, GÉANT, InterCloud, Interxion, KPN, IX Reach, Level 3 Communications, Megaport, NTT Communications, Orange, Tata Communications, Telefonica, Telenor, Telia Carrier, Verizon, Zayo |
-| **Amsterdam2** | [Interxion AMS8](https://www.interxion.com/Locations/amsterdam/schiphol/) | 1 | West Europe | 10G, 100G | BICS, British Telecom, CenturyLink Cloud Connect, Colt, DE-CIX, Equinix, euNetworks, GÉANT, Interxion, NL-IX, NOS, NTT Global DataCenters EMEA, Orange, Vodafone |
-| **Atlanta** | [Equinix AT2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/atlanta-data-centers/at2/) | 1 | n/a | 10G, 100G | Equinix, Megaport |
-| **Auckland** | [Vocus Group NZ Albany](https://www.vocus.co.nz/business/cloud-data-centres) | 2 | n/a | 10G | Devoli, Kordia, Megaport, REANNZ, Spark NZ, Vocus Group NZ |
-| **Bangkok** | [AIS](https://business.ais.co.th/solution/en/azure-expressroute.html) | 2 | n/a | 10G | AIS, National Telecom UIH |
-| **Berlin** | [NTT GDC](https://www.e-shelter.de/en/location/berlin-1-data-center) | 1 | Germany North | 10G | Colt, Equinix, NTT Global DataCenters EMEA|
-| **Bogota** | [Equinix BG1](https://www.equinix.com/locations/americas-colocation/colombia-colocation/bogota-data-centers/bg1/) | 4 | n/a | 10G | CenturyLink Cloud Connect, Equinix |
+| **Abu Dhabi** | Etisalat KDC | 3 | n/a | Supported | |
+| **Amsterdam** | [Equinix AM5](https://www.equinix.com/locations/europe-colocation/netherlands-colocation/amsterdam-data-centers/am5/) | 1 | West Europe | Supported | Aryaka Networks, AT&T NetBond, British Telecom, Colt, Equinix, euNetworks, GÉANT, InterCloud, Interxion, KPN, IX Reach, Level 3 Communications, Megaport, NTT Communications, Orange, Tata Communications, Telefonica, Telenor, Telia Carrier, Verizon, Zayo |
+| **Amsterdam2** | [Interxion AMS8](https://www.interxion.com/Locations/amsterdam/schiphol/) | 1 | West Europe | Supported| BICS, British Telecom, CenturyLink Cloud Connect, Colt, DE-CIX, Equinix, euNetworks, GÉANT, Interxion, NL-IX, NOS, NTT Global DataCenters EMEA, Orange, Vodafone |
+| **Atlanta** | [Equinix AT2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/atlanta-data-centers/at2/) | 1 | n/a | Supported | Equinix, Megaport |
+| **Auckland** | [Vocus Group NZ Albany](https://www.vocus.co.nz/business/cloud-data-centres) | 2 | n/a | Supported | Devoli, Kordia, Megaport, REANNZ, Spark NZ, Vocus Group NZ |
+| **Bangkok** | [AIS](https://business.ais.co.th/solution/en/azure-expressroute.html) | 2 | n/a | Supported | AIS, National Telecom UIH |
+| **Berlin** | [NTT GDC](https://www.e-shelter.de/en/location/berlin-1-data-center) | 1 | Germany North | Supported | Colt, Equinix, NTT Global DataCenters EMEA|
+| **Bogota** | [Equinix BG1](https://www.equinix.com/locations/americas-colocation/colombia-colocation/bogota-data-centers/bg1/) | 4 | n/a | Supported | CenturyLink Cloud Connect, Equinix |
| **Busan** | [LG CNS](https://www.lgcns.com/En/Service/DataCenter) | 2 | Korea South | n/a | LG CNS |
-| **Campinas** | [Ascenty](https://www.ascenty.com/en/data-centers-en/campinas/) | 3 | Brazil South | 10G, 100G | Ascenty |
-| **Canberra** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central | 10G, 100G | CDC |
-| **Canberra2** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central 2| 10G, 100G | CDC, Equinix |
-| **Cape Town** | [Teraco CT1](https://www.teraco.co.za/data-centre-locations/cape-town/) | 3 | South Africa West | 10G | BCX, Internet Solutions - Cloud Connect, Liquid Telecom, MTN Global Connect, Teraco, Vodacom |
-| **Chennai** | Tata Communications | 2 | South India | 10G | BSNL, DE-CIX, Global CloudXchange (GCX), SIFY, Tata Communications, VodafoneIdea |
-| **Chennai2** | Airtel | 2 | South India | 10G | Airtel |
-| **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | 1 | North Central US | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Cologix, Colt, Comcast, Coresite, Equinix, InterCloud, Internet2, Level 3 Communications, Megaport, PacketFabric, PCCW Global Limited, Sprint, Telia Carrier, Verizon, Zayo |
-| **Chicago2** | [CoreSite CH1](https://www.coresite.com/data-center/ch1-chicago-il) | 1 | North Central US | 10G, 100G | CoreSite |
-| **Copenhagen** | [Interxion CPH1](https://www.interxion.com/Locations/copenhagen/) | 1 | n/a | 10G | Interxion |
-| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | 1 | n/a | 10G, 100G | Aryaka Networks, AT&T NetBond, Cologix, Cox Business Cloud Port, Equinix, Intercloud, Internet2, Level 3 Communications, Megaport, Neutrona Networks, Orange, PacketFabric, Telmex Uninet, Telia Carrier, Transtelco, Verizon, Zayo|
-| **Denver** | [CoreSite DE1](https://www.coresite.com/data-centers/locations/denver/de1) | 1 | West Central US | 10G, 100G | CoreSite, Megaport, PacketFabric, Zayo |
+| **Campinas** | [Ascenty](https://www.ascenty.com/en/data-centers-en/campinas/) | 3 | Brazil South | Supported | Ascenty |
+| **Canberra** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central | Supported | CDC |
+| **Canberra2** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central 2| Supported | CDC, Equinix |
+| **Cape Town** | [Teraco CT1](https://www.teraco.co.za/data-centre-locations/cape-town/) | 3 | South Africa West | Supported | BCX, Internet Solutions - Cloud Connect, Liquid Telecom, MTN Global Connect, Teraco, Vodacom |
+| **Chennai** | Tata Communications | 2 | South India | Supported | BSNL, DE-CIX, Global CloudXchange (GCX), SIFY, Tata Communications, VodafoneIdea |
+| **Chennai2** | Airtel | 2 | South India | Supported | Airtel |
+| **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | 1 | North Central US | Supported | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Cologix, Colt, Comcast, Coresite, Equinix, InterCloud, Internet2, Level 3 Communications, Megaport, PacketFabric, PCCW Global Limited, Sprint, Telia Carrier, Verizon, Zayo |
+| **Chicago2** | [CoreSite CH1](https://www.coresite.com/data-center/ch1-chicago-il) | 1 | North Central US | Supported | CoreSite |
+| **Copenhagen** | [Interxion CPH1](https://www.interxion.com/Locations/copenhagen/) | 1 | n/a | Supported | Interxion |
+| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | 1 | n/a | Supported | Aryaka Networks, AT&T NetBond, Cologix, Cox Business Cloud Port, Equinix, Intercloud, Internet2, Level 3 Communications, Megaport, Neutrona Networks, Orange, PacketFabric, Telmex Uninet, Telia Carrier, Transtelco, Verizon, Zayo|
+| **Denver** | [CoreSite DE1](https://www.coresite.com/data-centers/locations/denver/de1) | 1 | West Central US | Supported | CoreSite, Megaport, PacketFabric, Zayo |
| **Dubai** | [PCCS](https://www.pacificcontrols.net/cloudservices/index.html) | 3 | UAE North | n/a | Etisalat UAE |
| **Dubai2** | [du datamena](http://datamena.com/solutions/data-centre) | 3 | UAE North | n/a | DE-CIX, du datamena, Equinix, GBI, Megaport, Orange, Orixcom |
-| **Dublin** | [Equinix DB3](https://www.equinix.com/locations/europe-colocation/ireland-colocation/dublin-data-centers/db3/) | 1 | North Europe | 10G, 100G | CenturyLink Cloud Connect, Colt, eir, Equinix, GEANT, euNetworks, Interxion, Megaport, Zayo|
-| **Dublin2** | [Interxion DUB2](https://www.interxion.com/locations/europe/dublin) | 1 | North Europe | 10G, 100G | Interxion |
-| **Frankfurt** | [Interxion FRA11](https://www.interxion.com/Locations/frankfurt/) | 1 | Germany West Central | 10G, 100G | AT&T NetBond, British Telecom, CenturyLink Cloud Connect, China Unicom Global, Colt, DE-CIX, Equinix, euNetworks, GBI, GEANT, InterCloud, Interxion, Megaport, NTT Global DataCenters EMEA, Orange, Telia Carrier, T-Systems |
-| **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | 10G, 100G | DE-CIX, Deutsche Telekom AG, Equinix |
-| **Geneva** | [Equinix GV2](https://www.equinix.com/locations/europe-colocation/switzerland-colocation/geneva-data-centers/gv2/) | 1 | Switzerland West | 10G, 100G | Colt, Equinix, InterCloud, Megaport, Swisscom |
-| **Hong Kong** | [Equinix HK1](https://www.equinix.com/data-centers/asia-pacific-colocation/hong-kong-colocation/hong-kong-data-centers/hk1) | 2 | East Asia | 10G | Aryaka Networks, British Telecom, CenturyLink Cloud Connect, Chief Telecom, China Telecom Global, China Unicom, Colt, Equinix, InterCloud, Megaport, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telia Carrier, Verizon, Zayo |
-| **Hong Kong2** | [iAdvantage MEGA-i](https://www.iadvantage.net/index.php/locations/mega-i) | 2 | East Asia | 10G | China Mobile International, China Telecom Global, iAdvantage, Megaport, PCCW Global Limited, SingTel |
-| **Jakarta** | [Telin](https://www.telin.net/) | 4 | n/a | 10G | NTT Communications, Telin, XL Axiata |
-| **Johannesburg** | [Teraco JB1](https://www.teraco.co.za/data-centre-locations/johannesburg/#jb1) | 3 | South Africa North | 10G | BCX, British Telecom, Internet Solutions - Cloud Connect, Liquid Telecom, MTN Global Connect, Orange, Teraco, Vodacom |
+| **Dublin** | [Equinix DB3](https://www.equinix.com/locations/europe-colocation/ireland-colocation/dublin-data-centers/db3/) | 1 | North Europe | Supported | CenturyLink Cloud Connect, Colt, eir, Equinix, GEANT, euNetworks, Interxion, Megaport, Zayo|
+| **Dublin2** | [Interxion DUB2](https://www.interxion.com/locations/europe/dublin) | 1 | North Europe | Supported | Interxion |
+| **Frankfurt** | [Interxion FRA11](https://www.interxion.com/Locations/frankfurt/) | 1 | Germany West Central | Supported | AT&T NetBond, British Telecom, CenturyLink Cloud Connect, China Unicom Global, Colt, DE-CIX, Equinix, euNetworks, GBI, GEANT, InterCloud, Interxion, Megaport, NTT Global DataCenters EMEA, Orange, Telia Carrier, T-Systems |
+| **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | Supported | DE-CIX, Deutsche Telekom AG, Equinix |
+| **Geneva** | [Equinix GV2](https://www.equinix.com/locations/europe-colocation/switzerland-colocation/geneva-data-centers/gv2/) | 1 | Switzerland West | Supported | Colt, Equinix, InterCloud, Megaport, Swisscom |
+| **Hong Kong** | [Equinix HK1](https://www.equinix.com/data-centers/asia-pacific-colocation/hong-kong-colocation/hong-kong-data-centers/hk1) | 2 | East Asia | Supported | Aryaka Networks, British Telecom, CenturyLink Cloud Connect, Chief Telecom, China Telecom Global, China Unicom, Colt, Equinix, InterCloud, Megaport, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telia Carrier, Verizon, Zayo |
+| **Hong Kong2** | [iAdvantage MEGA-i](https://www.iadvantage.net/index.php/locations/mega-i) | 2 | East Asia | Supported | China Mobile International, China Telecom Global, iAdvantage, Megaport, PCCW Global Limited, SingTel |
+| **Jakarta** | [Telin](https://www.telin.net/) | 4 | n/a | Supported | NTT Communications, Telin, XL Axiata |
+| **Johannesburg** | [Teraco JB1](https://www.teraco.co.za/data-centre-locations/johannesburg/#jb1) | 3 | South Africa North | Supported | BCX, British Telecom, Internet Solutions - Cloud Connect, Liquid Telecom, MTN Global Connect, Orange, Teraco, Vodacom |
| **Kuala Lumpur** | [TIME dotCom Menara AIMS](https://www.time.com.my/enterprise/connectivity/direct-cloud) | 2 | n/a | n/a | TIME dotCom |
-| **Las Vegas** | [Switch LV](https://www.switch.com/las-vegas) | 1 | n/a | 10G, 100G | CenturyLink Cloud Connect, Megaport, PacketFabric |
-| **London** | [Equinix LD5](https://www.equinix.com/locations/europe-colocation/united-kingdom-colocation/london-data-centers/ld5/) | 1 | UK South | 10G, 100G | AT&T NetBond, British Telecom, CenturyLink, Colt, Equinix, euNetworks, Intelsat, InterCloud, Internet Solutions - Cloud Connect, Interxion, Jisc, Level 3 Communications, Megaport, MTN, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telehouse - KDDI, Telenor, Telia Carrier, Verizon, Vodafone, Zayo |
-| **London2** | [Telehouse North Two](https://www.telehouse.net/data-centres/emea/uk-data-centres/london-data-centres/north-two) | 1 | UK South | 10G, 100G | BICS, British Telecom, CenturyLink Cloud Connect, Colt, Equinix, GTT, Interxion, IX Reach, JISC, Megaport, NTT Global DataCenters EMEA, Orange, SES, Sohonet, Telehouse - KDDI, Zayo |
-| **Los Angeles** | [CoreSite LA1](https://www.coresite.com/data-centers/locations/los-angeles/one-wilshire) | 1 | n/a | 10G, 100G | CoreSite, Equinix*, Megaport, Neutrona Networks, NTT, Zayo</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Please create new circuits in Los Angeles2.* |
-| **Los Angeles2** | [Equinix LA1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/los-angeles-data-centers/la1/) | 1 | n/a | 10G, 100G | Equinix |
-| **Madrid** | [Interxion MAD1](https://www.interxion.com/es/donde-estamos/europa/madrid) | 1 | West Europe | 10G, 100G | DE-CIX, Interxion, Megaport, Telefonica |
+| **Las Vegas** | [Switch LV](https://www.switch.com/las-vegas) | 1 | n/a | Supported | CenturyLink Cloud Connect, Megaport, PacketFabric |
+| **London** | [Equinix LD5](https://www.equinix.com/locations/europe-colocation/united-kingdom-colocation/london-data-centers/ld5/) | 1 | UK South | Supported | AT&T NetBond, British Telecom, CenturyLink, Colt, Equinix, euNetworks, Intelsat, InterCloud, Internet Solutions - Cloud Connect, Interxion, Jisc, Level 3 Communications, Megaport, MTN, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telehouse - KDDI, Telenor, Telia Carrier, Verizon, Vodafone, Zayo |
+| **London2** | [Telehouse North Two](https://www.telehouse.net/data-centres/emea/uk-data-centres/london-data-centres/north-two) | 1 | UK South | Supported | BICS, British Telecom, CenturyLink Cloud Connect, Colt, Equinix, GTT, Interxion, IX Reach, JISC, Megaport, NTT Global DataCenters EMEA, Orange, SES, Sohonet, Telehouse - KDDI, Zayo |
+| **Los Angeles** | [CoreSite LA1](https://www.coresite.com/data-centers/locations/los-angeles/one-wilshire) | 1 | n/a | Supported | CoreSite, Equinix*, Megaport, Neutrona Networks, NTT, Zayo</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Please create new circuits in Los Angeles2.* |
+| **Los Angeles2** | [Equinix LA1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/los-angeles-data-centers/la1/) | 1 | n/a | Supported | Equinix |
+| **Madrid** | [Interxion MAD1](https://www.interxion.com/es/donde-estamos/europa/madrid) | 1 | West Europe | Supported | DE-CIX, Interxion, Megaport, Telefonica |
| **Marseille** |[Interxion MRS1](https://www.interxion.com/Locations/marseille/) | 1 | France South | n/a | Colt, DE-CIX, GEANT, Interxion, Jaguar Network, Ooredoo Cloud Connect |
-| **Melbourne** | [NextDC M1](https://www.nextdc.com/data-centres/m1-melbourne-data-centre) | 2 | Australia Southeast | 10G, 100G | AARNet, Devoli, Equinix, Megaport, NEXTDC, Optus, Orange, Telstra Corporation, TPG Telecom |
-| **Miami** | [Equinix MI1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/miami-data-centers/mi1/) | 1 | n/a | 10G, 100G | Claro, C3ntro, Equinix, Megaport, Neutrona Networks |
-| **Milan** | [IRIDEOS](https://irideos.it/en/data-centers/) | 1 | n/a | 10G | Colt, Equinix, Fastweb, IRIDEOS, Retelit |
-| **Minneapolis** | [Cologix MIN1](https://www.cologix.com/data-centers/minneapolis/min1/) | 1 | n/a | 10G, 100G | Cologix, Megaport |
-| **Montreal** | [Cologix MTL3](https://www.cologix.com/data-centers/montreal/mtl3/) | 1 | n/a | 10G, 100G | Bell Canada, CenturyLink Cloud Connect, Cologix, Fibrenoire, Megaport, Telus, Zayo |
-| **Mumbai** | Tata Communications | 2 | West India | 10G | BSNL, DE-CIX, Global CloudXchange (GCX), Reliance Jio, Sify, Tata Communications, Verizon |
-| **Mumbai2** | Airtel | 2 | West India | 10G | Airtel, Sify, Orange, Vodafone Idea |
-| **Munich** | [EdgeConneX](https://www.edgeconnex.com/locations/europe/munich/) | 1 | n/a | 10G | Colt, DE-CIX, Megaport |
-| **New York** | [Equinix NY5](https://www.equinix.com/locations/americas-colocation/united-states-colocation/new-york-data-centers/ny5/) | 1 | n/a | 10G, 100G | CenturyLink Cloud Connect, Coresite, DE-CIX, Equinix, InterCloud, Megaport, Packet, Zayo |
-| **Newport(Wales)** | [Next Generation Data](https://www.nextgenerationdata.co.uk) | 1 | UK West | 10G, 100G | British Telecom, Colt, Jisc, Level 3 Communications, Next Generation Data |
-| **Osaka** | [Equinix OS1](https://www.equinix.com/locations/asia-colocation/japan-colocation/osaka-data-centers/os1/) | 2 | Japan West | 10G, 100G | AT TOKYO, BBIX, Colt, Equinix, Internet Initiative Japan Inc. - IIJ, Megaport, NTT Communications, NTT SmartConnect, Softbank, Tokai Communications |
-| **Oslo** | [DigiPlex Ulven](https://www.digiplex.com/locations/oslo-datacentre) | 1 | Norway East | 10G, 100G | GlobalConnect, Megaport, Telenor, Telia Carrier |
-| **Paris** | [Interxion PAR5](https://www.interxion.com/Locations/paris/) | 1 | France Central | 10G, 100G | British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Intercloud, Interxion, Jaguar Network, Megaport, Orange, Telia Carrier, Zayo |
-| **Perth** | [NextDC P1](https://www.nextdc.com/data-centres/p1-perth-data-centre) | 2 | n/a | 10G | Megaport, NextDC |
-| **Phoenix** | [EdgeConneX PHX01](https://www.edgeconnex.com/locations/north-america/phoenix-az/) | 1 | West US 3 | 10G, 100G | Cox Business Cloud Port, CenturyLink Cloud Connect, Megaport, Zayo |
-| **Pune** | [STT GDC Pune DC1](https://www.sttelemediagdc.in/our-data-centres-in-india) | 2 | Central India| 10G | Tata Communications |
-| **Quebec City** | [Vantage](https://vantage-dc.com/data_centers/quebec-city-data-center-campus/) | 1 | Canada East | 10G, 100G | Bell Canada, Equinix, Megaport, Telus |
-| **Queretaro (Mexico)** | [KIO Networks QR01](https://www.kionetworks.com/es-mx/) | 4 | n/a | 10G | Transtelco|
-| **Quincy** | [Sabey Datacenter - Building A](https://sabeydatacenters.com/data-center-locations/central-washington-data-centers/quincy-data-center) | 1 | West US 2 | 10G, 100G | |
-| **Rio de Janeiro** | [Equinix-RJ2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/rio-de-janeiro-data-centers/rj2/) | 3 | Brazil Southeast | 10G | Equinix |
-| **San Antonio** | [CyrusOne SA1](https://cyrusone.com/locations/texas/san-antonio-texas/) | 1 | South Central US | 10G, 100G | CenturyLink Cloud Connect, Megaport, Zayo |
-| **Sao Paulo** | [Equinix SP2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/sao-paulo-data-centers/sp2/) | 3 | Brazil South | 10G, 100G | Aryaka Networks, Ascenty Data Centers, British Telecom, Equinix, InterCloud, Level 3 Communications, Neutrona Networks, Orange, Tata Communications, Telefonica, UOLDIVEO |
-| **Sao Paulo2** | [TIVIT TSM](https://www.tivit.com/en/tivit/) | 3 | Brazil South | 10G, 100G | Ascenty Data Centers |
-| **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | 1 | West US 2 | 10G, 100G | Aryaka Networks, CenturyLink Cloud Connect, Equinix, Level 3 Communications, Megaport, Telus, Zayo |
-| **Seoul** | [KINX Gasan IDC](https://www.kinx.net/?lang=en) | 2 | Korea Central | 10G, 100G | KINX, KT, LG CNS, LGUplus, Equinix, Sejong Telecom, SK Telecom |
+| **Melbourne** | [NextDC M1](https://www.nextdc.com/data-centres/m1-melbourne-data-centre) | 2 | Australia Southeast | Supported | AARNet, Devoli, Equinix, Megaport, NEXTDC, Optus, Orange, Telstra Corporation, TPG Telecom |
+| **Miami** | [Equinix MI1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/miami-data-centers/mi1/) | 1 | n/a | Supported | Claro, C3ntro, Equinix, Megaport, Neutrona Networks |
+| **Milan** | [IRIDEOS](https://irideos.it/en/data-centers/) | 1 | n/a | Supported | Colt, Equinix, Fastweb, IRIDEOS, Retelit |
+| **Minneapolis** | [Cologix MIN1](https://www.cologix.com/data-centers/minneapolis/min1/) | 1 | n/a | Supported | Cologix, Megaport |
+| **Montreal** | [Cologix MTL3](https://www.cologix.com/data-centers/montreal/mtl3/) | 1 | n/a | Supported | Bell Canada, CenturyLink Cloud Connect, Cologix, Fibrenoire, Megaport, Telus, Zayo |
+| **Mumbai** | Tata Communications | 2 | West India | Supported | BSNL, DE-CIX, Global CloudXchange (GCX), Reliance Jio, Sify, Tata Communications, Verizon |
+| **Mumbai2** | Airtel | 2 | West India | Supported | Airtel, Sify, Orange, Vodafone Idea |
+| **Munich** | [EdgeConneX](https://www.edgeconnex.com/locations/europe/munich/) | 1 | n/a | Supported | Colt, DE-CIX, Megaport |
+| **New York** | [Equinix NY5](https://www.equinix.com/locations/americas-colocation/united-states-colocation/new-york-data-centers/ny5/) | 1 | n/a | Supported | CenturyLink Cloud Connect, Coresite, DE-CIX, Equinix, InterCloud, Megaport, Packet, Zayo |
+| **Newport(Wales)** | [Next Generation Data](https://www.nextgenerationdata.co.uk) | 1 | UK West | Supported | British Telecom, Colt, Jisc, Level 3 Communications, Next Generation Data |
+| **Osaka** | [Equinix OS1](https://www.equinix.com/locations/asia-colocation/japan-colocation/osaka-data-centers/os1/) | 2 | Japan West | Supported | AT TOKYO, BBIX, Colt, Equinix, Internet Initiative Japan Inc. - IIJ, Megaport, NTT Communications, NTT SmartConnect, Softbank, Tokai Communications |
+| **Oslo** | [DigiPlex Ulven](https://www.digiplex.com/locations/oslo-datacentre) | 1 | Norway East | Supported| GlobalConnect, Megaport, Telenor, Telia Carrier |
+| **Paris** | [Interxion PAR5](https://www.interxion.com/Locations/paris/) | 1 | France Central | Supported | British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Intercloud, Interxion, Jaguar Network, Megaport, Orange, Telia Carrier, Zayo |
+| **Perth** | [NextDC P1](https://www.nextdc.com/data-centres/p1-perth-data-centre) | 2 | n/a | Supported | Megaport, NextDC |
+| **Phoenix** | [EdgeConneX PHX01](https://www.edgeconnex.com/locations/north-america/phoenix-az/) | 1 | West US 3 | Supported | Cox Business Cloud Port, CenturyLink Cloud Connect, Megaport, Zayo |
+| **Pune** | [STT GDC Pune DC1](https://www.sttelemediagdc.in/our-data-centres-in-india) | 2 | Central India| Supported | Tata Communications |
+| **Quebec City** | [Vantage](https://vantage-dc.com/data_centers/quebec-city-data-center-campus/) | 1 | Canada East | Supported | Bell Canada, Equinix, Megaport, Telus |
+| **Queretaro (Mexico)** | [KIO Networks QR01](https://www.kionetworks.com/es-mx/) | 4 | n/a | Supported | Transtelco|
+| **Quincy** | [Sabey Datacenter - Building A](https://sabeydatacenters.com/data-center-locations/central-washington-data-centers/quincy-data-center) | 1 | West US 2 | Supported | |
+| **Rio de Janeiro** | [Equinix-RJ2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/rio-de-janeiro-data-centers/rj2/) | 3 | Brazil Southeast | Supported | Equinix |
+| **San Antonio** | [CyrusOne SA1](https://cyrusone.com/locations/texas/san-antonio-texas/) | 1 | South Central US | Supported | CenturyLink Cloud Connect, Megaport, Zayo |
+| **Sao Paulo** | [Equinix SP2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/sao-paulo-data-centers/sp2/) | 3 | Brazil South | Supported | Aryaka Networks, Ascenty Data Centers, British Telecom, Equinix, InterCloud, Level 3 Communications, Neutrona Networks, Orange, Tata Communications, Telefonica, UOLDIVEO |
+| **Sao Paulo2** | [TIVIT TSM](https://www.tivit.com/en/tivit/) | 3 | Brazil South | Supported | Ascenty Data Centers |
+| **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | 1 | West US 2 | Supported | Aryaka Networks, CenturyLink Cloud Connect, Equinix, Level 3 Communications, Megaport, Telus, Zayo |
+| **Seoul** | [KINX Gasan IDC](https://www.kinx.net/?lang=en) | 2 | Korea Central | Supported | KINX, KT, LG CNS, LGUplus, Equinix, Sejong Telecom, SK Telecom |
| **Seoul2** | [KT IDC](https://www.kt-idc.com/eng/introduce/sub1_4_10.jsp#tab) | 2 | Korea Central | n/a | KT |
-| **Silicon Valley** | [Equinix SV1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/silicon-valley-data-centers/sv1/) | 1 | West US | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Colt, Comcast, Coresite, Cox Business Cloud Port, Equinix, InterCloud, Internet2, IX Reach, Packet, PacketFabric, Level 3 Communications, Megaport, Orange, Sprint, Tata Communications, Telia Carrier, Verizon, Zayo |
-| **Silicon Valley2** | [Coresite SV7](https://www.coresite.com/data-centers/locations/silicon-valley/sv7) | 1 | West US | 10G, 100G | Colt, Coresite |
-| **Singapore** | [Equinix SG1](https://www.equinix.com/data-centers/asia-pacific-colocation/singapore-colocation/singapore-data-center/sg1) | 2 | Southeast Asia | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, China Mobile International, Epsilon Global Communications, Equinix, InterCloud, Level 3 Communications, Megaport, NTT Communications, Orange, SingTel, Tata Communications, Telstra Corporation, Verizon, Vodafone |
-| **Singapore2** | [Global Switch Tai Seng](https://www.globalswitch.com/locations/singapore-data-centres/) | 2 | Southeast Asia | 10G, 100G | CenturyLink Cloud Connect, China Unicom Global, Colt, DE-CIX, Epsilon Global Communications, Equinix, Megaport, PCCW Global Limited, SingTel, Telehouse - KDDI |
-| **Stavanger** | [Green Mountain DC1](https://greenmountain.no/dc1-stavanger/) | 1 | Norway West | 10G, 100G |GlobalConnect, Megaport |
-| **Stockholm** | [Equinix SK1](https://www.equinix.com/locations/europe-colocation/sweden-colocation/stockholm-data-centers/sk1/) | 1 | n/a | 10G | Equinix, Megaport, Telia Carrier |
-| **Sydney** | [Equinix SY2](https://www.equinix.com/locations/asia-colocation/australia-colocation/sydney-data-centers/sy2/) | 2 | Australia East | 10G, 100G | AARNet, AT&T NetBond, British Telecom, Devoli, Equinix, Kordia, Megaport, NEXTDC, NTT Communications, Optus, Orange, Spark NZ, Telstra Corporation, TPG Telecom, Verizon, Vocus Group NZ |
-| **Sydney2** | [NextDC S1](https://www.nextdc.com/data-centres/s1-sydney-data-centre) | 2 | Australia East | 10G, 100G | Megaport, NextDC |
-| **Taipei** | Chief Telecom | 2 | n/a | 10G | Chief Telecom, Chunghwa Telecom, FarEasTone |
-| **Tokyo** | [Equinix TY4](https://www.equinix.com/locations/asia-colocation/japan-colocation/tokyo-data-centers/ty4/) | 2 | Japan East | N/A | Aryaka Networks, AT&T NetBond, BBIX, British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Intercloud, Internet Initiative Japan Inc. - IIJ, Megaport, NTT Communications, NTT EAST, Orange, Softbank, Telehouse - KDDI, Verizon </br></br> **We are currently unable to support new ExpressRoute circuits in Tokyo. Please create new circuits in Tokyo2 or Osaka.* |
-| **Tokyo2** | [AT TOKYO](https://www.attokyo.com/) | 2 | Japan East | 10G, 100G | AT TOKYO, China Unicom Global, Colt, Fibrenoire, IX Reach, Megaport, PCCW Global Limited, Tokai Communications |
-| **Toronto** | [Cologix TOR1](https://www.cologix.com/data-centers/toronto/tor1/) | 1 | Canada Central | 10G, 100G | AT&T NetBond, Bell Canada, CenturyLink Cloud Connect, Cologix, Equinix, IX Reach Megaport, Telus, Verizon, Zayo |
-| **Toronto2** | [Allied REIT](https://www.alliedreit.com/property/905-king-st-w/) | 1 | Canada Central | 10G, 100G | |
-| **Vancouver** | [Cologix VAN1](https://www.cologix.com/data-centers/vancouver/van1/) | 1 | n/a | 10G | Bell Canada, Cologix, Megaport, Telus, Zayo |
-| **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/), [Equinix DC6](https://www.equinix.com/data-centers/americas-colocation/united-states-colocation/washington-dc-data-centers/dc6) | 1 | East US, East US 2 | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Cologix, Colt, Comcast, Coresite, Cox Business Cloud Port, Equinix, Internet2, InterCloud, Iron Mountain, IX Reach, Level 3 Communications, Megaport, Neutrona Networks, NTT Communications, Orange, PacketFabric, SES, Sprint, Tata Communications, Telia Carrier, Verizon, Zayo |
-| **Washington DC2** | [Coresite VA2](https://www.coresite.com/data-center/va2-reston-va) | 1 | East US, East US 2 | 10G, 100G | CenturyLink Cloud Connect, Coresite, Intelsat, Megaport, Viasat, Zayo |
-| **Zurich** | [Interxion ZUR2](https://www.interxion.com/Locations/zurich/) | 1 | Switzerland North | 10G, 100G | Colt, Equinix, Intercloud, Interxion, Megaport, Swisscom, Zayo |
+| **Silicon Valley** | [Equinix SV1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/silicon-valley-data-centers/sv1/) | 1 | West US | Supported | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Colt, Comcast, Coresite, Cox Business Cloud Port, Equinix, InterCloud, Internet2, IX Reach, Packet, PacketFabric, Level 3 Communications, Megaport, Orange, Sprint, Tata Communications, Telia Carrier, Verizon, Zayo |
+| **Silicon Valley2** | [Coresite SV7](https://www.coresite.com/data-centers/locations/silicon-valley/sv7) | 1 | West US | Supported | Colt, Coresite |
+| **Singapore** | [Equinix SG1](https://www.equinix.com/data-centers/asia-pacific-colocation/singapore-colocation/singapore-data-center/sg1) | 2 | Southeast Asia | Supported | Aryaka Networks, AT&T NetBond, British Telecom, China Mobile International, Epsilon Global Communications, Equinix, InterCloud, Level 3 Communications, Megaport, NTT Communications, Orange, SingTel, Tata Communications, Telstra Corporation, Verizon, Vodafone |
+| **Singapore2** | [Global Switch Tai Seng](https://www.globalswitch.com/locations/singapore-data-centres/) | 2 | Southeast Asia | Supported | CenturyLink Cloud Connect, China Unicom Global, Colt, DE-CIX, Epsilon Global Communications, Equinix, Megaport, PCCW Global Limited, SingTel, Telehouse - KDDI |
+| **Stavanger** | [Green Mountain DC1](https://greenmountain.no/dc1-stavanger/) | 1 | Norway West | Supported |GlobalConnect, Megaport |
+| **Stockholm** | [Equinix SK1](https://www.equinix.com/locations/europe-colocation/sweden-colocation/stockholm-data-centers/sk1/) | 1 | n/a | Supported | Equinix, Megaport, Telia Carrier |
+| **Sydney** | [Equinix SY2](https://www.equinix.com/locations/asia-colocation/australia-colocation/sydney-data-centers/sy2/) | 2 | Australia East | Supported | AARNet, AT&T NetBond, British Telecom, Devoli, Equinix, Kordia, Megaport, NEXTDC, NTT Communications, Optus, Orange, Spark NZ, Telstra Corporation, TPG Telecom, Verizon, Vocus Group NZ |
+| **Sydney2** | [NextDC S1](https://www.nextdc.com/data-centres/s1-sydney-data-centre) | 2 | Australia East | Supported | Megaport, NextDC |
+| **Taipei** | Chief Telecom | 2 | n/a | Supported | Chief Telecom, Chunghwa Telecom, FarEasTone |
+| **Tokyo** | [Equinix TY4](https://www.equinix.com/locations/asia-colocation/japan-colocation/tokyo-data-centers/ty4/) | 2 | Japan East | n/a | Aryaka Networks, AT&T NetBond, BBIX, British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Intercloud, Internet Initiative Japan Inc. - IIJ, Megaport, NTT Communications, NTT EAST, Orange, Softbank, Telehouse - KDDI, Verizon <br/><br/> **We are currently unable to support new ExpressRoute circuits in Tokyo. Please create new circuits in Tokyo2 or Osaka.** |
+| **Tokyo2** | [AT TOKYO](https://www.attokyo.com/) | 2 | Japan East | Supported | AT TOKYO, China Unicom Global, Colt, Fibrenoire, IX Reach, Megaport, PCCW Global Limited, Tokai Communications |
+| **Toronto** | [Cologix TOR1](https://www.cologix.com/data-centers/toronto/tor1/) | 1 | Canada Central | Supported | AT&T NetBond, Bell Canada, CenturyLink Cloud Connect, Cologix, Equinix, IX Reach, Megaport, Telus, Verizon, Zayo |
+| **Toronto2** | [Allied REIT](https://www.alliedreit.com/property/905-king-st-w/) | 1 | Canada Central | Supported | |
+| **Vancouver** | [Cologix VAN1](https://www.cologix.com/data-centers/vancouver/van1/) | 1 | n/a | Supported | Bell Canada, Cologix, Megaport, Telus, Zayo |
+| **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/), [Equinix DC6](https://www.equinix.com/data-centers/americas-colocation/united-states-colocation/washington-dc-data-centers/dc6) | 1 | East US, East US 2 | Supported | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Cologix, Colt, Comcast, Coresite, Cox Business Cloud Port, Equinix, Internet2, InterCloud, Iron Mountain, IX Reach, Level 3 Communications, Megaport, Neutrona Networks, NTT Communications, Orange, PacketFabric, SES, Sprint, Tata Communications, Telia Carrier, Verizon, Zayo |
+| **Washington DC2** | [Coresite VA2](https://www.coresite.com/data-center/va2-reston-va) | 1 | East US, East US 2 | n/a | CenturyLink Cloud Connect, Coresite, Intelsat, Megaport, Viasat, Zayo |
+| **Zurich** | [Interxion ZUR2](https://www.interxion.com/Locations/zurich/) | 1 | Switzerland North | Supported | Colt, Equinix, Intercloud, Interxion, Megaport, Swisscom, Zayo |
**+** denotes coming soon
Azure national clouds are isolated from each other and from global commercial Az
### US Government cloud

| **Location** | **Address** | **Local Azure regions** | **ER Direct** | **Service providers** |
| --- | --- | --- | --- | --- |
-| **Atlanta** | [Equinix AT1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/atlanta-data-centers/at1/) | n/a | 10G, 100G | Equinix |
-| **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | n/a | 10G, 100G | AT&T NetBond, British Telecom, Equinix, Level 3 Communications, Verizon |
-| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | n/a | 10G, 100G | Equinix, Internet2, Megaport, Verizon |
-| **New York** | [Equinix NY5](https://www.equinix.com/locations/americas-colocation/united-states-colocation/new-york-data-centers/ny5/) | n/a | 10G, 100G | Equinix, CenturyLink Cloud Connect, Verizon |
-| **Phoenix** | [CyrusOne Chandler](https://cyrusone.com/locations/arizona/phoenix-arizona-chandler/) | US Gov Arizona | 10G, 100G | AT&T NetBond, CenturyLink Cloud Connect, Megaport |
-| **San Antonio** | [CyrusOne SA2](https://cyrusone.com/locations/texas/san-antonio-texas-ii/) | US Gov Texas | 10G, 100G | CenturyLink Cloud Connect, Megaport |
-| **Silicon Valley** | [Equinix SV4](https://www.equinix.com/locations/americas-colocation/united-states-colocation/silicon-valley-data-centers/sv4/) | n/a | 10G, 100G | AT&T, Equinix, Level 3 Communications, Verizon |
-| **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | n/a | 10G, 100G | Equinix, Megaport |
-| **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/) | US DoD East, US Gov Virginia | 10G, 100G | AT&T NetBond, CenturyLink Cloud Connect, Equinix, Level 3 Communications, Megaport, Verizon |
+| **Atlanta** | [Equinix AT1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/atlanta-data-centers/at1/) | n/a | Supported | Equinix |
+| **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | n/a | Supported | AT&T NetBond, British Telecom, Equinix, Level 3 Communications, Verizon |
+| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | n/a | Supported | Equinix, Internet2, Megaport, Verizon |
+| **New York** | [Equinix NY5](https://www.equinix.com/locations/americas-colocation/united-states-colocation/new-york-data-centers/ny5/) | n/a | Supported | Equinix, CenturyLink Cloud Connect, Verizon |
+| **Phoenix** | [CyrusOne Chandler](https://cyrusone.com/locations/arizona/phoenix-arizona-chandler/) | US Gov Arizona | Supported | AT&T NetBond, CenturyLink Cloud Connect, Megaport |
+| **San Antonio** | [CyrusOne SA2](https://cyrusone.com/locations/texas/san-antonio-texas-ii/) | US Gov Texas | Supported | CenturyLink Cloud Connect, Megaport |
+| **Silicon Valley** | [Equinix SV4](https://www.equinix.com/locations/americas-colocation/united-states-colocation/silicon-valley-data-centers/sv4/) | n/a | Supported | AT&T, Equinix, Level 3 Communications, Verizon |
+| **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | n/a | Supported | Equinix, Megaport |
+| **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/) | US DoD East, US Gov Virginia | Supported | AT&T NetBond, CenturyLink Cloud Connect, Equinix, Level 3 Communications, Megaport, Verizon |
### China

| **Location** | **Address** | **Local Azure regions** | **ER Direct** | **Service providers** |
| --- | --- | --- | --- | --- |
-| **Beijing** | China Telecom | n/a | 10G | China Telecom |
-| **Beijing2** | GDS | n/a | 10G | China Telecom, China Unicom, GDS |
-| **Shanghai** | China Telecom | n/a | 10G | China Telecom |
-| **Shanghai2** | GDS | n/a | 10G | China Telecom, China Unicom, GDS |
+| **Beijing** | China Telecom | n/a | Supported | China Telecom |
+| **Beijing2** | GDS | n/a | Supported | China Telecom, China Unicom, GDS |
+| **Shanghai** | China Telecom | n/a | Supported | China Telecom |
+| **Shanghai2** | GDS | n/a | Supported | China Telecom, China Unicom, GDS |
To learn more, see [ExpressRoute in China](http://www.windowsazure.cn/home/features/expressroute/).
firewall Deploy Rules Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-rules-powershell.md
+
+ Title: 'Add or modify multiple Azure Firewall rules using Azure PowerShell'
+description: In this article, you learn how to add or modify multiple Azure Firewall rules using Azure PowerShell.
+++ Last updated : 02/23/2022++++
+# Add or modify multiple Azure Firewall rules using Azure PowerShell
+
+When you add new rules to Azure Firewall or Azure Firewall policy, you should use the following steps to reduce the total update time:
+
+1. Retrieve the Azure Firewall or Azure Firewall Policy object.
+1. Add all new rules and perform other desired modifications in the local object. You can add them to an existing rule collection or create new ones as needed.
+1. Push the Firewall or the Firewall Policy updates only when all modifications are done.
+
+The following example shows how to add multiple new DNAT rules to an existing firewall policy using Azure PowerShell. Follow the same principles when:
+
+- You update Application or Network rules (a network rule sketch follows the DNAT example below).
+- You update a firewall managed with classic rules.
+
+Carefully review the following steps, and try them on a test policy first to confirm they work as expected for your needs.
+
+## Connect to your Azure account and set the context to your subscription
+
+```azurepowershell
+Connect-AzAccount
+Set-AzContext -Subscription "<Subscription ID>"
+
+```
+
+## Create local objects of the firewall policy, rule collection group, and rule collection
+
+```azurepowershell
+$policy = Get-AzFirewallPolicy -Name "<Policy Name>" -ResourceGroupName "<Resource Group Name>"
+$natrulecollectiongroup = Get-AzFirewallPolicyRuleCollectionGroup -Name "<Rule Collection Group Name>" -ResourceGroupName "<Resource Group Name>" -AzureFirewallPolicyName "<Firewall Policy Name>"
+$existingrulecollection = $natrulecollectiongroup.Properties.RuleCollection | where {$_.Name -eq "<Rule Collection Name>"}
+```
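+
+Optionally, inspect the local copy before changing it to confirm you retrieved the rule collection you intend to modify. A minimal check, using the variables defined above:
+
+```azurepowershell
+# List the names of the rules currently in the local rule collection object.
+$existingrulecollection.Rules | Select-Object Name
+```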
+
+## Define new rules to add
+
+```azurepowershell
+$newrule1 = New-AzFirewallPolicyNatRule -Name "dnat-rule1" -Protocol "TCP" -SourceAddress "<Source Address>" -DestinationAddress "<Destination>" -DestinationPort "<Destination Port>" -TranslatedAddress "<Translated Address>" -TranslatedPort "<Translated Port>"
+$newrule2 = New-AzFirewallPolicyNatRule -Name "dnat-rule2" -Protocol "TCP" -SourceAddress "<Source Address>" -DestinationAddress "<Destination>" -DestinationPort "<Destination Port>" -TranslatedAddress "<Translated Address>" -TranslatedPort "<Translated Port>"
+```
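+
+The same local-object pattern applies when you update Network or Application rules instead of DNAT rules. The following is a minimal sketch of defining a network rule; the rule name and placeholder values are illustrative and not part of the original example:
+
+```azurepowershell
+# Define a network rule locally (illustrative sketch; adjust values to your environment).
+$newnetworkrule = New-AzFirewallPolicyNetworkRule -Name "network-rule1" -Protocol "TCP" -SourceAddress "<Source Address>" -DestinationAddress "<Destination>" -DestinationPort "<Destination Port>"
+```
+
+You would then add it to the appropriate rule collection in the local object, just as the DNAT rules are added in the next step.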
+
+## Add the new rules to the local rule collection object
+
+```azurepowershell
+$existingrulecollection.Rules.Add($newrule1)
+$existingrulecollection.Rules.Add($newrule2)
+```
+
+Use this step to add any more rules, or to modify existing rules in the same rule collection group, as in the sketch below.
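+
+For example, a minimal sketch of editing a rule that already exists in the local collection before pushing the update (the property assignment shown is an assumption for illustration; adjust it to the rule and value you need):
+
+```azurepowershell
+# Find an existing rule by name in the local object and change one of its properties.
+$ruletoedit = $existingrulecollection.Rules | where {$_.Name -eq "dnat-rule1"}
+$ruletoedit.TranslatedPort = "<New Translated Port>"
+```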
+
+## Update the rule collection on Azure
+
+```azurepowershell
+Set-AzFirewallPolicyRuleCollectionGroup -Name "<Rule Collection Group Name>" -FirewallPolicyObject $policy -Priority 200 -RuleCollection $natrulecollectiongroup.Properties.RuleCollection
+```
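+
+Optionally, you can read the rule collection group back after the update to confirm the new rules are present. A minimal check, assuming the same placeholder names used above:
+
+```azurepowershell
+# Re-read the rule collection group and list the rule names in the modified collection.
+$updatedgroup = Get-AzFirewallPolicyRuleCollectionGroup -Name "<Rule Collection Group Name>" -ResourceGroupName "<Resource Group Name>" -AzureFirewallPolicyName "<Firewall Policy Name>"
+($updatedgroup.Properties.RuleCollection | where {$_.Name -eq "<Rule Collection Name>"}).Rules | Select-Object Name
+```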
+
+## Next steps
+
+- [Azure Firewall Policy rule sets](policy-rule-sets.md)
governance Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/resource-graph-samples.md
Title: Azure Resource Graph sample queries for management groups description: Sample Azure Resource Graph queries for management groups showing use of resource types and tables to access management group details. Previously updated : 01/20/2022 Last updated : 02/16/2022
governance Guest Configuration Baseline Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/guest-configuration-baseline-linux.md
Title: Reference - Azure Policy guest configuration baseline for Linux description: Details of the Linux baseline on Azure implemented through Azure Policy guest configuration. Previously updated : 08/24/2021 Last updated : 02/16/2022
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|Name<br /><sub>(CCEID)</sub> |Details |Remediation check |
|---|---|---|
-|Ensure `nodev` option set on /home partition.<br /><sub>(1.1.4)</sub> |Description: An attacker could mount a special device (for example, block or character device) on the /home partition. |Edit the /etc/fstab file and add `nodev` to the fourth field (mounting options) for the /home partition. For more information, see the fstab(5) manual pages. |
-|Ensure `nodev` option set on /tmp partition.<br /><sub>(1.1.5)</sub> |Description: An attacker could mount a special device (for example, block or character device) on the /tmp partition. |Edit the /etc/fstab file and add `nodev` to the fourth field (mounting options) for the /tmp partition. For more information, see the fstab(5) manual pages. |
-|Ensure `nodev` option set on /var/tmp partition.<br /><sub>(1.1.6)</sub> |Description: An attacker could mount a special device (for example, block or character device) on the /var/tmp partition. |Edit the /etc/fstab file and add nodev to the fourth field (mounting options) for the /var/tmp partition. For more information, see the fstab(5) manual pages. |
-|Ensure `nosuid` option set on /tmp partition.<br /><sub>(1.1.7)</sub> |Description: Since the /tmp filesystem is only intended for temporary file storage, set this option to ensure that users cannot create `setuid` files in /var/tmp. |Edit the /etc/fstab file and add `nosuid` to the fourth field (mounting options) for the /tmp partition. For more information, see the fstab(5) manual pages. |
-|Ensure `nosuid` option set on /var/tmp partition.<br /><sub>(1.1.8)</sub> |Description: Since the /var/tmp filesystem is only intended for temporary file storage, set this option to ensure that users cannot create `setuid` files in /var/tmp. |Edit the /etc/fstab file and add `nosuid` to the fourth field (mounting options) for the /var/tmp partition. For more information, see the fstab(5) manual pages. |
-|Ensure `noexec` option set on /var/tmp partition.<br /><sub>(1.1.9)</sub> |Description: Since the `/var/tmp` filesystem is only intended for temporary file storage, set this option to ensure that users cannot run executable binaries from `/var/tmp` . |Edit the /etc/fstab file and add `noexec` to the fourth field (mounting options) for the /var/tmp partition. For more information, see the fstab(5) manual pages. |
-|Ensure `noexec` option set on /dev/shm partition.<br /><sub>(1.1.16)</sub> |Description: Setting this option on a file system prevents users from executing programs from shared memory. This deters users from introducing potentially malicious software on the system. |Edit the /etc/fstab file and add `noexec` to the fourth field (mounting options) for the /dev/shm partition. For more information, see the fstab(5) manual pages. |
+|Ensure nodev option set on /home partition.<br /><sub>(1.1.4)</sub> |Description: An attacker could mount a special device (for example, block or character device) on the /home partition. |Edit the /etc/fstab file and add nodev to the fourth field (mounting options) for the /home partition. For more information, see the fstab(5) manual pages. |
+|Ensure nodev option set on /tmp partition.<br /><sub>(1.1.5)</sub> |Description: An attacker could mount a special device (for example, block or character device) on the /tmp partition. |Edit the /etc/fstab file and add nodev to the fourth field (mounting options) for the /tmp partition. For more information, see the fstab(5) manual pages. |
+|Ensure nodev option set on /var/tmp partition.<br /><sub>(1.1.6)</sub> |Description: An attacker could mount a special device (for example, block or character device) on the /var/tmp partition. |Edit the /etc/fstab file and add nodev to the fourth field (mounting options) for the /var/tmp partition. For more information, see the fstab(5) manual pages. |
+|Ensure nosuid option set on /tmp partition.<br /><sub>(1.1.7)</sub> |Description: Since the /tmp filesystem is only intended for temporary file storage, set this option to ensure that users cannot create setuid files in /tmp. |Edit the /etc/fstab file and add nosuid to the fourth field (mounting options) for the /tmp partition. For more information, see the fstab(5) manual pages. |
+|Ensure nosuid option set on /var/tmp partition.<br /><sub>(1.1.8)</sub> |Description: Since the /var/tmp filesystem is only intended for temporary file storage, set this option to ensure that users cannot create setuid files in /var/tmp. |Edit the /etc/fstab file and add nosuid to the fourth field (mounting options) for the /var/tmp partition. For more information, see the fstab(5) manual pages. |
+|Ensure noexec option set on /var/tmp partition.<br /><sub>(1.1.9)</sub> |Description: Since the `/var/tmp` filesystem is only intended for temporary file storage, set this option to ensure that users cannot run executable binaries from `/var/tmp` . |Edit the /etc/fstab file and add noexec to the fourth field (mounting options) for the /var/tmp partition. For more information, see the fstab(5) manual pages. |
+|Ensure noexec option set on /dev/shm partition.<br /><sub>(1.1.16)</sub> |Description: Setting this option on a file system prevents users from executing programs from shared memory. This deters users from introducing potentially malicious software on the system. |Edit the /etc/fstab file and add noexec to the fourth field (mounting options) for the /dev/shm partition. For more information, see the fstab(5) manual pages. |
|Disable automounting<br /><sub>(1.1.21)</sub> |Description: With automounting enabled, anyone with physical access could attach a USB drive or disc and have its contents available in system even if they lack permissions to mount it themselves. |Disable the autofs service or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-autofs' | |Ensure mounting of USB storage devices is disabled<br /><sub>(1.1.21.1)</sub> |Description: Removing support for USB storage devices reduces the local attack surface of the server. |Edit or create a file in the `/etc/modprobe.d/` directory ending in .conf and add `install usb-storage /bin/true` then unload the usb-storage module or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' | |Ensure core dumps are restricted.<br /><sub>(1.5.1)</sub> |Description: Setting a hard limit on core dumps prevents users from overriding the soft variable. If core dumps are required, consider setting limits for user groups (see `limits.conf(5)` ). In addition, setting the `fs.suid_dumpable` variable to 0 will prevent setuid programs from dumping core. |Add `hard core 0` to /etc/security/limits.conf or a file in the limits.d directory and set `fs.suid_dumpable = 0` in sysctl or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-core-dumps' |
-|Ensure prelink is disabled.<br /><sub>(1.5.4)</sub> |Description: The prelinking feature can interfere with the operation of AIDE, because it changes binaries. Prelinking can also increase the vulnerability of the system if a malicious user is able to compromise a common library such as `libc`. | Uninstall `prelink` using your package manager or run '/opt/microsoft/omsagent/plugin/omsremediate -r remove-prelink' |
+|Ensure prelink is disabled.<br /><sub>(1.5.4)</sub> |Description: The prelinking feature can interfere with the operation of AIDE, because it changes binaries. Prelinking can also increase the vulnerability of the system if a malicious user is able to compromise a common library such as libc. |Uninstall `prelink` using your package manager or run '/opt/microsoft/omsagent/plugin/omsremediate -r remove-prelink' |
|Ensure permissions on /etc/motd are configured.<br /><sub>(1.7.1.4)</sub> |Description: If the `/etc/motd` file does not have the correct ownership, it could be modified by unauthorized users with incorrect or misleading information. |Set the owner and group of /etc/motd to root and set permissions to 0644 or run '/opt/microsoft/omsagent/plugin/omsremediate -r file-permissions' | |Ensure permissions on /etc/issue are configured.<br /><sub>(1.7.1.5)</sub> |Description: If the `/etc/issue` file does not have the correct ownership, it could be modified by unauthorized users with incorrect or misleading information. |Set the owner and group of /etc/issue to root and set permissions to 0644 or run '/opt/microsoft/omsagent/plugin/omsremediate -r file-permissions' | |Ensure permissions on /etc/issue.net are configured.<br /><sub>(1.7.1.6)</sub> |Description: If the `/etc/issue.net` file does not have the correct ownership, it could be modified by unauthorized users with incorrect or misleading information. |Set the owner and group of /etc/issue.net to root and set permissions to 0644 or run '/opt/microsoft/omsagent/plugin/omsremediate -r file-permissions' |
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|The nodev/nosuid option should be enabled for all NFS mounts.<br /><sub>(5)</sub> |Description: An attacker could load files that run with an elevated security context or special devices via remote file system |Add the nosuid and nodev options to the fourth field (mounting options) in /etc/fstab. For more information, see the fstab(5) manual pages. | |Ensure permissions on /etc/ssh/sshd_config are configured.<br /><sub>(5.2.1)</sub> |Description: The `/etc/ssh/sshd_config` file needs to be protected from unauthorized changes by non-privileged users. |Set the owner and group of /etc/ssh/sshd_config to root and set the permissions to 0600 or run '/opt/microsoft/omsagent/plugin/omsremediate -r sshd-config-file-permissions' | |Ensure password creation requirements are configured.<br /><sub>(5.3.1)</sub> |Description: Strong passwords protect systems from being hacked through brute force methods. |Set the following key/value pairs in the appropriate PAM for your distro: minlen=14, minclass = 4, dcredit = -1, ucredit = -1, ocredit = -1, lcredit = -1, or run '/opt/microsoft/omsagent/plugin/omsremediate -r enable-password-requirements' |
-|Ensure lockout for failed password attempts is configured.<br /><sub>(5.3.2)</sub> |Description: Locking out user IDs after `n` unsuccessful consecutive login attempts mitigates brute force password attacks against your systems. | For Ubuntu and Debian, add the pam_tally and pam_deny modules as appropriate. For all other distros, refer to your distro's documentation |
+|Ensure lockout for failed password attempts is configured.<br /><sub>(5.3.2)</sub> |Description: Locking out user IDs after `n` unsuccessful consecutive login attempts mitigates brute force password attacks against your systems. |For Ubuntu and Debian, add the pam_tally and pam_deny modules as appropriate. For all other distros, refer to your distro's documentation |
|Disable the installation and use of file systems that are not required (cramfs)<br /><sub>(6.1)</sub> |Description: An attacker could use a vulnerability in cramfs to elevate privileges |Add a file to the /etc/modprob.d directory that disables cramfs or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' | |Disable the installation and use of file systems that are not required (freevxfs)<br /><sub>(6.2)</sub> |Description: An attacker could use a vulnerability in freevxfs to elevate privileges |Add a file to the /etc/modprob.d directory that disables freevxfs or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' | |Ensure all users' home directories exist<br /><sub>(6.2.7)</sub> |Description: If the user's home directory does not exist or is unassigned, the user will be placed in '/' and will not be able to write any files or have local environment variables set. |If any users' home directories do not exist, create them and make sure the respective user owns the directory. Users without an assigned home directory should be removed or assigned a home directory as appropriate. | |Ensure users own their home directories<br /><sub>(6.2.9)</sub> |Description: Since the user is accountable for files stored in the user home directory, the user must be the owner of the directory. |Change the ownership of any home directories that are not owned by the defined user to the correct user. | |Ensure users' dot files are not group or world writable.<br /><sub>(6.2.10)</sub> |Description: Group or world-writable user configuration files may enable malicious users to steal or modify other users' data or to gain another user's system privileges. |Making global modifications to users' files without alerting the user community can result in unexpected outages and unhappy users. Therefore, it is recommended that a monitoring policy be established to report user dot file permissions and determine the action to be taken in accordance with site policy. |
-|Ensure no users have `.forward` files<br /><sub>(6.2.11)</sub> |Description: Use of the `.forward` file poses a security risk in that sensitive data may be inadvertently transferred outside the organization. The `.forward` file also poses a risk as it can be used to execute commands that may perform unintended actions. |Making global modifications to users' files without alerting the user community can result in unexpected outages and unhappy users. Therefore, it is recommended that a monitoring policy be established to report user `.forward` files and determine the action to be taken in accordance with site policy. |
-|Ensure no users have `.netrc` files<br /><sub>(6.2.12)</sub> |Description: The `.netrc` file presents a significant security risk since it stores passwords in unencrypted form. Even if FTP is disabled, user accounts may have brought over `.netrc` files from other systems which could pose a risk to those systems |Making global modifications to users' files without alerting the user community can result in unexpected outages and unhappy users. Therefore, it is recommended that a monitoring policy be established to report user `.netrc` files and determine the action to be taken in accordance with site policy. |
-|Ensure no users have `.rhosts` files<br /><sub>(6.2.14)</sub> |Description: This action is only meaningful if `.rhosts` support is permitted in the file `/etc/pam.conf` . Even though the `.rhosts` files are ineffective if support is disabled in `/etc/pam.conf` , they may have been brought over from other systems and could contain information useful to an attacker for those other systems. |Making global modifications to users' files without alerting the user community can result in unexpected outages and unhappy users. Therefore, it is recommended that a monitoring policy be established to report user `.rhosts` files and determine the action to be taken in accordance with site policy. |
+|Ensure no users have .forward files<br /><sub>(6.2.11)</sub> |Description: Use of the `.forward` file poses a security risk in that sensitive data may be inadvertently transferred outside the organization. The `.forward` file also poses a risk as it can be used to execute commands that may perform unintended actions. |Making global modifications to users' files without alerting the user community can result in unexpected outages and unhappy users. Therefore, it is recommended that a monitoring policy be established to report user `.forward` files and determine the action to be taken in accordance with site policy. |
+|Ensure no users have .netrc files<br /><sub>(6.2.12)</sub> |Description: The `.netrc` file presents a significant security risk since it stores passwords in unencrypted form. Even if FTP is disabled, user accounts may have brought over `.netrc` files from other systems which could pose a risk to those systems |Making global modifications to users' files without alerting the user community can result in unexpected outages and unhappy users. Therefore, it is recommended that a monitoring policy be established to report user `.netrc` files and determine the action to be taken in accordance with site policy. |
+|Ensure no users have .rhosts files<br /><sub>(6.2.14)</sub> |Description: This action is only meaningful if `.rhosts` support is permitted in the file `/etc/pam.conf` . Even though the `.rhosts` files are ineffective if support is disabled in `/etc/pam.conf` , they may have been brought over from other systems and could contain information useful to an attacker for those other systems. |Making global modifications to users' files without alerting the user community can result in unexpected outages and unhappy users. Therefore, it is recommended that a monitoring policy be established to report user `.rhosts` files and determine the action to be taken in accordance with site policy. |
|Ensure all groups in /etc/passwd exist in /etc/group<br /><sub>(6.2.15)</sub> |Description: Groups which are defined in the /etc/passwd file but not in the /etc/group file poses a threat to system security since group permissions are not properly managed. |For each group defined in /etc/passwd, ensure there is a corresponding group in /etc/group | |Ensure no duplicate UIDs exist<br /><sub>(6.2.16)</sub> |Description: Users must be assigned unique UIDs for accountability and to ensure appropriate access protections. |Establish unique UIDs and review all files owned by the shared UIDs to determine which UID they are supposed to belong to. | |Ensure no duplicate GIDs exist<br /><sub>(6.2.17)</sub> |Description: Groups must be assigned unique GIDs for accountability and to ensure appropriate access protections. |Establish unique GIDs and review all files owned by the shared GIDs to determine which GID they are supposed to belong to. |
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|/etc/group- file permissions should be 0644<br /><sub>(12.4)</sub> |Description: An attacker could elevate privileges by modifying group membership |Set the permissions and ownership of /etc/group- or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-etc-group-perms | |Access to the root account via su should be restricted to the 'root' group<br /><sub>(21)</sub> |Description: An attacker could escalate permissions by password guessing if su is not restricted to users in the root group. |Run the command '/opt/microsoft/omsagent/plugin/omsremediate -r fix-su-permissions'. This will add the line 'auth required pam_wheel.so use_uid' to the file '/etc/pam.d/su' | |The 'root' group should exist, and contain all members who can su to root<br /><sub>(22)</sub> |Description: An attacker could escalate permissions by password guessing if su is not restricted to users in the root group. |Create the root group via the command 'groupadd -g 0 root' |
-|All accounts should have a password<br /><sub>(23.2)</sub> |Description: An attacker can log in to accounts with no password and execute arbitrary commands. |Use the passwd command to set passwords for all accounts |
+|All accounts should have a password<br /><sub>(23.2)</sub> |Description: An attacker can log in to accounts with no password and execute arbitrary commands. |Use the passwd command to set passwords for all accounts |
|Accounts other than root must have unique UIDs greater than zero(0)<br /><sub>(24)</sub> |Description: If an account other than root has uid zero, an attacker could compromise the account and gain root privileges. |Assign unique, non-zero uids to all non-root accounts using 'usermod -u' | |Randomized placement of virtual memory regions should be enabled<br /><sub>(25)</sub> |Description: An attacker could write executable code to known regions in memory resulting in elevation of privilege |Add the value '1' or '2' to the file '/proc/sys/kernel/randomize_va_space' | |Kernel support for the XD/NX processor feature should be enabled<br /><sub>(26)</sub> |Description: An attacker could cause a system to execute code from data regions in memory resulting in elevation of privilege. |Confirm the file '/proc/cpuinfo' contains the flag 'nx' |
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|User home directories should be mode 750 or more restrictive<br /><sub>(28)</sub> |Description: An attacker could retrieve sensitive information from the home folders of other users. |Set home folder permissions to 750 or run '/opt/microsoft/omsagent/plugin/omsremediate -r fix-home-dir-permissions | |The default umask for all users should be set to 077 in login.defs<br /><sub>(29)</sub> |Description: An attacker could retrieve sensitive information from files owned by other users. |Run the command '/opt/microsoft/omsagent/plugin/omsremediate -r set-default-user-umask'. This will add the line 'UMASK 077' to the file '/etc/login.defs' | |All bootloaders should have password protection enabled.<br /><sub>(31)</sub> |Description: An attacker with physical access could modify bootloader options, yielding unrestricted system access |Add a boot loader password to the file '/boot/grub/grub.cfg' |
-|Ensure permissions on bootloader config are configured<br /><sub>(31.1)</sub> |Description: Setting the permissions to read and write for root only prevents non-root users from seeing the boot parameters or changing them. Non-root users who read the boot parameters may be able to identify weaknesses in security upon boot and be able to exploit them. |Set the owner and group of your bootloader to `root:root` and permissions to 0400 or run '/opt/microsoft/omsagent/plugin/omsremediate -r bootloader-permissions' |
+|Ensure permissions on bootloader config are configured<br /><sub>(31.1)</sub> |Description: Setting the permissions to read and write for root only prevents non-root users from seeing the boot parameters or changing them. Non-root users who read the boot parameters may be able to identify weaknesses in security upon boot and be able to exploit them. |Set the owner and group of your bootloader to root:root and permissions to 0400 or run '/opt/microsoft/omsagent/plugin/omsremediate -r bootloader-permissions' |
|Ensure authentication required for single user mode.<br /><sub>(33)</sub> |Description: Requiring authentication in single user mode prevents an unauthorized user from rebooting the system into single user to gain root privileges without credentials. |run the following command to set a password for the root user: `passwd root` |
-|Ensure packet redirect sending is disabled.<br /><sub>(38.3)</sub> |Description: An attacker could use a compromised host to send invalid ICMP redirects to other router devices in an attempt to corrupt routing and have users access a system set up by the attacker as opposed to a valid system. |set the following parameters in /etc/sysctl.conf: `net.ipv4.conf.all.send_redirects = 0` and `net.ipv4.conf.default.send_redirects = 0` or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-send-redirects' |
-|Sending ICMP redirects should be disabled for all interfaces. (net.ipv4.conf.default.accept_redirects = 0)<br /><sub>(38.4)</sub> |Description: An attacker could alter this system's routing table, redirecting traffic to an alternate destination |Run `sysctl -w key=value` and set to a compliant value or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-accept-redirects' |
+|Ensure packet redirect sending is disabled.<br /><sub>(38.3)</sub> |Description: An attacker could use a compromised host to send invalid ICMP redirects to other router devices in an attempt to corrupt routing and have users access a system set up by the attacker as opposed to a valid system. |Set the following parameters in /etc/sysctl.conf: 'net.ipv4.conf.all.send_redirects = 0' and 'net.ipv4.conf.default.send_redirects = 0' or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-send-redirects' |
+|Sending ICMP redirects should be disabled for all interfaces. (net.ipv4.conf.default.accept_redirects = 0)<br /><sub>(38.4)</sub> |Description: An attacker could alter this system's routing table, redirecting traffic to an alternate destination |Run `sysctl -w key=value` and set to a compliant value or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-accept-redirects'. |
|Sending ICMP redirects should be disabled for all interfaces. (net.ipv4.conf.default.secure_redirects = 0)<br /><sub>(38.5)</sub> |Description: An attacker could alter this system's routing table, redirecting traffic to an alternate destination |Run `sysctl -w key=value` and set to a compliant value or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-secure-redirects' | |Accepting source routed packets should be disabled for all interfaces. (net.ipv4.conf.all.accept_source_route = 0)<br /><sub>(40.1)</sub> |Description: An attacker could redirect traffic for malicious purposes. |Run `sysctl -w key=value` and set to a compliant value. | |Accepting source routed packets should be disabled for all interfaces. (net.ipv6.conf.all.accept_source_route = 0)<br /><sub>(40.2)</sub> |Description: An attacker could redirect traffic for malicious purposes. |Run `sysctl -w key=value` and set to a compliant value. |
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|The logrotate (syslog rotater) service should be enabled.<br /><sub>(68)</sub> |Description: Logfiles could grow unbounded and consume all disk space |Install the logrotate package and confirm the logrotate cron entry is active (chmod 755 /etc/cron.daily/logrotate; chown root:root /etc/cron.daily/logrotate) | |The rlogin service should be disabled.<br /><sub>(69)</sub> |Description: An attacker could gain access, bypassing strict authentication requirements |Remove the inetd service. | |Disable inetd unless required. (inetd)<br /><sub>(70.1)</sub> |Description: An attacker could exploit a vulnerability in an inetd service to gain access |Uninstall the inetd service (apt-get remove inetd) |
-|Disable xinetd unless required. (xinetd)<br /><sub>(70.2)</sub> |Description: An attacker could exploit a vulnerability in an xinetd service to gain access |Uninstall the inetd service (apt-get remove xinetd) |
+|Disable xinetd unless required. (xinetd)<br /><sub>(70.2)</sub> |Description: An attacker could exploit a vulnerability in an xinetd service to gain access |Uninstall the xinetd service (apt-get remove xinetd) |
|Install inetd only if appropriate and required by your distro. Secure according to current hardening standards. (if required)<br /><sub>(71.1)</sub> |Description: An attacker could exploit a vulnerability in an inetd service to gain access |Uninstall the inetd service (apt-get remove inetd) | |Install xinetd only if appropriate and required by your distro. Secure according to current hardening standards. (if required)<br /><sub>(71.2)</sub> |Description: An attacker could exploit a vulnerability in an xinetd service to gain access |Uninstall the inetd service (apt-get remove xinetd) | |The telnet service should be disabled.<br /><sub>(72)</sub> |Description: An attacker could eavesdrop or hijack unencrypted telnet sessions |Remove or comment out the telnet entry in the file '/etc/inetd.conf' |
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|Ensure permissions on /etc/cron.hourly are configured.<br /><sub>(95)</sub> |Description: Granting write access to this directory for non-privileged users could provide them the means for gaining unauthorized elevated privileges. Granting read access to this directory could give an unprivileged user insight in how to gain elevated privileges or circumvent auditing controls. |Set the owner and group of /etc/chron.hourly to root and permissions to 0700 or run '/opt/microsoft/omsagent/plugin/omsremediate -r fix-cron-file-perms | |Ensure permissions on /etc/cron.monthly are configured.<br /><sub>(96)</sub> |Description: Granting write access to this directory for non-privileged users could provide them the means for gaining unauthorized elevated privileges. Granting read access to this directory could give an unprivileged user insight in how to gain elevated privileges or circumvent auditing controls. |Set the owner and group of /etc/chron.monthly to root and permissions to 0700 or run '/opt/microsoft/omsagent/plugin/omsremediate -r fix-cron-file-perms | |Ensure permissions on /etc/cron.weekly are configured.<br /><sub>(97)</sub> |Description: Granting write access to this directory for non-privileged users could provide them the means for gaining unauthorized elevated privileges. Granting read access to this directory could give an unprivileged user insight in how to gain elevated privileges or circumvent auditing controls. |Set the owner and group of /etc/chron.weekly to root and permissions to 0700 or run '/opt/microsoft/omsagent/plugin/omsremediate -r fix-cron-file-perms |
-|Ensure at/cron is restricted to authorized users<br /><sub>(98)</sub> |Description: On many systems, only the system administrator is authorized to schedule `cron` jobs. Using the `cron.allow` file to control who can run `cron` jobs enforces this policy. It is easier to manage an allowlist than a deny list. In a deny list, you could potentially add a user ID to the system and forget to add it to the deny files. |replace /etc/cron.deny and /etc/at.deny with their respective `allow` files |
+|Ensure at/cron is restricted to authorized users<br /><sub>(98)</sub> |Description: On many systems, only the system administrator is authorized to schedule `cron` jobs. Using the `cron.allow` file to control who can run `cron` jobs enforces this policy. It is easier to manage an allowlist than a denylist. In a denylist, you could potentially add a user ID to the system and forget to add it to the deny files. |Replace /etc/cron.deny and /etc/at.deny with their respective `allow` files or run '/opt/microsoft/omsagent/plugin/omsremediate -r fix-cron-job-allow' |
|SSH must be configured and managed to meet best practices. - '/etc/ssh/sshd_config Protocol = 2'<br /><sub>(106.1)</sub> |Description: An attacker could use flaws in an earlier version of the SSH protocol to gain access |Run the command '/opt/microsoft/omsagent/plugin/omsremediate -r configure-ssh-protocol'. This will set 'Protocol 2' in the file '/etc/ssh/sshd_config' | |SSH must be configured and managed to meet best practices. - '/etc/ssh/sshd_config IgnoreRhosts = yes'<br /><sub>(106.3)</sub> |Description: An attacker could use flaws in the Rhosts protocol to gain access |Run the command '/usr/local/bin/azsecd remediate (/opt/microsoft/omsagent/plugin/omsremediate) -r enable-ssh-ignore-rhosts'. This will add the line 'IgnoreRhosts yes' to the file '/etc/ssh/sshd_config' | |Ensure SSH LogLevel is set to INFO<br /><sub>(106.5)</sub> |Description: SSH provides several logging levels with varying amounts of verbosity. `DEBUG `is specifically _not_ recommended other than strictly for debugging SSH communications since it provides so much data that it is difficult to identify important security information. `INFO `level is the basic level that only records login activity of SSH users. In many situations, such as Incident Response, it is important to determine when a particular user was active on a system. The logout record can eliminate those users who disconnected, which helps narrow the field. |Edit the `/etc/ssh/sshd_config` file to set the parameter as follows: ``` LogLevel INFO ``` |
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|The ldap service should be disabled.<br /><sub>(124)</sub> |Description: An attacker could manipulate the LDAP service on this host to distribute false data to LDAP clients |Uninstall the slapd package (apt-get remove slapd) | |The rpcgssd service should be disabled.<br /><sub>(126)</sub> |Description: An attacker could use a flaw in rpcgssd/nfs to gain access |Disable the rpcgssd service or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-rpcgssd' | |The rpcidmapd service should be disabled.<br /><sub>(127)</sub> |Description: An attacker could use a flaw in idmapd/nfs to gain access |Disable the rpcidmapd service or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-rpcidmapd' |
-|The portmap service should be disabled.<br /><sub>(129)</sub> |Description: An attacker could use a flaw in portmap to gain access |Disable the rpcbind service or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-rpcbind' |
+|The portmap service should be disabled.<br /><sub>(129.1)</sub> |Description: An attacker could use a flaw in portmap to gain access |Disable the rpcbind service or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-rpcbind' |
+|The Network File System (NFS) service should be disabled.<br /><sub>(129.2)</sub> |Description: An attacker could use nfs to mount shares and execute/copy files. |Disable the nfs service or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-nfs' |
|The rpcsvcgssd service should be disabled.<br /><sub>(130)</sub> |Description: An attacker could use a flaw in rpcsvcgssd to gain access |Remove the line 'NEED_SVCGSSD = yes' from the file '/etc/inetd.conf' | |The named service should be disabled.<br /><sub>(131)</sub> |Description: An attacker could use the DNS service to distribute false data to clients |Uninstall the bind9 package (apt-get remove bind9) | |The bind package should be uninstalled.<br /><sub>(132)</sub> |Description: An attacker could use the DNS service to distribute false data to clients |Uninstall the bind9 package (apt-get remove bind9) |
governance Guest Configuration Baseline Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/guest-configuration-baseline-windows.md
Title: Reference - Azure Policy guest configuration baseline for Windows description: Details of the Windows baseline on Azure implemented through Azure Policy guest configuration. Previously updated : 08/24/2021 Last updated : 02/16/2022
For more information, see [Azure Policy guest configuration](../concepts/guest-c
|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Allow Input Personalization<br /><sub>(AZ-WIN-00168)</sub> |**Description**: This policy enables the automatic learning component of input personalization that includes speech, inking, and typing. Automatic learning enables the collection of speech and handwriting patterns, typing history, contacts, and recent calendar information. It is required for the use of Cortana. Some of this collected information may be stored on the user's OneDrive, in the case of inking and typing; some of the information will be uploaded to Microsoft to personalize speech. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\InputPersonalization\AllowInputPersonalization<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
-|Prevent enabling lock screen camera<br /><sub>(CCE-38347-1)</sub> |**Description**: Disables the lock screen camera toggle switch in PC Settings and prevents a camera from being invoked on the lock screen. By default, users can enable invocation of an available camera on the lock screen. If you enable this setting, users will no longer be able to enable or disable lock screen camera access in PC Settings, and the camera cannot be invoked on the lock screen.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Personalization\NoLockScreenCamera<br />**OS**: WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Prevent enabling lock screen slide show<br /><sub>(CCE-38348-9)</sub> |**Description**: Disables the lock screen slide show settings in PC Settings and prevents a slide show from playing on the lock screen. By default, users can enable a slide show that will run after they lock the machine. If you enable this setting, users will no longer be able to modify slide show settings in PC Settings, and no slide show will ever start.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Personalization\NoLockScreenSlideshow<br />**OS**: WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Allow Input Personalization<br /><sub>(AZ-WIN-00168)</sub> |**Description**: This policy enables the automatic learning component of input personalization that includes speech, inking, and typing. Automatic learning enables the collection of speech and handwriting patterns, typing history, contacts, and recent calendar information. It is required for the use of Cortana. Some of this collected information may be stored on the user's OneDrive, in the case of inking and typing; some of the information will be uploaded to Microsoft to personalize speech. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\InputPersonalization\AllowInputPersonalization<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Prevent enabling lock screen camera<br /><sub>(CCE-38347-1)</sub> |**Description**: Disables the lock screen camera toggle switch in PC Settings and prevents a camera from being invoked on the lock screen. By default, users can enable invocation of an available camera on the lock screen. If you enable this setting, users will no longer be able to enable or disable lock screen camera access in PC Settings, and the camera cannot be invoked on the lock screen.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Personalization\NoLockScreenCamera<br />**OS**: WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Prevent enabling lock screen slide show<br /><sub>(CCE-38348-9)</sub> |**Description**: Disables the lock screen slide show settings in PC Settings and prevents a slide show from playing on the lock screen. By default, users can enable a slide show that will run after they lock the machine. If you enable this setting, users will no longer be able to modify slide show settings in PC Settings, and no slide show will ever start.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Personalization\NoLockScreenSlideshow<br />**OS**: WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
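+
+Each row above maps to a registry value and an expected setting. As a purely illustrative local check (Azure Policy guest configuration performs its own evaluation), you could read one of these values with PowerShell; the sketch below uses the lock screen camera setting from the table:
+
+```powershell
+# Read the NoLockScreenCamera policy value (expected: 1) and report a simple result.
+$key = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\Personalization"
+$item = Get-ItemProperty -Path $key -Name "NoLockScreenCamera" -ErrorAction SilentlyContinue
+if ($item.NoLockScreenCamera -eq 1) { "Compliant" } else { "Not configured or not compliant" }
+```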
## Administrative Templates - Network

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Enable insecure guest logons<br /><sub>(AZ-WIN-00171)</sub> |**Description**: This policy setting determines if the SMB client will allow insecure guest logons to an SMB server. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\LanmanWorkstation\AllowInsecureGuestAuth<br />**OS**: WS2016<br />**Server Type**: Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
-|Minimize the number of simultaneous connections to the Internet or a Windows Domain<br /><sub>(CCE-38338-0)</sub> |**Description**: This policy setting prevents computers from connecting to both a domain based network and a non-domain based network at the same time. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WcmSvc\GroupPolicy\fMinimizeConnections<br />**OS**: WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
-|Prohibit installation and configuration of Network Bridge on your DNS domain network<br /><sub>(CCE-38002-2)</sub> |**Description**: You can use this procedure to control user's ability to install and configure a network bridge. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Network Connections\NC_AllowNetBridge_NLA<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
-|Prohibit use of Internet Connection Sharing on your DNS domain network<br /><sub>(AZ-WIN-00172)</sub> |**Description**: Although this "legacy" setting traditionally applied to the use of Internet Connection Sharing (ICS) in Windows 2000, Windows XP & Server 2003, this setting now freshly applies to the Mobile Hotspot feature in Windows 10 & Server 2016. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Network Connections\NC_ShowSharedAccessUI<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
-|Turn off multicast name resolution<br /><sub>(AZ-WIN-00145)</sub> |**Description**: LLMNR is a secondary name resolution protocol. With LLMNR, queries are sent using multicast over a local network link on a single subnet from a client computer to another client computer on the same subnet that also has LLMNR enabled. LLMNR does not require a DNS server or DNS client configuration, and provides name resolution in scenarios in which conventional DNS name resolution is not possible. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\DNSClient\EnableMulticast<br />**OS**: WS2016<br />**Server Type**: Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Enable insecure guest logons<br /><sub>(AZ-WIN-00171)</sub> |**Description**: This policy setting determines if the SMB client will allow insecure guest logons to an SMB server. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\LanmanWorkstation\AllowInsecureGuestAuth<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Minimize the number of simultaneous connections to the Internet or a Windows Domain<br /><sub>(CCE-38338-0)</sub> |**Description**: This policy setting prevents computers from connecting to both a domain based network and a non-domain based network at the same time. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WcmSvc\GroupPolicy\fMinimizeConnections<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
+|Prohibit installation and configuration of Network Bridge on your DNS domain network<br /><sub>(CCE-38002-2)</sub> |**Description**: You can use this setting to control users' ability to install and configure a network bridge. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Network Connections\NC_AllowNetBridge_NLA<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Prohibit use of Internet Connection Sharing on your DNS domain network<br /><sub>(AZ-WIN-00172)</sub> |**Description**: Although this "legacy" setting traditionally applied to the use of Internet Connection Sharing (ICS) in Windows 2000, Windows XP, and Server 2003, it now also applies to the Mobile Hotspot feature in Windows 10 and Server 2016. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Network Connections\NC_ShowSharedAccessUI<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Turn off multicast name resolution<br /><sub>(AZ-WIN-00145)</sub> |**Description**: LLMNR is a secondary name resolution protocol. With LLMNR, queries are sent using multicast over a local network link on a single subnet from a client computer to another client computer on the same subnet that also has LLMNR enabled. LLMNR does not require a DNS server or DNS client configuration, and provides name resolution in scenarios in which conventional DNS name resolution is not possible. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\DNSClient\EnableMulticast<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
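Auditing a network row such as "Turn off multicast name resolution" or "Enable insecure guest logons" reduces to reading the named value under its key path and comparing it with the Expected value column. A small standard-library sketch, assuming local read access to `HKEY_LOCAL_MACHINE`; the paths and expected data are copied from the table above:

```python
# Sketch: report whether two of the network policy values above match the
# baseline's expected data (0 in both cases). A missing value fails here,
# because these rows require "= 0" rather than "Doesn't exist or = 0".
import winreg

CHECKS = [
    (r"SOFTWARE\Policies\Microsoft\Windows NT\DNSClient", "EnableMulticast", 0),
    (r"SOFTWARE\Policies\Microsoft\Windows\LanmanWorkstation", "AllowInsecureGuestAuth", 0),
]

def read_dword(key_path, value_name):
    """Return the DWORD data, or None if the key or value is missing."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
            data, _type = winreg.QueryValueEx(key, value_name)
            return data
    except FileNotFoundError:
        return None

for key_path, name, expected in CHECKS:
    actual = read_dword(key_path, name)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: {key_path}\\{name} = {actual!r} (expected {expected})")
```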
## Administrative Templates - System

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Block user from showing account details on sign-in<br /><sub>(AZ-WIN-00138)</sub> |**Description**: This policy prevents the user from showing account details (email address or user name) on the sign-in screen. If you enable this policy setting, the user cannot choose to show account details on the sign-in screen. If you disable or do not configure this policy setting, the user may choose to show account details on the sign-in screen.<br />**Key Path**: Software\Policies\Microsoft\Windows\System\BlockUserFromShowingAccountDetailsOnSignin<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Boot-Start Driver Initialization Policy<br /><sub>(CCE-37912-3)</sub> |**Description**: This policy setting allows you to specify which boot-start drivers are initialized based on a classification determined by an Early Launch Antimalware boot-start driver. The Early Launch Antimalware boot-start driver can return the following classifications for each boot-start driver: - Good: The driver has been signed and has not been tampered with. - Bad: The driver has been identified as malware. It is recommended that you do not allow known bad drivers to be initialized. - Bad, but required for boot: The driver has been identified as malware, but the computer cannot successfully boot without loading this driver. - Unknown: This driver has not been attested to by your malware detection application and has not been classified by the Early Launch Antimalware boot-start driver. If you enable this policy setting you will be able to choose which boot-start drivers to initialize the next time the computer is started. If you disable or do not configure this policy setting, the boot start drivers determined to be Good, Unknown or Bad but Boot Critical are initialized and the initialization of drivers determined to be Bad is skipped. If your malware detection application does not include an Early Launch Antimalware boot-start driver or if your Early Launch Antimalware boot-start driver has been disabled, this setting has no effect and all boot-start drivers are initialized.<br />**Key Path**: SYSTEM\CurrentControlSet\Policies\EarlyLaunch\DriverLoadPolicy<br />**OS**: WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 3<br /><sub>(Registry)</sub> |Warning |
-|Configure Offer Remote Assistance<br /><sub>(CCE-36388-7)</sub> |**Description**: This policy setting allows you to turn on or turn off Offer (Unsolicited) Remote Assistance on this computer. Help desk and support personnel will not be able to proactively offer assistance, although they can still respond to user assistance requests. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fAllowUnsolicited<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
-|Configure Solicited Remote Assistance<br /><sub>(CCE-37281-3)</sub> |**Description**: This policy setting allows you to turn on or turn off Solicited (Ask for) Remote Assistance on this computer. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fAllowToGetHelp<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
-|Do not display network selection UI<br /><sub>(CCE-38353-9)</sub> |**Description**: This policy setting allows you to control whether anyone can interact with available networks UI on the logon screen. If you enable this policy setting, the PC's network connectivity state cannot be changed without signing into Windows. If you disable or don't configure this policy setting, any user can disconnect the PC from the network or can connect the PC to other available networks without signing into Windows.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\DontDisplayNetworkSelectionUI<br />**OS**: WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Enable RPC Endpoint Mapper Client Authentication<br /><sub>(CCE-37346-4)</sub> |**Description**: This policy setting controls whether RPC clients authenticate with the Endpoint Mapper Service when the call they are making contains authentication information. The Endpoint Mapper Service on computers running Windows NT4 (all service packs) cannot process authentication information supplied in this manner. If you disable this policy setting, RPC clients will not authenticate to the Endpoint Mapper Service, but they will be able to communicate with the Endpoint Mapper Service on Windows NT4 Server. If you enable this policy setting, RPC clients will authenticate to the Endpoint Mapper Service for calls that contain authentication information. Clients making such calls will not be able to communicate with the Windows NT4 Server Endpoint Mapper Service. If you do not configure this policy setting, it remains disabled. RPC clients will not authenticate to the Endpoint Mapper Service, but they will be able to communicate with the Windows NT4 Server Endpoint Mapper Service. Note: This policy will not be applied until the system is rebooted.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Rpc\EnableAuthEpResolution<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Enable Windows NTP Client<br /><sub>(CCE-37843-0)</sub> |**Description**: This policy setting specifies whether the Windows NTP Client is enabled. Enabling the Windows NTP Client allows your computer to synchronize its computer clock with other NTP servers. You might want to disable this service if you decide to use a third-party time provider. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient\Enabled<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Ensure 'Continue experiences on this device' is set to 'Disabled'<br /><sub>(AZ-WIN-00170)</sub> |**Description**: This policy setting determines whether the Windows device is allowed to participate in cross-device experiences (continue experiences). The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\EnableCdp<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
-|Include command line in process creation events<br /><sub>(CCE-36925-6)</sub> |**Description**: This policy setting determines what information is logged in security audit events when a new process has been created. This setting only applies when the Audit Process Creation policy is enabled. If you enable this policy setting the command line information for every process will be logged in plain text in the security event log as part of the Audit Process Creation event 4688, "a new process has been created," on the workstations and servers on which this policy setting is applied. If you disable or do not configure this policy setting, the process's command line information will not be included in Audit Process Creation events. Default: Not configured Note: When this policy setting is enabled, any user with access to read the security events will be able to read the command line arguments for any successfully created process. Command line arguments can contain sensitive or private information such as passwords or user data.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit\ProcessCreationIncludeCmdLine_Enabled<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Turn off app notifications on the lock screen<br /><sub>(CCE-35893-7)</sub> |**Description**: This policy setting allows you to prevent app notifications from appearing on the lock screen. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\DisableLockScreenAppNotifications<br />**OS**: WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Turn off downloading of print drivers over HTTP<br /><sub>(CCE-36625-2)</sub> |**Description**: This policy setting controls whether the computer can download print driver packages over HTTP. To set up HTTP printing, printer drivers that are not available in the standard operating system installation might need to be downloaded over HTTP. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Printers\DisableWebPnPDownload<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
-|Turn off Internet Connection Wizard if URL connection is referring to Microsoft.com<br /><sub>(CCE-37163-3)</sub> |**Description**: This policy setting specifies whether the Internet Connection Wizard can connect to Microsoft to download a list of Internet Service Providers (ISPs). The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Internet Connection Wizard\ExitOnMSICW<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Turn on convenience PIN sign-in<br /><sub>(CCE-37528-7)</sub> |**Description**: This policy setting allows you to control whether a domain user can sign in using a convenience PIN. In Windows 10, convenience PIN was replaced with Passport, which has stronger security properties. To configure Passport for domain users, use the policies under Computer configuration\\Administrative Templates\\Windows Components\\Microsoft Passport for Work. **Note:** The user's domain password will be cached in the system vault when using this feature. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\AllowDomainPINLogon<br />**OS**: WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Block user from showing account details on sign-in<br /><sub>(AZ-WIN-00138)</sub> |**Description**: This policy prevents the user from showing account details (email address or user name) on the sign-in screen. If you enable this policy setting, the user cannot choose to show account details on the sign-in screen. If you disable or do not configure this policy setting, the user may choose to show account details on the sign-in screen.<br />**Key Path**: Software\Policies\Microsoft\Windows\System\BlockUserFromShowingAccountDetailsOnSignin<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Boot-Start Driver Initialization Policy<br /><sub>(CCE-37912-3)</sub> |**Description**: This policy setting allows you to specify which boot-start drivers are initialized based on a classification determined by an Early Launch Antimalware boot-start driver. The Early Launch Antimalware boot-start driver can return the following classifications for each boot-start driver: - Good: The driver has been signed and has not been tampered with. - Bad: The driver has been identified as malware. It is recommended that you do not allow known bad drivers to be initialized. - Bad, but required for boot: The driver has been identified as malware, but the computer cannot successfully boot without loading this driver. - Unknown: This driver has not been attested to by your malware detection application and has not been classified by the Early Launch Antimalware boot-start driver. If you enable this policy setting you will be able to choose which boot-start drivers to initialize the next time the computer is started. If you disable or do not configure this policy setting, the boot start drivers determined to be Good, Unknown or Bad but Boot Critical are initialized and the initialization of drivers determined to be Bad is skipped. If your malware detection application does not include an Early Launch Antimalware boot-start driver or if your Early Launch Antimalware boot-start driver has been disabled, this setting has no effect and all boot-start drivers are initialized.<br />**Key Path**: SYSTEM\CurrentControlSet\Policies\EarlyLaunch\DriverLoadPolicy<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 3<br /><sub>(Registry)</sub> |Warning |
+|Configure Offer Remote Assistance<br /><sub>(CCE-36388-7)</sub> |**Description**: This policy setting allows you to turn on or turn off Offer (Unsolicited) Remote Assistance on this computer. Help desk and support personnel will not be able to proactively offer assistance, although they can still respond to user assistance requests. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fAllowUnsolicited<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Configure Solicited Remote Assistance<br /><sub>(CCE-37281-3)</sub> |**Description**: This policy setting allows you to turn on or turn off Solicited (Ask for) Remote Assistance on this computer. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fAllowToGetHelp<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Do not display network selection UI<br /><sub>(CCE-38353-9)</sub> |**Description**: This policy setting allows you to control whether anyone can interact with available networks UI on the logon screen. If you enable this policy setting, the PC's network connectivity state cannot be changed without signing into Windows. If you disable or don't configure this policy setting, any user can disconnect the PC from the network or can connect the PC to other available networks without signing into Windows.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\DontDisplayNetworkSelectionUI<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Enable RPC Endpoint Mapper Client Authentication<br /><sub>(CCE-37346-4)</sub> |**Description**: This policy setting controls whether RPC clients authenticate with the Endpoint Mapper Service when the call they are making contains authentication information. The Endpoint Mapper Service on computers running Windows NT4 (all service packs) cannot process authentication information supplied in this manner. If you disable this policy setting, RPC clients will not authenticate to the Endpoint Mapper Service, but they will be able to communicate with the Endpoint Mapper Service on Windows NT4 Server. If you enable this policy setting, RPC clients will authenticate to the Endpoint Mapper Service for calls that contain authentication information. Clients making such calls will not be able to communicate with the Windows NT4 Server Endpoint Mapper Service. If you do not configure this policy setting, it remains disabled. RPC clients will not authenticate to the Endpoint Mapper Service, but they will be able to communicate with the Windows NT4 Server Endpoint Mapper Service. Note: This policy will not be applied until the system is rebooted.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Rpc\EnableAuthEpResolution<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Enable Windows NTP Client<br /><sub>(CCE-37843-0)</sub> |**Description**: This policy setting specifies whether the Windows NTP Client is enabled. Enabling the Windows NTP Client allows your computer to synchronize its computer clock with other NTP servers. You might want to disable this service if you decide to use a third-party time provider. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient\Enabled<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Ensure 'Continue experiences on this device' is set to 'Disabled'<br /><sub>(AZ-WIN-00170)</sub> |**Description**: This policy setting determines whether the Windows device is allowed to participate in cross-device experiences (continue experiences). The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\EnableCdp<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Include command line in process creation events<br /><sub>(CCE-36925-6)</sub> |**Description**: This policy setting determines what information is logged in security audit events when a new process has been created. This setting only applies when the Audit Process Creation policy is enabled. If you enable this policy setting the command line information for every process will be logged in plain text in the security event log as part of the Audit Process Creation event 4688, "a new process has been created," on the workstations and servers on which this policy setting is applied. If you disable or do not configure this policy setting, the process's command line information will not be included in Audit Process Creation events. Default: Not configured Note: When this policy setting is enabled, any user with access to read the security events will be able to read the command line arguments for any successfully created process. Command line arguments can contain sensitive or private information such as passwords or user data.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit\ProcessCreationIncludeCmdLine_Enabled<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Turn off app notifications on the lock screen<br /><sub>(CCE-35893-7)</sub> |**Description**: This policy setting allows you to prevent app notifications from appearing on the lock screen. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\DisableLockScreenAppNotifications<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Turn off downloading of print drivers over HTTP<br /><sub>(CCE-36625-2)</sub> |**Description**: This policy setting controls whether the computer can download print driver packages over HTTP. To set up HTTP printing, printer drivers that are not available in the standard operating system installation might need to be downloaded over HTTP. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Printers\DisableWebPnPDownload<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
+|Turn off Internet Connection Wizard if URL connection is referring to Microsoft.com<br /><sub>(CCE-37163-3)</sub> |**Description**: This policy setting specifies whether the Internet Connection Wizard can connect to Microsoft to download a list of Internet Service Providers (ISPs). The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Internet Connection Wizard\ExitOnMSICW<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Turn on convenience PIN sign-in<br /><sub>(CCE-37528-7)</sub> |**Description**: This policy setting allows you to control whether a domain user can sign in using a convenience PIN. In Windows 10, convenience PIN was replaced with Passport, which has stronger security properties. To configure Passport for domain users, use the policies under Computer configuration\\Administrative Templates\\Windows Components\\Microsoft Passport for Work. **Note:** The user's domain password will be cached in the system vault when using this feature. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\AllowDomainPINLogon<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
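Several rows above are satisfied either when the value is absent or when it equals a specific number ("Doesn't exist or \= 3" for Boot-Start Driver Initialization Policy, for example). A minimal sketch of that evaluation logic, assuming the same local `winreg` access as the earlier example:

```python
# Sketch: treat a missing key or value as compliant, matching the
# "Doesn't exist or = X" wording used in the Expected value column.
import winreg

def absent_or_equals(key_path: str, value_name: str, expected: int) -> bool:
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
            data, _ = winreg.QueryValueEx(key, value_name)
    except FileNotFoundError:
        return True            # value (or key) not present: compliant
    return data == expected    # present: must match the expected data

# Boot-Start Driver Initialization Policy: "Doesn't exist or = 3"
print(absent_or_equals(r"SYSTEM\CurrentControlSet\Policies\EarlyLaunch",
                       "DriverLoadPolicy", 3))
```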
## Security Options - Accounts

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Accounts: Guest account status<br /><sub>(CCE-37432-2)</sub> |**Description**: This policy setting determines whether the Guest account is enabled or disabled. The Guest account allows unauthenticated network users to gain access to the system. The recommended state for this setting is: `Disabled`. **Note:** This setting will have no impact when applied to the domain controller organizational unit via group policy because domain controllers have no local account database. It can be configured at the domain level via group policy, similar to account lockout and password policy settings.<br />**Key Path**: [System Access]EnableGuestAccount<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Policy)</sub> |Critical |
-|Accounts: Limit local account use of blank passwords to console logon only<br /><sub>(CCE-37615-2)</sub> |**Description**: This policy setting determines whether local accounts that are not password protected can be used to log on from locations other than the physical computer console. If you enable this policy setting, local accounts that have blank passwords will not be able to log on to the network from remote client computers. Such accounts will only be able to log on at the keyboard of the computer. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\LimitBlankPasswordUse<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Accounts: Guest account status<br /><sub>(CCE-37432-2)</sub> |**Description**: This policy setting determines whether the Guest account is enabled or disabled. The Guest account allows unauthenticated network users to gain access to the system. The recommended state for this setting is: `Disabled`. **Note:** This setting will have no impact when applied to the domain controller organizational unit via group policy because domain controllers have no local account database. It can be configured at the domain level via group policy, similar to account lockout and password policy settings.<br />**Key Path**: [System Access]EnableGuestAccount<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Policy)</sub> |Critical |
+|Accounts: Limit local account use of blank passwords to console logon only<br /><sub>(CCE-37615-2)</sub> |**Description**: This policy setting determines whether local accounts that are not password protected can be used to log on from locations other than the physical computer console. If you enable this policy setting, local accounts that have blank passwords will not be able to log on to the network from remote client computers. Such accounts will only be able to log on at the keyboard of the computer. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\LimitBlankPasswordUse<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
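The Guest-account row is marked `(Policy)` rather than `(Registry)`: its key path `[System Access]EnableGuestAccount` refers to the local security policy, so it cannot be read with `winreg`. A rough sketch of one way to inspect it, assuming an elevated session, that `secedit /export /cfg` is available (it ships with Windows), and that the export is UTF-16 encoded as recent Windows versions produce:

```python
# Sketch: export the local security policy with secedit and read a
# [System Access] setting from the resulting INF-style file.
import configparser
import os
import subprocess
import tempfile

def read_system_access_setting(name):
    with tempfile.TemporaryDirectory() as tmp:
        cfg = os.path.join(tmp, "secpol.inf")
        subprocess.run(["secedit", "/export", "/cfg", cfg],
                       check=True, capture_output=True)
        parser = configparser.ConfigParser(interpolation=None)
        parser.read(cfg, encoding="utf-16")   # assumption: UTF-16 export
        return parser.get("System Access", name, fallback=None)

# Expected value for the Guest account row above is 0 (disabled).
print("EnableGuestAccount =", read_system_access_setting("EnableGuestAccount"))
```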
## Security Options - Audit

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Audit: Force audit policy subcategory settings (Windows Vista or later) to override audit policy category settings<br /><sub>(CCE-37850-5)</sub> |**Description**: This policy setting allows administrators to enable the more precise auditing capabilities present in Windows Vista. The Audit Policy settings available in Windows Server 2003 Active Directory do not yet contain settings for managing the new auditing subcategories. To properly apply the auditing policies prescribed in this baseline, the Audit: Force audit policy subcategory settings (Windows Vista or later) to override audit policy category settings setting needs to be configured to Enabled.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\SCENoApplyLegacyAuditPolicy<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|Audit: Shut down system immediately if unable to log security audits<br /><sub>(CCE-35907-5)</sub> |**Description**: This policy setting determines whether the system shuts down if it is unable to log Security events. It is a requirement for Trusted Computer System Evaluation Criteria (TCSEC)-C2 and Common Criteria certification to prevent auditable events from occurring if the audit system is unable to log them. Microsoft has chosen to meet this requirement by halting the system and displaying a stop message if the auditing system experiences a failure. When this policy setting is enabled, the system will be shut down if a security audit cannot be logged for any reason. If the Audit: Shut down system immediately if unable to log security audits setting is enabled, unplanned system failures can occur. The administrative burden can be significant, especially if you also configure the Retention method for the Security log to Do not overwrite events (clear log manually). This configuration causes a repudiation threat (a backup operator could deny that they backed up or restored data) to become a denial of service (DoS) vulnerability, because a server could be forced to shut down if it is overwhelmed with logon events and other security events that are written to the Security log. Also, because the shutdown is not graceful, it is possible that irreparable damage to the operating system, applications, or data could result. Although the NTFS file system guarantees its integrity when an ungraceful computer shutdown occurs, it cannot guarantee that every data file for every application will still be in a usable form when the computer restarts. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\CrashOnAuditFail<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Audit: Force audit policy subcategory settings (Windows Vista or later) to override audit policy category settings<br /><sub>(CCE-37850-5)</sub> |**Description**: This policy setting allows administrators to enable the more precise auditing capabilities present in Windows Vista. The Audit Policy settings available in Windows Server 2003 Active Directory do not yet contain settings for managing the new auditing subcategories. To properly apply the auditing policies prescribed in this baseline, the Audit: Force audit policy subcategory settings (Windows Vista or later) to override audit policy category settings setting needs to be configured to Enabled.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\SCENoApplyLegacyAuditPolicy<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Audit: Shut down system immediately if unable to log security audits<br /><sub>(CCE-35907-5)</sub> |**Description**: This policy setting determines whether the system shuts down if it is unable to log Security events. It is a requirement for Trusted Computer System Evaluation Criteria (TCSEC)-C2 and Common Criteria certification to prevent auditable events from occurring if the audit system is unable to log them. Microsoft has chosen to meet this requirement by halting the system and displaying a stop message if the auditing system experiences a failure. When this policy setting is enabled, the system will be shut down if a security audit cannot be logged for any reason. If the Audit: Shut down system immediately if unable to log security audits setting is enabled, unplanned system failures can occur. The administrative burden can be significant, especially if you also configure the Retention method for the Security log to Do not overwrite events (clear log manually). This configuration causes a repudiation threat (a backup operator could deny that they backed up or restored data) to become a denial of service (DoS) vulnerability, because a server could be forced to shut down if it is overwhelmed with logon events and other security events that are written to the Security log. Also, because the shutdown is not graceful, it is possible that irreparable damage to the operating system, applications, or data could result. Although the NTFS file system guarantees its integrity when an ungraceful computer shutdown occurs, it cannot guarantee that every data file for every application will still be in a usable form when the computer restarts. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\CrashOnAuditFail<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
## Security Options - Devices

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Devices: Allow undock without having to log on<br /><sub>(AZ-WIN-00120)</sub> |**Description**: This policy setting determines whether a portable computer can be undocked if the user does not log on to the system. Enable this policy setting to eliminate a Logon requirement and allow use of an external hardware eject button to undock the computer. If you disable this policy setting, a user must log on and have been assigned the Remove computer from docking station user right to undock the computer.<br />**Key Path**: Software\Microsoft\Windows\CurrentVersion\Policies\System\UndockWithoutLogon<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Informational |
-|Devices: Allowed to format and eject removable media<br /><sub>(CCE-37701-0)</sub> |**Description**: This policy setting determines who is allowed to format and eject removable media. You can use this policy setting to prevent unauthorized users from removing data on one computer to access it on another computer on which they have local administrator privileges.<br />**Key Path**: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\AllocateDASD<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
-|Devices: Prevent users from installing printer drivers<br /><sub>(CCE-37942-0)</sub> |**Description**: For a computer to print to a shared printer, the driver for that shared printer must be installed on the local computer. This security setting determines who is allowed to install a printer driver as part of connecting to a shared printer. The recommended state for this setting is: `Enabled`. **Note:** This setting does not affect the ability to add a local printer. This setting does not affect Administrators.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Print\Providers\LanMan Print Services\Servers\AddPrinterDrivers<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
+|Devices: Allow undock without having to log on<br /><sub>(AZ-WIN-00120)</sub> |**Description**: This policy setting determines whether a portable computer can be undocked if the user does not log on to the system. Enable this policy setting to eliminate a Logon requirement and allow use of an external hardware eject button to undock the computer. If you disable this policy setting, a user must log on and have been assigned the Remove computer from docking station user right to undock the computer.<br />**Key Path**: Software\Microsoft\Windows\CurrentVersion\Policies\System\UndockWithoutLogon<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Informational |
+|Devices: Allowed to format and eject removable media<br /><sub>(CCE-37701-0)</sub> |**Description**: This policy setting determines who is allowed to format and eject removable media. You can use this policy setting to prevent unauthorized users from removing data on one computer to access it on another computer on which they have local administrator privileges.<br />**Key Path**: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\AllocateDASD<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Devices: Prevent users from installing printer drivers<br /><sub>(CCE-37942-0)</sub> |**Description**: For a computer to print to a shared printer, the driver for that shared printer must be installed on the local computer. This security setting determines who is allowed to install a printer driver as part of connecting to a shared printer. The recommended state for this setting is: `Enabled`. **Note:** This setting does not affect the ability to add a local printer. This setting does not affect Administrators.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Print\Providers\LanMan Print Services\Servers\AddPrinterDrivers<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
## Security Options - Interactive Logon

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Interactive logon: Do not display last user name<br /><sub>(CCE-36056-0)</sub> |**Description**: This policy setting determines whether the account name of the last user to log on to the client computers in your organization will be displayed in each computer's respective Windows logon screen. Enable this policy setting to prevent intruders from collecting account names visually from the screens of desktop or laptop computers in your organization. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\DontDisplayLastUserName<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Interactive logon: Do not require CTRL+ALT+DEL<br /><sub>(CCE-37637-6)</sub> |**Description**: This policy setting determines whether users must press CTRL+ALT+DEL before they log on. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\DisableCAD<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Interactive logon: Do not display last user name<br /><sub>(CCE-36056-0)</sub> |**Description**: This policy setting determines whether the account name of the last user to log on to the client computers in your organization will be displayed in each computer's respective Windows logon screen. Enable this policy setting to prevent intruders from collecting account names visually from the screens of desktop or laptop computers in your organization. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\DontDisplayLastUserName<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Interactive logon: Do not require CTRL+ALT+DEL<br /><sub>(CCE-37637-6)</sub> |**Description**: This policy setting determines whether users must press CTRL+ALT+DEL before they log on. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\DisableCAD<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
## Security Options - Microsoft Network Client

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Microsoft network client: Digitally sign communications (always)<br /><sub>(CCE-36325-9)</sub> |**Description**: <p><span>This policy setting determines whether packet signing is required by the SMB client component. **Note:** When Windows Vista-based computers have this policy setting enabled and they connect to file or print shares on remote servers, it is important that the setting is synchronized with its companion setting, **Microsoft network server: Digitally sign communications (always)**, on those servers. For more information about these settings, see the &quot;Microsoft network client and server: Digitally sign communications (four related settings)&quot; section in Chapter 5 of the Threats and Countermeasures guide. The recommended state for this setting is: 'Enabled'.</span></p><br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters\RequireSecuritySignature<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Microsoft network client: Digitally sign communications (if server agrees)<br /><sub>(CCE-36269-9)</sub> |**Description**: This policy setting determines whether the SMB client will attempt to negotiate SMB packet signing. **Note:** Enabling this policy setting on SMB clients on your network makes them fully effective for packet signing with all clients and servers in your environment. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters\EnableSecuritySignature<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|Microsoft network client: Send unencrypted password to third-party SMB servers<br /><sub>(CCE-37863-8)</sub> |**Description**: <p><span>This policy setting determines whether the SMB redirector will send plaintext passwords during authentication to third-party SMB servers that do not support password encryption. It is recommended that you disable this policy setting unless there is a strong business case to enable it. If this policy setting is enabled, unencrypted passwords will be allowed across the network. The recommended state for this setting is: 'Disabled'.</span></p><br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters\EnablePlainTextPassword<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
-|Microsoft network server: Amount of idle time required before suspending session<br /><sub>(CCE-38046-9)</sub> |**Description**: This policy setting allows you to specify the amount of continuous idle time that must pass in an SMB session before the session is suspended because of inactivity. Administrators can use this policy setting to control when a computer suspends an inactive SMB session. If client activity resumes, the session is automatically reestablished. A value of 0 appears to allow sessions to persist indefinitely. The maximum value is 99999, which is over 69 days; in effect, this value disables the setting. The recommended state for this setting is: `15 or fewer minute(s), but not 0`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\AutoDisconnect<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-15<br /><sub>(Registry)</sub> |Critical |
-|Microsoft network server: Digitally sign communications (always)<br /><sub>(CCE-37864-6)</sub> |**Description**: This policy setting determines whether packet signing is required by the SMB server component. Enable this policy setting in a mixed environment to prevent downstream clients from using the workstation as a network server. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\RequireSecuritySignature<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Microsoft network server: Digitally sign communications (if client agrees)<br /><sub>(CCE-35988-5)</sub> |**Description**: This policy setting determines whether the SMB server will negotiate SMB packet signing with clients that request it. If no signing request comes from the client, a connection will be allowed without a signature if the **Microsoft network server: Digitally sign communications (always)** setting is not enabled. **Note:** Enable this policy setting on SMB clients on your network to make them fully effective for packet signing with all clients and servers in your environment. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\EnableSecuritySignature<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Microsoft network server: Disconnect clients when logon hours expire<br /><sub>(CCE-37972-7)</sub> |**Description**: This security setting determines whether to disconnect users who are connected to the local computer outside their user account's valid logon hours. This setting affects the Server Message Block (SMB) component. If you enable this policy setting you should also enable **Network security: Force logoff when logon hours expire** (Rule 2.3.11.6). If your organization configures logon hours for users, this policy setting is necessary to ensure they are effective. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\EnableForcedLogoff<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Microsoft network client: Digitally sign communications (always)<br /><sub>(CCE-36325-9)</sub> |**Description**: <p><span>This policy setting determines whether packet signing is required by the SMB client component. **Note:** When Windows Vista-based computers have this policy setting enabled and they connect to file or print shares on remote servers, it is important that the setting is synchronized with its companion setting, **Microsoft network server: Digitally sign communications (always)**, on those servers. For more information about these settings, see the &quot;Microsoft network client and server: Digitally sign communications (four related settings)&quot; section in Chapter 5 of the Threats and Countermeasures guide. The recommended state for this setting is: 'Enabled'.</span></p><br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters\RequireSecuritySignature<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Microsoft network client: Digitally sign communications (if server agrees)<br /><sub>(CCE-36269-9)</sub> |**Description**: This policy setting determines whether the SMB client will attempt to negotiate SMB packet signing. **Note:** Enabling this policy setting on SMB clients on your network makes them fully effective for packet signing with all clients and servers in your environment. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters\EnableSecuritySignature<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Microsoft network client: Send unencrypted password to third-party SMB servers<br /><sub>(CCE-37863-8)</sub> |**Description**: <p><span>This policy setting determines whether the SMB redirector will send plaintext passwords during authentication to third-party SMB servers that do not support password encryption. It is recommended that you disable this policy setting unless there is a strong business case to enable it. If this policy setting is enabled, unencrypted passwords will be allowed across the network. The recommended state for this setting is: 'Disabled'.</span></p><br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters\EnablePlainTextPassword<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Microsoft network server: Amount of idle time required before suspending session<br /><sub>(CCE-38046-9)</sub> |**Description**: This policy setting allows you to specify the amount of continuous idle time that must pass in an SMB session before the session is suspended because of inactivity. Administrators can use this policy setting to control when a computer suspends an inactive SMB session. If client activity resumes, the session is automatically reestablished. A value of 0 appears to allow sessions to persist indefinitely. The maximum value is 99999, which is over 69 days; in effect, this value disables the setting. The recommended state for this setting is: `15 or fewer minute(s), but not 0`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\AutoDisconnect<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-15<br /><sub>(Registry)</sub> |Critical |
+|Microsoft network server: Digitally sign communications (always)<br /><sub>(CCE-37864-6)</sub> |**Description**: This policy setting determines whether packet signing is required by the SMB server component. Enable this policy setting in a mixed environment to prevent downstream clients from using the workstation as a network server. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\RequireSecuritySignature<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Microsoft network server: Digitally sign communications (if client agrees)<br /><sub>(CCE-35988-5)</sub> |**Description**: This policy setting determines whether the SMB server will negotiate SMB packet signing with clients that request it. If no signing request comes from the client, a connection will be allowed without a signature if the **Microsoft network server: Digitally sign communications (always)** setting is not enabled. **Note:** Enable this policy setting on SMB clients on your network to make them fully effective for packet signing with all clients and servers in your environment. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\EnableSecuritySignature<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Microsoft network server: Disconnect clients when logon hours expire<br /><sub>(CCE-37972-7)</sub> |**Description**: This security setting determines whether to disconnect users who are connected to the local computer outside their user account's valid logon hours. This setting affects the Server Message Block (SMB) component. If you enable this policy setting you should also enable **Network security: Force logoff when logon hours expire** (Rule 2.3.11.6). If your organization configures logon hours for users, this policy setting is necessary to ensure they are effective. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\EnableForcedLogoff<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
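The SMB signing rows above all reduce to DWORD values under the workstation or server service parameters, so they can be audited together; the only per-row difference is whether a missing value counts as compliant (the "Doesn't exist or \= 1" rows) or not. A batch-audit sketch under the same local-registry assumption as before:

```python
# Sketch: audit the four SMB signing values listed above in one pass.
# The fourth tuple field records whether "Doesn't exist" is acceptable.
import winreg

WKS = r"SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters"
SRV = r"SYSTEM\CurrentControlSet\Services\LanManServer\Parameters"

CHECKS = [
    # (key path, value name, expected data, missing value counts as pass?)
    (WKS, "RequireSecuritySignature", 1, False),  # client: = 1
    (WKS, "EnableSecuritySignature", 1, True),    # client: doesn't exist or = 1
    (SRV, "RequireSecuritySignature", 1, False),  # server: = 1
    (SRV, "EnableSecuritySignature", 1, False),   # server: = 1
]

def read_dword(key_path, value_name):
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
            return winreg.QueryValueEx(key, value_name)[0]
    except FileNotFoundError:
        return None

for key_path, name, expected, missing_ok in CHECKS:
    data = read_dword(key_path, name)
    compliant = (data == expected) or (data is None and missing_ok)
    print(f"{'PASS' if compliant else 'FAIL'}: {key_path}\\{name} = {data!r}")
```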
## Security Options - Microsoft Network Server

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Disable SMB v1 server<br /><sub>(AZ-WIN-00175)</sub> |**Description**: Disabling this setting disables server-side processing of the SMBv1 protocol. (Recommended.) Enabling this setting enables server-side processing of the SMBv1 protocol. (Default.) Changes to this setting require a reboot to take effect. For more information, see https://support.microsoft.com/kb/2696547<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\SMB1<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Disable SMB v1 server<br /><sub>(AZ-WIN-00175)</sub> |**Description**: Disabling this setting disables server-side processing of the SMBv1 protocol. (Recommended.) Enabling this setting enables server-side processing of the SMBv1 protocol. (Default.) Changes to this setting require a reboot to take effect. For more information, see https://support.microsoft.com/kb/2696547<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\SMB1<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
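
For expected values written as `Doesn't exist or = 0`, the absence of the registry value counts as compliant, so a check must not treat a missing value as a failure. A minimal sketch of that pattern for the `Disable SMB v1 server` row, again assuming a local check with Python's `winreg` module:

```python
# Illustrative sketch only. Per the table, a missing SMB1 value is compliant,
# and changes to this setting require a reboot to take effect.
import winreg

def smb1_server_disabled():
    path = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            value, _ = winreg.QueryValueEx(key, "SMB1")
    except FileNotFoundError:
        return True   # value not present: treated as compliant
    return value == 0  # explicitly 0: server-side SMBv1 processing disabled

print("Disable SMB v1 server:", "compliant" if smb1_server_disabled() else "non-compliant")
```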

## Security Options - Network Access

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Network access: Do not allow anonymous enumeration of SAM accounts<br /><sub>(CCE-36316-8)</sub> |**Description**: This policy setting controls the ability of anonymous users to enumerate the accounts in the Security Accounts Manager (SAM). If you enable this policy setting, users with anonymous connections will not be able to enumerate domain account user names on the systems in your environment. This policy setting also allows additional restrictions on anonymous connections. The recommended state for this setting is: `Enabled`. **Note:** This policy has no effect on domain controllers.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\RestrictAnonymousSAM<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|Network access: Do not allow anonymous enumeration of SAM accounts and shares<br /><sub>(CCE-36077-6)</sub> |**Description**: This policy setting controls the ability of anonymous users to enumerate SAM accounts as well as shares. If you enable this policy setting, anonymous users will not be able to enumerate domain account user names and network share names on the systems in your environment. The recommended state for this setting is: `Enabled`. **Note:** This policy has no effect on domain controllers.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\RestrictAnonymous<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Network access: Let Everyone permissions apply to anonymous users<br /><sub>(CCE-36148-5)</sub> |**Description**: This policy setting determines what additional permissions are assigned for anonymous connections to the computer. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\EveryoneIncludesAnonymous<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
-|Network access: Remotely accessible registry paths<br /><sub>(CCE-37194-8)</sub> |**Description**: This policy setting determines which registry paths will be accessible after referencing the WinReg key to determine access permissions to the paths. Note: This setting does not exist in Windows XP. There was a setting with that name in Windows XP, but it is called "Network access: Remotely accessible registry paths and subpaths" in Windows Server 2003, Windows Vista, and Windows Server 2008. Note: When you configure this setting, you specify a list of one or more objects. The delimiter used when entering the list is a line feed or carriage return, that is, type the first object on the list, press the Enter button, type the next object, press Enter again, etc. The setting value is stored as a comma-delimited list in group policy security templates. It is also rendered as a comma-delimited list in Group Policy Editor's display pane and the Resultant Set of Policy console. It is recorded in the registry as a line-feed delimited list in a REG_MULTI_SZ value.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\SecurePipeServers\Winreg\AllowedExactPaths\Machine<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= System\CurrentControlSet\Control\ProductOptions\0System\CurrentControlSet\Control\Server Applications\0Software\Microsoft\Windows NT\CurrentVersion\0\0<br /><sub>(Registry)</sub> |Critical |
-|Network access: Remotely accessible registry paths and sub-paths<br /><sub>(CCE-36347-3)</sub> |**Description**: This policy setting determines which registry paths and sub-paths will be accessible when an application or process references the WinReg key to determine access permissions. Note: In Windows XP this setting is called "Network access: Remotely accessible registry paths," the setting with that same name in Windows Vista, Windows Server 2008, and Windows Server 2003 does not exist in Windows XP. Note: When you configure this setting, you specify a list of one or more objects. The delimiter used when entering the list is a line feed or carriage return, that is, type the first object on the list, press the Enter button, type the next object, press Enter again, etc. The setting value is stored as a comma-delimited list in group policy security templates. It is also rendered as a comma-delimited list in Group Policy Editor's display pane and the Resultant Set of Policy console. It is recorded in the registry as a line-feed delimited list in a REG_MULTI_SZ value.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\SecurePipeServers\Winreg\AllowedPaths\Machine<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= System\CurrentControlSet\Control\Print\Printers\0System\CurrentControlSet\Services\Eventlog\0Software\Microsoft\OLAP Server\0Software\Microsoft\Windows NT\CurrentVersion\Print\0Software\Microsoft\Windows NT\CurrentVersion\Windows\0System\CurrentControlSet\Control\ContentIndex\0System\CurrentControlSet\Control\Terminal Server\0System\CurrentControlSet\Control\Terminal Server\UserConfig\0System\CurrentControlSet\Control\Terminal Server\DefaultUserConfiguration\0Software\Microsoft\Windows NT\CurrentVersion\Perflib\0System\CurrentControlSet\Services\SysmonLog\0\0<br /><sub>(Registry)</sub> |Critical |
-|Network access: Restrict anonymous access to Named Pipes and Shares<br /><sub>(CCE-36021-4)</sub> |**Description**: When enabled, this policy setting restricts anonymous access to only those shares and pipes that are named in the `Network access: Named pipes that can be accessed anonymously` and `Network access: Shares that can be accessed anonymously` settings. This policy setting controls null session access to shares on your computers by adding `RestrictNullSessAccess` with the value `1` in the `HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanManServer\Parameters` registry key. This registry value toggles null session shares on or off to control whether the server service restricts unauthenticated clients' access to named resources. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\RestrictNullSessAccess<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|Network access: Restrict clients allowed to make remote calls to SAM<br /><sub>(AZ-WIN-00142)</sub> |**Description**: This policy setting allows you to restrict remote RPC connections to SAM. If not selected, the default security descriptor will be used. This policy is supported on at least Windows Server 2016.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\RestrictRemoteSAM<br />**OS**: WS2016<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= O:BAG:BAD:(A;;RC;;;BA)<br /><sub>(Registry)</sub> |Critical |
-|Network access: Shares that can be accessed anonymously<br /><sub>(CCE-38095-6)</sub> |**Description**: This policy setting determines which network shares can be accessed by anonymous users. The default configuration for this policy setting has little effect because all users have to be authenticated before they can access shared resources on the server. Note: It can be very dangerous to add other shares to this Group Policy setting. Any network user can access any shares that are listed, which could exposure or corrupt sensitive data. Note: When you configure this setting, you specify a list of one or more objects. The delimiter used when entering the list is a line feed or carriage return, that is, type the first object on the list, press the Enter button, type the next object, press Enter again, etc. The setting value is stored as a comma-delimited list in group policy security templates. It is also rendered as a comma-delimited list in Group Policy Editor's display pane and the Resultant Set of Policy console. It is recorded in the registry as a line-feed delimited list in a REG_MULTI_SZ value.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\NullSessionShares<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= <br /><sub>(Registry)</sub> |Critical |
-|Network access: Sharing and security model for local accounts<br /><sub>(CCE-37623-6)</sub> |**Description**: This policy setting determines how network logons that use local accounts are authenticated. The Classic option allows precise control over access to resources, including the ability to assign different types of access to different users for the same resource. The Guest only option allows you to treat all users equally. In this context, all users authenticate as Guest only to receive the same access level to a given resource. The recommended state for this setting is: `Classic - local users authenticate as themselves`. **Note:** This setting does not affect interactive logons that are performed remotely by using such services as Telnet or Remote Desktop Services (formerly called Terminal Services).<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\ForceGuest<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Network access: Do not allow anonymous enumeration of SAM accounts<br /><sub>(CCE-36316-8)</sub> |**Description**: This policy setting controls the ability of anonymous users to enumerate the accounts in the Security Accounts Manager (SAM). If you enable this policy setting, users with anonymous connections will not be able to enumerate domain account user names on the systems in your environment. This policy setting also allows additional restrictions on anonymous connections. The recommended state for this setting is: `Enabled`. **Note:** This policy has no effect on domain controllers.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\RestrictAnonymousSAM<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Network access: Do not allow anonymous enumeration of SAM accounts and shares<br /><sub>(CCE-36077-6)</sub> |**Description**: This policy setting controls the ability of anonymous users to enumerate SAM accounts as well as shares. If you enable this policy setting, anonymous users will not be able to enumerate domain account user names and network share names on the systems in your environment. The recommended state for this setting is: `Enabled`. **Note:** This policy has no effect on domain controllers.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\RestrictAnonymous<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Network access: Let Everyone permissions apply to anonymous users<br /><sub>(CCE-36148-5)</sub> |**Description**: This policy setting determines what additional permissions are assigned for anonymous connections to the computer. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\EveryoneIncludesAnonymous<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Network access: Remotely accessible registry paths<br /><sub>(CCE-37194-8)</sub> |**Description**: This policy setting determines which registry paths will be accessible after referencing the WinReg key to determine access permissions to the paths. Note: This setting does not exist in Windows XP. There was a setting with that name in Windows XP, but it is called "Network access: Remotely accessible registry paths and subpaths" in Windows Server 2003, Windows Vista, and Windows Server 2008. Note: When you configure this setting, you specify a list of one or more objects. The delimiter used when entering the list is a line feed or carriage return, that is, type the first object on the list, press the Enter button, type the next object, press Enter again, etc. The setting value is stored as a comma-delimited list in group policy security templates. It is also rendered as a comma-delimited list in Group Policy Editor's display pane and the Resultant Set of Policy console. It is recorded in the registry as a line-feed delimited list in a REG_MULTI_SZ value.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\SecurePipeServers\Winreg\AllowedExactPaths\Machine<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= System\CurrentControlSet\Control\ProductOptions\0System\CurrentControlSet\Control\Server Applications\0Software\Microsoft\Windows NT\CurrentVersion\0\0<br /><sub>(Registry)</sub> |Critical |
+|Network access: Remotely accessible registry paths and sub-paths<br /><sub>(CCE-36347-3)</sub> |**Description**: This policy setting determines which registry paths and sub-paths will be accessible when an application or process references the WinReg key to determine access permissions. Note: In Windows XP this setting is called "Network access: Remotely accessible registry paths," the setting with that same name in Windows Vista, Windows Server 2008, and Windows Server 2003 does not exist in Windows XP. Note: When you configure this setting, you specify a list of one or more objects. The delimiter used when entering the list is a line feed or carriage return, that is, type the first object on the list, press the Enter button, type the next object, press Enter again, etc. The setting value is stored as a comma-delimited list in group policy security templates. It is also rendered as a comma-delimited list in Group Policy Editor's display pane and the Resultant Set of Policy console. It is recorded in the registry as a line-feed delimited list in a REG_MULTI_SZ value.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\SecurePipeServers\Winreg\AllowedPaths\Machine<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= System\CurrentControlSet\Control\Print\Printers\0System\CurrentControlSet\Services\Eventlog\0Software\Microsoft\OLAP Server\0Software\Microsoft\Windows NT\CurrentVersion\Print\0Software\Microsoft\Windows NT\CurrentVersion\Windows\0System\CurrentControlSet\Control\ContentIndex\0System\CurrentControlSet\Control\Terminal Server\0System\CurrentControlSet\Control\Terminal Server\UserConfig\0System\CurrentControlSet\Control\Terminal Server\DefaultUserConfiguration\0Software\Microsoft\Windows NT\CurrentVersion\Perflib\0System\CurrentControlSet\Services\SysmonLog\0\0<br /><sub>(Registry)</sub> |Critical |
+|Network access: Restrict anonymous access to Named Pipes and Shares<br /><sub>(CCE-36021-4)</sub> |**Description**: When enabled, this policy setting restricts anonymous access to only those shares and pipes that are named in the `Network access: Named pipes that can be accessed anonymously` and `Network access: Shares that can be accessed anonymously` settings. This policy setting controls null session access to shares on your computers by adding `RestrictNullSessAccess` with the value `1` in the `HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanManServer\Parameters` registry key. This registry value toggles null session shares on or off to control whether the server service restricts unauthenticated clients' access to named resources. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\RestrictNullSessAccess<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Network access: Restrict clients allowed to make remote calls to SAM<br /><sub>(AZ-WIN-00142)</sub> |**Description**: This policy setting allows you to restrict remote RPC connections to SAM. If not selected, the default security descriptor will be used. This policy is supported on at least Windows Server 2016.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\RestrictRemoteSAM<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= O:BAG:BAD:(A;;RC;;;BA)<br /><sub>(Registry)</sub> |Critical |
+|Network access: Shares that can be accessed anonymously<br /><sub>(CCE-38095-6)</sub> |**Description**: This policy setting determines which network shares can be accessed by anonymous users. The default configuration for this policy setting has little effect because all users have to be authenticated before they can access shared resources on the server. Note: It can be very dangerous to add other shares to this Group Policy setting. Any network user can access any shares that are listed, which could expose or corrupt sensitive data. Note: When you configure this setting, you specify a list of one or more objects. The delimiter used when entering the list is a line feed or carriage return, that is, type the first object on the list, press the Enter button, type the next object, press Enter again, etc. The setting value is stored as a comma-delimited list in group policy security templates. It is also rendered as a comma-delimited list in Group Policy Editor's display pane and the Resultant Set of Policy console. It is recorded in the registry as a line-feed delimited list in a REG_MULTI_SZ value.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\NullSessionShares<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= <br /><sub>(Registry)</sub> |Critical |
+|Network access: Sharing and security model for local accounts<br /><sub>(CCE-37623-6)</sub> |**Description**: This policy setting determines how network logons that use local accounts are authenticated. The Classic option allows precise control over access to resources, including the ability to assign different types of access to different users for the same resource. The Guest only option allows you to treat all users equally. In this context, all users authenticate as Guest only to receive the same access level to a given resource. The recommended state for this setting is: `Classic - local users authenticate as themselves`. **Note:** This setting does not affect interactive logons that are performed remotely by using such services as Telnet or Remote Desktop Services (formerly called Terminal Services).<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\ForceGuest<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
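
The two *Remotely accessible registry paths* rows store their data as a `REG_MULTI_SZ` list; the `\0` separators shown in the expected-value column are the flattened form of that multi-string. The sketch below is an illustration only, assuming a local check with Python's `winreg` module; the expected set is copied from the `AllowedExactPaths` row above, and the function name `remotely_accessible_paths_ok` is hypothetical.

```python
# Illustrative sketch only. winreg returns REG_MULTI_SZ data as a Python list
# of strings, so the comparison is against the expected path list rather than
# a single value.
import winreg

EXPECTED_EXACT_PATHS = {
    r"System\CurrentControlSet\Control\ProductOptions",
    r"System\CurrentControlSet\Control\Server Applications",
    r"Software\Microsoft\Windows NT\CurrentVersion",
}

def remotely_accessible_paths_ok():
    path = r"SYSTEM\CurrentControlSet\Control\SecurePipeServers\Winreg\AllowedExactPaths"
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            value, value_type = winreg.QueryValueEx(key, "Machine")
    except FileNotFoundError:
        return True  # "Doesn't exist or =" the expected list
    if value_type != winreg.REG_MULTI_SZ:
        return False
    # Registry path casing is not significant, so compare case-insensitively.
    return {p.lower() for p in value} == {p.lower() for p in EXPECTED_EXACT_PATHS}

print("AllowedExactPaths\\Machine:",
      "compliant" if remotely_accessible_paths_ok() else "non-compliant")
```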

## Security Options - Network Security

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Network security: Allow Local System to use computer identity for NTLM<br /><sub>(CCE-38341-4)</sub> |**Description**: When enabled, this policy setting causes Local System services that use Negotiate to use the computer identity when NTLM authentication is selected by the negotiation. This policy is supported on at least Windows 7 or Windows Server 2008 R2.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\UseMachineId<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Network security: Allow LocalSystem NULL session fallback<br /><sub>(CCE-37035-3)</sub> |**Description**: This policy setting determines whether NTLM is allowed to fall back to a NULL session when used with LocalSystem. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0\AllowNullSessionFallback<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
-|Network Security: Allow PKU2U authentication requests to this computer to use online identities<br /><sub>(CCE-38047-7)</sub> |**Description**: This setting determines if online identities are able to authenticate to this computer. The Public Key Cryptography Based User-to-User (PKU2U) protocol introduced in Windows 7 and Windows Server 2008 R2 is implemented as a security support provider (SSP). The SSP enables peer-to-peer authentication, particularly through the Windows 7 media and file sharing feature called Homegroup, which permits sharing between computers that are not members of a domain. With PKU2U, a new extension was introduced to the Negotiate authentication package, `Spnego.dll`. In previous versions of Windows, Negotiate decided whether to use Kerberos or NTLM for authentication. The extension SSP for Negotiate, `Negoexts.dll`, which is treated as an authentication protocol by Windows, supports Microsoft SSPs including PKU2U. When computers are configured to accept authentication requests by using online IDs, `Negoexts.dll` calls the PKU2U SSP on the computer that is used to log on. The PKU2U SSP obtains a local certificate and exchanges the policy between the peer computers. When validated on the peer computer, the certificate within the metadata is sent to the logon peer for validation and associates the user's certificate to a security token and the logon process completes. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\pku2u\AllowOnlineID<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
-|Network Security: Configure encryption types allowed for Kerberos<br /><sub>(CCE-37755-6)</sub> |**Description**: This policy setting allows you to set the encryption types that Kerberos is allowed to use. This policy is supported on at least Windows 7 or Windows Server 2008 R2.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Kerberos\Parameters\SupportedEncryptionTypes<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 2147483644<br /><sub>(Registry)</sub> |Critical |
-|Network security: Do not store LAN Manager hash value on next password change<br /><sub>(CCE-36326-7)</sub> |**Description**: This policy setting determines whether the LAN Manager (LM) hash value for the new password is stored when the password is changed. The LM hash is relatively weak and prone to attack compared to the cryptographically stronger Microsoft Windows NT hash. Since LM hashes are stored on the local computer in the security database, passwords can then be easily compromised if the database is attacked. **Note:** Older operating systems and some third-party applications may fail when this policy setting is enabled. Also, note that the password will need to be changed on all accounts after you enable this setting to gain the proper benefit. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\NoLMHash<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|Network security: LAN Manager authentication level<br /><sub>(CCE-36173-3)</sub> |**Description**: LAN Manager (LM) is a family of early Microsoft client/server software that allows users to link personal computers together on a single network. Network capabilities include transparent file and print sharing, user security features, and network administration tools. In Active Directory domains, the Kerberos protocol is the default authentication protocol. However, if the Kerberos protocol is not negotiated for some reason, Active Directory will use LM, NTLM, or NTLMv2. LAN Manager authentication includes the LM, NTLM, and NTLM version 2 (NTLMv2) variants, and is the protocol that is used to authenticate all Windows clients when they perform the following operations: - Join a domain - Authenticate between Active Directory forests - Authenticate to down-level domains - Authenticate to computers that do not run Windows 2000, Windows Server 2003, or Windows XP) - Authenticate to computers that are not in the domain The possible values for the Network security: LAN Manager authentication level settings are: - Send LM & NTLM responses - Send LM & NTLM - use NTLMv2 session security if negotiated - Send NTLM responses only - Send NTLMv2 responses only - Send NTLMv2 responses only\refuse LM - Send NTLMv2 responses only\refuse LM & NTLM - Not Defined The Network security: LAN Manager authentication level setting determines which challenge/response authentication protocol is used for network logons. This choice affects the authentication protocol level that clients use, the session security level that the computers negotiate, and the authentication level that servers accept as follows: - Send LM & NTLM responses. Clients use LM and NTLM authentication and never use NTLMv2 session security. Domain controllers accept LM, NTLM, and NTLMv2 authentication. - Send LM & NTLM - use NTLMv2 session security if negotiated. Clients use LM and NTLM authentication and use NTLMv2 session security if the server supports it. Domain controllers accept LM, NTLM, and NTLMv2 authentication. - Send NTLM response only. Clients use NTLM authentication only and use NTLMv2 session security if the server supports it. Domain controllers accept LM, NTLM, and NTLMv2 authentication. - Send NTLMv2 response only. Clients use NTLMv2 authentication only and use NTLMv2 session security if the server supports it. Domain controllers accept LM, NTLM, and NTLMv2 authentication. - Send NTLMv2 response only\refuse LM. Clients use NTLMv2 authentication only and use NTLMv2 session security if the server supports it. Domain controllers refuse LM (accept only NTLM and NTLMv2 authentication). - Send NTLMv2 response only\refuse LM & NTLM. Clients use NTLMv2 authentication only and use NTLMv2 session security if the server supports it. Domain controllers refuse LM and NTLM (accept only NTLMv2 authentication). These settings correspond to the levels discussed in other Microsoft documents as follows: - Level 0 - Send LM and NTLM response; never use NTLMv2 session security. Clients use LM and NTLM authentication, and never use NTLMv2 session security. Domain controllers accept LM, NTLM, and NTLMv2 authentication. - Level 1 - Use NTLMv2 session security if negotiated. Clients use LM and NTLM authentication, and use NTLMv2 session security if the server supports it. Domain controllers accept LM, NTLM, and NTLMv2 authentication. - Level 2 - Send NTLM response only. Clients use only NTLM authentication, and use NTLMv2 session security if the server supports it. Domain controllers accept LM, NTLM, and NTLMv2 authentication. - Level 3 - Send NTLMv2 response only. Clients use NTLMv2 authentication, and use NTLMv2 session security if the server supports it. Domain controllers accept LM, NTLM, and NTLMv2 authentication. - Level 4 - Domain controllers refuse LM responses. Clients use NTLM authentication, and use NTLMv2 session security if the server supports it. Domain controllers refuse LM authentication, that is, they accept NTLM and NTLMv2. - Level 5 - Domain controllers refuse LM and NTLM responses (accept only NTLMv2). Clients use NTLMv2 authentication, use and NTLMv2 session security if the server supports it. Domain controllers refuse NTLM and LM authentication (they accept only NTLMv2).<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\LmCompatibilityLevel<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 5<br /><sub>(Registry)</sub> |Critical |
-|Network security: LDAP client signing requirements<br /><sub>(CCE-36858-9)</sub> |**Description**: This policy setting determines the level of data signing that is requested on behalf of clients that issue LDAP BIND requests. **Note:** This policy setting does not have any impact on LDAP simple bind (`ldap_simple_bind`) or LDAP simple bind through SSL (`ldap_simple_bind_s`). No Microsoft LDAP clients that are included with Windows XP Professional use ldap_simple_bind or ldap_simple_bind_s to communicate with a domain controller. The recommended state for this setting is: `Negotiate signing`. Configuring this setting to `Require signing` also conforms with the benchmark.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LDAP\LDAPClientIntegrity<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|Network security: Minimum session security for NTLM SSP based (including secure RPC) clients<br /><sub>(CCE-37553-5)</sub> |**Description**: This policy setting determines which behaviors are allowed by clients for applications using the NTLM Security Support Provider (SSP). The SSP Interface (SSPI) is used by applications that need authentication services. The setting does not modify how the authentication sequence works but instead require certain behaviors in applications that use the SSPI. The recommended state for this setting is: `Require NTLMv2 session security, Require 128-bit encryption`. **Note:** These values are dependent on the _Network security: LAN Manager Authentication Level_ security setting value.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0\NTLMMinClientSec<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 537395200<br /><sub>(Registry)</sub> |Critical |
-|Network security: Minimum session security for NTLM SSP based (including secure RPC) servers<br /><sub>(CCE-37835-6)</sub> |**Description**: This policy setting determines which behaviors are allowed by servers for applications using the NTLM Security Support Provider (SSP). The SSP Interface (SSPI) is used by applications that need authentication services. The setting does not modify how the authentication sequence works but instead require certain behaviors in applications that use the SSPI. The recommended state for this setting is: `Require NTLMv2 session security, Require 128-bit encryption`. **Note:** These values are dependent on the _Network security: LAN Manager Authentication Level_ security setting value.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0\NTLMMinServerSec<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 537395200<br /><sub>(Registry)</sub> |Critical |
+|Network security: Allow Local System to use computer identity for NTLM<br /><sub>(CCE-38341-4)</sub> |**Description**: When enabled, this policy setting causes Local System services that use Negotiate to use the computer identity when NTLM authentication is selected by the negotiation. This policy is supported on at least Windows 7 or Windows Server 2008 R2.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\UseMachineId<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Network security: Allow LocalSystem NULL session fallback<br /><sub>(CCE-37035-3)</sub> |**Description**: This policy setting determines whether NTLM is allowed to fall back to a NULL session when used with LocalSystem. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0\AllowNullSessionFallback<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Network Security: Allow PKU2U authentication requests to this computer to use online identities<br /><sub>(CCE-38047-7)</sub> |**Description**: This setting determines if online identities are able to authenticate to this computer. The Public Key Cryptography Based User-to-User (PKU2U) protocol introduced in Windows 7 and Windows Server 2008 R2 is implemented as a security support provider (SSP). The SSP enables peer-to-peer authentication, particularly through the Windows 7 media and file sharing feature called Homegroup, which permits sharing between computers that are not members of a domain. With PKU2U, a new extension was introduced to the Negotiate authentication package, `Spnego.dll`. In previous versions of Windows, Negotiate decided whether to use Kerberos or NTLM for authentication. The extension SSP for Negotiate, `Negoexts.dll`, which is treated as an authentication protocol by Windows, supports Microsoft SSPs including PKU2U. When computers are configured to accept authentication requests by using online IDs, `Negoexts.dll` calls the PKU2U SSP on the computer that is used to log on. The PKU2U SSP obtains a local certificate and exchanges the policy between the peer computers. When validated on the peer computer, the certificate within the metadata is sent to the logon peer for validation and associates the user's certificate to a security token and the logon process completes. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\pku2u\AllowOnlineID<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Network Security: Configure encryption types allowed for Kerberos<br /><sub>(CCE-37755-6)</sub> |**Description**: This policy setting allows you to set the encryption types that Kerberos is allowed to use. This policy is supported on at least Windows 7 or Windows Server 2008 R2.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Kerberos\Parameters\SupportedEncryptionTypes<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 2147483644<br /><sub>(Registry)</sub> |Critical |
+|Network security: Do not store LAN Manager hash value on next password change<br /><sub>(CCE-36326-7)</sub> |**Description**: This policy setting determines whether the LAN Manager (LM) hash value for the new password is stored when the password is changed. The LM hash is relatively weak and prone to attack compared to the cryptographically stronger Microsoft Windows NT hash. Since LM hashes are stored on the local computer in the security database, passwords can then be easily compromised if the database is attacked. **Note:** Older operating systems and some third-party applications may fail when this policy setting is enabled. Also, note that the password will need to be changed on all accounts after you enable this setting to gain the proper benefit. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\NoLMHash<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Network security: LAN Manager authentication level<br /><sub>(CCE-36173-3)</sub> |**Description**: LAN Manager (LM) is a family of early Microsoft client/server software that allows users to link personal computers together on a single network. Network capabilities include transparent file and print sharing, user security features, and network administration tools. In Active Directory domains, the Kerberos protocol is the default authentication protocol. However, if the Kerberos protocol is not negotiated for some reason, Active Directory will use LM, NTLM, or NTLMv2. LAN Manager authentication includes the LM, NTLM, and NTLM version 2 (NTLMv2) variants, and is the protocol that is used to authenticate all Windows clients when they perform the following operations: - Join a domain - Authenticate between Active Directory forests - Authenticate to down-level domains - Authenticate to computers that do not run Windows 2000, Windows Server 2003, or Windows XP - Authenticate to computers that are not in the domain. The possible values for the Network security: LAN Manager authentication level settings are: - Send LM & NTLM responses - Send LM & NTLM - use NTLMv2 session security if negotiated - Send NTLM responses only - Send NTLMv2 responses only - Send NTLMv2 responses only\refuse LM - Send NTLMv2 responses only\refuse LM & NTLM - Not Defined. The Network security: LAN Manager authentication level setting determines which challenge/response authentication protocol is used for network logons. This choice affects the authentication protocol level that clients use, the session security level that the computers negotiate, and the authentication level that servers accept as follows: - Send LM & NTLM responses. Clients use LM and NTLM authentication and never use NTLMv2 session security. Domain controllers accept LM, NTLM, and NTLMv2 authentication. - Send LM & NTLM - use NTLMv2 session security if negotiated. Clients use LM and NTLM authentication and use NTLMv2 session security if the server supports it. Domain controllers accept LM, NTLM, and NTLMv2 authentication. - Send NTLM response only. Clients use NTLM authentication only and use NTLMv2 session security if the server supports it. Domain controllers accept LM, NTLM, and NTLMv2 authentication. - Send NTLMv2 response only. Clients use NTLMv2 authentication only and use NTLMv2 session security if the server supports it. Domain controllers accept LM, NTLM, and NTLMv2 authentication. - Send NTLMv2 response only\refuse LM. Clients use NTLMv2 authentication only and use NTLMv2 session security if the server supports it. Domain controllers refuse LM (accept only NTLM and NTLMv2 authentication). - Send NTLMv2 response only\refuse LM & NTLM. Clients use NTLMv2 authentication only and use NTLMv2 session security if the server supports it. Domain controllers refuse LM and NTLM (accept only NTLMv2 authentication). These settings correspond to the levels discussed in other Microsoft documents as follows: - Level 0 - Send LM and NTLM response; never use NTLMv2 session security. Clients use LM and NTLM authentication, and never use NTLMv2 session security. Domain controllers accept LM, NTLM, and NTLMv2 authentication. - Level 1 - Use NTLMv2 session security if negotiated. Clients use LM and NTLM authentication, and use NTLMv2 session security if the server supports it. Domain controllers accept LM, NTLM, and NTLMv2 authentication. - Level 2 - Send NTLM response only. Clients use only NTLM authentication, and use NTLMv2 session security if the server supports it. Domain controllers accept LM, NTLM, and NTLMv2 authentication. - Level 3 - Send NTLMv2 response only. Clients use NTLMv2 authentication, and use NTLMv2 session security if the server supports it. Domain controllers accept LM, NTLM, and NTLMv2 authentication. - Level 4 - Domain controllers refuse LM responses. Clients use NTLM authentication, and use NTLMv2 session security if the server supports it. Domain controllers refuse LM authentication, that is, they accept NTLM and NTLMv2. - Level 5 - Domain controllers refuse LM and NTLM responses (accept only NTLMv2). Clients use NTLMv2 authentication, and use NTLMv2 session security if the server supports it. Domain controllers refuse NTLM and LM authentication (they accept only NTLMv2).<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\LmCompatibilityLevel<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 5<br /><sub>(Registry)</sub> |Critical |
+|Network security: LDAP client signing requirements<br /><sub>(CCE-36858-9)</sub> |**Description**: This policy setting determines the level of data signing that is requested on behalf of clients that issue LDAP BIND requests. **Note:** This policy setting does not have any impact on LDAP simple bind (`ldap_simple_bind`) or LDAP simple bind through SSL (`ldap_simple_bind_s`). No Microsoft LDAP clients that are included with Windows XP Professional use ldap_simple_bind or ldap_simple_bind_s to communicate with a domain controller. The recommended state for this setting is: `Negotiate signing`. Configuring this setting to `Require signing` also conforms with the benchmark.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LDAP\LDAPClientIntegrity<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Network security: Minimum session security for NTLM SSP based (including secure RPC) clients<br /><sub>(CCE-37553-5)</sub> |**Description**: This policy setting determines which behaviors are allowed by clients for applications using the NTLM Security Support Provider (SSP). The SSP Interface (SSPI) is used by applications that need authentication services. The setting does not modify how the authentication sequence works but instead requires certain behaviors in applications that use the SSPI. The recommended state for this setting is: `Require NTLMv2 session security, Require 128-bit encryption`. **Note:** These values are dependent on the _Network security: LAN Manager Authentication Level_ security setting value.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0\NTLMMinClientSec<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 537395200<br /><sub>(Registry)</sub> |Critical |
+|Network security: Minimum session security for NTLM SSP based (including secure RPC) servers<br /><sub>(CCE-37835-6)</sub> |**Description**: This policy setting determines which behaviors are allowed by servers for applications using the NTLM Security Support Provider (SSP). The SSP Interface (SSPI) is used by applications that need authentication services. The setting does not modify how the authentication sequence works but instead requires certain behaviors in applications that use the SSPI. The recommended state for this setting is: `Require NTLMv2 session security, Require 128-bit encryption`. **Note:** These values are dependent on the _Network security: LAN Manager Authentication Level_ security setting value.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0\NTLMMinServerSec<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 537395200<br /><sub>(Registry)</sub> |Critical |
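
In the two *Minimum session security* rows above, the expected value `537395200` is the decimal form of `0x20080000`, the combination of the *Require NTLMv2 session security* flag (`0x00080000`) and the *Require 128-bit encryption* flag (`0x20000000`). The short sketch below is an illustration only, again assuming a local check with Python's `winreg` module; it verifies those flag values together with `LmCompatibilityLevel = 5` (Send NTLMv2 responses only, refuse LM & NTLM).

```python
# Illustrative sketch only. 537395200 == 0x20080000, i.e. the two flags the
# table expects: require NTLMv2 session security and require 128-bit encryption.
import winreg

REQUIRE_NTLMV2 = 0x00080000
REQUIRE_128BIT = 0x20000000
EXPECTED_MINSEC = REQUIRE_NTLMV2 | REQUIRE_128BIT  # 537395200

def read_value(path, name):
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            return winreg.QueryValueEx(key, name)[0]
    except FileNotFoundError:
        return None

LSA = r"SYSTEM\CurrentControlSet\Control\Lsa"
checks = {
    # LAN Manager authentication level: Send NTLMv2 responses only, refuse LM & NTLM
    "LmCompatibilityLevel": read_value(LSA, "LmCompatibilityLevel") == 5,
    # Minimum session security for NTLM SSP based clients and servers
    "NTLMMinClientSec": read_value(LSA + r"\MSV1_0", "NTLMMinClientSec") == EXPECTED_MINSEC,
    "NTLMMinServerSec": read_value(LSA + r"\MSV1_0", "NTLMMinServerSec") == EXPECTED_MINSEC,
}
for name, ok in checks.items():
    print(f"{name}: {'compliant' if ok else 'non-compliant'}")
```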

## Security Options - Recovery console

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Recovery console: Allow floppy copy and access to all drives and all folders<br /><sub>(AZ-WIN-00180)</sub> |**Description**: This policy setting makes the Recovery Console SET command available, which allows you to set the following recovery console environment variables: • AllowWildCards. Enables wildcard support for some commands (such as the DEL command). • AllowAllPaths. Allows access to all files and folders on the computer. • AllowRemovableMedia. Allows files to be copied to removable media, such as a floppy disk. • NoCopyPrompt. Does not prompt when overwriting an existing file.<br />**Key Path**: Software\Microsoft\Windows NT\CurrentVersion\Setup\RecoveryConsole\setcommand<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Recovery console: Allow floppy copy and access to all drives and all folders<br /><sub>(AZ-WIN-00180)</sub> |**Description**: This policy setting makes the Recovery Console SET command available, which allows you to set the following recovery console environment variables: • AllowWildCards. Enables wildcard support for some commands (such as the DEL command). • AllowAllPaths. Allows access to all files and folders on the computer. • AllowRemovableMedia. Allows files to be copied to removable media, such as a floppy disk. • NoCopyPrompt. Does not prompt when overwriting an existing file.<br />**Key Path**: Software\Microsoft\Windows NT\CurrentVersion\Setup\RecoveryConsole\setcommand<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |

## Security Options - Shutdown

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Shutdown: Allow system to be shut down without having to log on<br /><sub>(CCE-36788-8)</sub> |**Description**: This policy setting determines whether a computer can be shut down when a user is not logged on. If this policy setting is enabled, the shutdown command is available on the Windows logon screen. It is recommended to disable this policy setting to restrict the ability to shut down the computer to users with credentials on the system. The recommended state for this setting is: `Disabled`. **Note:** In Server 2008 R2 and older versions, this setting had no impact on Remote Desktop (RDP) / Terminal Services sessions - it only affected the local console. However, Microsoft changed the behavior in Windows Server 2012 (non-R2) and above, where if set to Enabled, RDP sessions are also allowed to shut down or restart the server.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\ShutdownWithoutLogon<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
-|Shutdown: Clear virtual memory pagefile<br /><sub>(AZ-WIN-00181)</sub> |**Description**: This policy setting determines whether the virtual memory pagefile is cleared when the system is shut down. When this policy setting is enabled, the system pagefile is cleared each time that the system shuts down properly. If you enable this security setting, the hibernation file (Hiberfil.sys) is zeroed out when hibernation is disabled on a portable computer system. It will take longer to shut down and restart the computer, and will be especially noticeable on computers with large paging files.<br />**Key Path**: System\CurrentControlSet\Control\Session Manager\Memory Management\ClearPageFileAtShutdown<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Shutdown: Allow system to be shut down without having to log on<br /><sub>(CCE-36788-8)</sub> |**Description**: This policy setting determines whether a computer can be shut down when a user is not logged on. If this policy setting is enabled, the shutdown command is available on the Windows logon screen. It is recommended to disable this policy setting to restrict the ability to shut down the computer to users with credentials on the system. The recommended state for this setting is: `Disabled`. **Note:** In Server 2008 R2 and older versions, this setting had no impact on Remote Desktop (RDP) / Terminal Services sessions - it only affected the local console. However, Microsoft changed the behavior in Windows Server 2012 (non-R2) and above, where if set to Enabled, RDP sessions are also allowed to shut down or restart the server.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\ShutdownWithoutLogon<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Shutdown: Clear virtual memory pagefile<br /><sub>(AZ-WIN-00181)</sub> |**Description**: This policy setting determines whether the virtual memory pagefile is cleared when the system is shut down. When this policy setting is enabled, the system pagefile is cleared each time that the system shuts down properly. If you enable this security setting, the hibernation file (Hiberfil.sys) is zeroed out when hibernation is disabled on a portable computer system. Shutting down and restarting the computer will take longer, which will be especially noticeable on computers with large paging files.<br />**Key Path**: System\CurrentControlSet\Control\Session Manager\Memory Management\ClearPageFileAtShutdown<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |

## Security Options - System objects

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|System objects: Require case insensitivity for non-Windows subsystems<br /><sub>(CCE-37885-1)</sub> |**Description**: This policy setting determines whether case insensitivity is enforced for all subsystems. The Microsoft Win32 subsystem is case insensitive. However, the kernel supports case sensitivity for other subsystems, such as the Portable Operating System Interface for UNIX (POSIX). Because Windows is case insensitive (but the POSIX subsystem will support case sensitivity), failure to enforce this policy setting makes it possible for a user of the POSIX subsystem to create a file with the same name as another file by using mixed case to label it. Such a situation can block access to these files by another user who uses typical Win32 tools, because only one of the files will be available. The recommended state for this setting is: `Enabled`.<br />**Key Path**: System\CurrentControlSet\Control\Session Manager\Kernel\ObCaseInsensitive<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
-|System objects: Strengthen default permissions of internal system objects (e.g. Symbolic Links)<br /><sub>(CCE-37644-2)</sub> |**Description**: This policy setting determines the strength of the default discretionary access control list (DACL) for objects. Active Directory maintains a global list of shared system resources, such as DOS device names, mutexes, and semaphores. In this way, objects can be located and shared among processes. Each type of object is created with a default DACL that specifies who can access the objects and what permissions are granted. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Session Manager\ProtectionMode<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|System objects: Require case insensitivity for non-Windows subsystems<br /><sub>(CCE-37885-1)</sub> |**Description**: This policy setting determines whether case insensitivity is enforced for all subsystems. The Microsoft Win32 subsystem is case insensitive. However, the kernel supports case sensitivity for other subsystems, such as the Portable Operating System Interface for UNIX (POSIX). Because Windows is case insensitive (but the POSIX subsystem will support case sensitivity), failure to enforce this policy setting makes it possible for a user of the POSIX subsystem to create a file with the same name as another file by using mixed case to label it. Such a situation can block access to these files by another user who uses typical Win32 tools, because only one of the files will be available. The recommended state for this setting is: `Enabled`.<br />**Key Path**: System\CurrentControlSet\Control\Session Manager\Kernel\ObCaseInsensitive<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
+|System objects: Strengthen default permissions of internal system objects (e.g. Symbolic Links)<br /><sub>(CCE-37644-2)</sub> |**Description**: This policy setting determines the strength of the default discretionary access control list (DACL) for objects. Active Directory maintains a global list of shared system resources, such as DOS device names, mutexes, and semaphores. In this way, objects can be located and shared among processes. Each type of object is created with a default DACL that specifies who can access the objects and what permissions are granted. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Session Manager\ProtectionMode<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |

## Security Options - System settings

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|System settings: Use Certificate Rules on Windows Executables for Software Restriction Policies<br /><sub>(AZ-WIN-00155)</sub> |**Description**: This policy setting determines whether digital certificates are processed when software restriction policies are enabled and a user or process attempts to run software with an .exe file name extension. It enables or disables certificate rules (a type of software restriction policies rule). With software restriction policies, you can create a certificate rule that will allow or disallow the execution of Authenticode ®-signed software, based on the digital certificate that is associated with the software. For certificate rules to take effect in software restriction policies, you must enable this policy setting.<br />**Key Path**: Software\Policies\Microsoft\Windows\Safer\CodeIdentifiers\AuthenticodeEnabled<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|System settings: Use Certificate Rules on Windows Executables for Software Restriction Policies<br /><sub>(AZ-WIN-00155)</sub> |**Description**: This policy setting determines whether digital certificates are processed when software restriction policies are enabled and a user or process attempts to run software with an .exe file name extension. It enables or disables certificate rules (a type of software restriction policies rule). With software restriction policies, you can create a certificate rule that will allow or disallow the execution of Authenticode ®-signed software, based on the digital certificate that is associated with the software. For certificate rules to take effect in software restriction policies, you must enable this policy setting.<br />**Key Path**: Software\Policies\Microsoft\Windows\Safer\CodeIdentifiers\AuthenticodeEnabled<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
## Security Options - User Account Control

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|User Account Control: Admin Approval Mode for the Built-in Administrator account<br /><sub>(CCE-36494-3)</sub> |**Description**: This policy setting controls the behavior of Admin Approval Mode for the built-in Administrator account. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\FilterAdministratorToken<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|User Account Control: Allow UIAccess applications to prompt for elevation without using the secure desktop<br /><sub>(CCE-36863-9)</sub> |**Description**: This policy setting controls whether User Interface Accessibility (UIAccess or UIA) programs can automatically disable the secure desktop for elevation prompts used by a standard user. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableUIADesktopToggle<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
-|User Account Control: Behavior of the elevation prompt for administrators in Admin Approval Mode<br /><sub>(CCE-37029-6)</sub> |**Description**: This policy setting controls the behavior of the elevation prompt for administrators. The recommended state for this setting is: `Prompt for consent on the secure desktop`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\ConsentPromptBehaviorAdmin<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 2<br /><sub>(Registry)</sub> |Critical |
-|User Account Control: Behavior of the elevation prompt for standard users<br /><sub>(CCE-36864-7)</sub> |**Description**: This policy setting controls the behavior of the elevation prompt for standard users. The recommended state for this setting is: `Automatically deny elevation requests`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\ConsentPromptBehaviorUser<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
-|User Account Control: Detect application installations and prompt for elevation<br /><sub>(CCE-36533-8)</sub> |**Description**: This policy setting controls the behavior of application installation detection for the computer. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableInstallerDetection<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|User Account Control: Only elevate UIAccess applications that are installed in secure locations<br /><sub>(CCE-37057-7)</sub> |**Description**: This policy setting controls whether applications that request to run with a User Interface Accessibility (UIAccess) integrity level must reside in a secure location in the file system. Secure locations are limited to the following: - `…\Program Files\`, including subfolders - `…\Windows\system32\` - `…\Program Files (x86)\`, including subfolders for 64-bit versions of Windows **Note:** Windows enforces a public key infrastructure (PKI) signature check on any interactive application that requests to run with a UIAccess integrity level regardless of the state of this security setting. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableSecureUIAPaths<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|User Account Control: Run all administrators in Admin Approval Mode<br /><sub>(CCE-36869-6)</sub> |**Description**: This policy setting controls the behavior of all User Account Control (UAC) policy settings for the computer. If you change this policy setting, you must restart your computer. The recommended state for this setting is: `Enabled`. **Note:** If this policy setting is disabled, the Security Center notifies you that the overall security of the operating system has been reduced.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableLUA<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|User Account Control: Switch to the secure desktop when prompting for elevation<br /><sub>(CCE-36866-2)</sub> |**Description**: This policy setting controls whether the elevation request prompt is displayed on the interactive user's desktop or the secure desktop. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\PromptOnSecureDesktop<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|User Account Control: Virtualize file and registry write failures to per-user locations<br /><sub>(CCE-37064-3)</sub> |**Description**: This policy setting controls whether application write failures are redirected to defined registry and file system locations. This policy setting mitigates applications that run as administrator and write run-time application data to: - `%ProgramFiles%`, - `%Windir%`, - `%Windir%\system32`, or - `HKEY_LOCAL_MACHINE\Software`. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableVirtualization<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|User Account Control: Admin Approval Mode for the Built-in Administrator account<br /><sub>(CCE-36494-3)</sub> |**Description**: This policy setting controls the behavior of Admin Approval Mode for the built-in Administrator account. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\FilterAdministratorToken<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|User Account Control: Allow UIAccess applications to prompt for elevation without using the secure desktop<br /><sub>(CCE-36863-9)</sub> |**Description**: This policy setting controls whether User Interface Accessibility (UIAccess or UIA) programs can automatically disable the secure desktop for elevation prompts used by a standard user. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableUIADesktopToggle<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|User Account Control: Behavior of the elevation prompt for administrators in Admin Approval Mode<br /><sub>(CCE-37029-6)</sub> |**Description**: This policy setting controls the behavior of the elevation prompt for administrators. The recommended state for this setting is: `Prompt for consent on the secure desktop`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\ConsentPromptBehaviorAdmin<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 2<br /><sub>(Registry)</sub> |Critical |
+|User Account Control: Behavior of the elevation prompt for standard users<br /><sub>(CCE-36864-7)</sub> |**Description**: This policy setting controls the behavior of the elevation prompt for standard users. The recommended state for this setting is: `Automatically deny elevation requests`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\ConsentPromptBehaviorUser<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|User Account Control: Detect application installations and prompt for elevation<br /><sub>(CCE-36533-8)</sub> |**Description**: This policy setting controls the behavior of application installation detection for the computer. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableInstallerDetection<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|User Account Control: Only elevate UIAccess applications that are installed in secure locations<br /><sub>(CCE-37057-7)</sub> |**Description**: This policy setting controls whether applications that request to run with a User Interface Accessibility (UIAccess) integrity level must reside in a secure location in the file system. Secure locations are limited to the following: - `…\Program Files\`, including subfolders - `…\Windows\system32\` - `…\Program Files (x86)\`, including subfolders for 64-bit versions of Windows **Note:** Windows enforces a public key infrastructure (PKI) signature check on any interactive application that requests to run with a UIAccess integrity level regardless of the state of this security setting. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableSecureUIAPaths<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|User Account Control: Run all administrators in Admin Approval Mode<br /><sub>(CCE-36869-6)</sub> |**Description**: This policy setting controls the behavior of all User Account Control (UAC) policy settings for the computer. If you change this policy setting, you must restart your computer. The recommended state for this setting is: `Enabled`. **Note:** If this policy setting is disabled, the Security Center notifies you that the overall security of the operating system has been reduced.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableLUA<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|User Account Control: Switch to the secure desktop when prompting for elevation<br /><sub>(CCE-36866-2)</sub> |**Description**: This policy setting controls whether the elevation request prompt is displayed on the interactive user's desktop or the secure desktop. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\PromptOnSecureDesktop<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|User Account Control: Virtualize file and registry write failures to per-user locations<br /><sub>(CCE-37064-3)</sub> |**Description**: This policy setting controls whether application write failures are redirected to defined registry and file system locations. This policy setting mitigates applications that run as administrator and write run-time application data to: - `%ProgramFiles%`, - `%Windir%`, - `%Windir%\system32`, or - `HKEY_LOCAL_MACHINE\Software`. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableVirtualization<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
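All of the User Account Control rows above live under the same registry key, so they can be checked together. This is a small sketch, again using the standard `winreg` module, that compares each value against the expected-value column; it is an illustration of the table, not an official compliance tool, and the value names are taken directly from the Key Path column.

```python
import winreg

UAC_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"

# Allowed values per the table above; None stands for "Doesn't exist".
EXPECTED = {
    "FilterAdministratorToken": {1},
    "EnableUIADesktopToggle": {None, 0},
    "ConsentPromptBehaviorAdmin": {2},
    "ConsentPromptBehaviorUser": {0},
    "EnableInstallerDetection": {1},
    "EnableSecureUIAPaths": {None, 1},
    "EnableLUA": {None, 1},
    "PromptOnSecureDesktop": {None, 1},
    "EnableVirtualization": {None, 1},
}

def read_value(name):
    """Return the stored value, or None if the value is not present."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UAC_KEY) as key:
            return winreg.QueryValueEx(key, name)[0]
    except FileNotFoundError:
        return None

for name, allowed in EXPECTED.items():
    value = read_value(name)
    status = "compliant" if value in allowed else "non-compliant"
    print(f"{name} = {value!r}: {status}")
```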
## Security Settings - Account Policies

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Enforce password history<br /><sub>(CCE-37166-6)</sub> |**Description**: <p><span>This policy setting determines the number of renewed, unique passwords that have to be associated with a user account before you can reuse an old password. The value for this policy setting must be between 0 and 24 passwords. The default value for Windows Vista is 0 passwords, but the default setting in a domain is 24 passwords. To maintain the effectiveness of this policy setting, use the Minimum password age setting to prevent users from repeatedly changing their password. The recommended state for this setting is: '24 or more password(s)'.</span></p><br />**Key Path**: [System Access]PasswordHistorySize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 24<br /><sub>(Policy)</sub> |Critical |
-|Maximum password age<br /><sub>(CCE-37167-4)</sub> |**Description**: This policy setting defines how long a user can use their password before it expires. Values for this policy setting range from 0 to 999 days. If you set the value to 0, the password will never expire. Because attackers can crack passwords, the more frequently you change the password the less opportunity an attacker has to use a cracked password. However, the lower this value is set, the higher the potential for an increase in calls to help desk support due to users having to change their password or forgetting which password is current. The recommended state for this setting is `60 or fewer days, but not 0`.<br />**Key Path**: [System Access]MaximumPasswordAge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-70<br /><sub>(Policy)</sub> |Critical |
-|Minimum password age<br /><sub>(CCE-37073-4)</sub> |**Description**: This policy setting determines the number of days that you must use a password before you can change it. The range of values for this policy setting is between 1 and 999 days. (You may also set the value to 0 to allow immediate password changes.) The default value for this setting is 0 days. The recommended state for this setting is: `1 or more day(s)`.<br />**Key Path**: [System Access]MinimumPasswordAge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 1<br /><sub>(Policy)</sub> |Critical |
-|Minimum password length<br /><sub>(CCE-36534-6)</sub> |**Description**: This policy setting determines the least number of characters that make up a password for a user account. There are many different theories about how to determine the best password length for an organization, but perhaps "pass phrase" is a better term than "password." In Microsoft Windows 2000 or later, pass phrases can be quite long and can include spaces. Therefore, a phrase such as "I want to drink a $5 milkshake" is a valid pass phrase; it is a considerably stronger password than an 8 or 10 character string of random numbers and letters, and yet is easier to remember. Users must be educated about the proper selection and maintenance of passwords, especially with regard to password length. In enterprise environments, the ideal value for the Minimum password length setting is 14 characters, however you should adjust this value to meet your organization's business requirements. The recommended state for this setting is: `14 or more character(s)`.<br />**Key Path**: [System Access]MinimumPasswordLength<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 14<br /><sub>(Policy)</sub> |Critical |
-|Password must meet complexity requirements<br /><sub>(CCE-37063-5)</sub> |**Description**: This policy setting checks all new passwords to ensure that they meet basic requirements for strong passwords. When this policy is enabled, passwords must meet the following minimum requirements: - Does not contain the user's account name or parts of the user's full name that exceed two consecutive characters - Be at least six characters in length - Contain characters from three of the following four categories: - English uppercase characters (A through Z) - English lowercase characters (a through z) - Base 10 digits (0 through 9) - Non-alphabetic characters (for example, !, $, #, %) - A catch-all category of any Unicode character that does not fall under the previous four categories. This fifth category can be regionally specific. Each additional character in a password increases its complexity exponentially. For instance, a seven-character, all lower-case alphabetic password would have 267 (approximately 8 x 109 or 8 billion) possible combinations. At 1,000,000 attempts per second (a capability of many password-cracking utilities), it would only take 133 minutes to crack. A seven-character alphabetic password with case sensitivity has 527 combinations. A seven-character case-sensitive alphanumeric password without punctuation has 627 combinations. An eight-character password has 268 (or 2 x 1011) possible combinations. Although this might seem to be a large number, at 1,000,000 attempts per second it would take only 59 hours to try all possible passwords. Remember, these times will significantly increase for passwords that use ALT characters and other special keyboard characters such as "!" or "@". Proper use of the password settings can help make it difficult to mount a brute force attack. The recommended state for this setting is: `Enabled`.<br />**Key Path**: [System Access]PasswordComplexity<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= true<br /><sub>(Policy)</sub> |Critical |
-|Store passwords using reversible encryption<br /><sub>(CCE-36286-3)</sub> |**Description**: This policy setting determines whether the operating system stores passwords in a way that uses reversible encryption, which provides support for application protocols that require knowledge of the user's password for authentication purposes. Passwords that are stored with reversible encryption are essentially the same as plaintext versions of the passwords. The recommended state for this setting is: `Disabled`.<br />**Key Path**: [System Access]ClearTextPassword<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Policy)</sub> |Critical |
+|Enforce password history<br /><sub>(CCE-37166-6)</sub> |**Description**: <p><span>This policy setting determines the number of renewed, unique passwords that have to be associated with a user account before you can reuse an old password. The value for this policy setting must be between 0 and 24 passwords. The default value for Windows Vista is 0 passwords, but the default setting in a domain is 24 passwords. To maintain the effectiveness of this policy setting, use the Minimum password age setting to prevent users from repeatedly changing their password. The recommended state for this setting is: '24 or more password(s)'.</span></p><br />**Key Path**: [System Access]PasswordHistorySize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 24<br /><sub>(Policy)</sub> |Critical |
+|Maximum password age<br /><sub>(CCE-37167-4)</sub> |**Description**: This policy setting defines how long a user can use their password before it expires. Values for this policy setting range from 0 to 999 days. If you set the value to 0, the password will never expire. Because attackers can crack passwords, the more frequently you change the password the less opportunity an attacker has to use a cracked password. However, the lower this value is set, the higher the potential for an increase in calls to help desk support due to users having to change their password or forgetting which password is current. The recommended state for this setting is `60 or fewer days, but not 0`.<br />**Key Path**: [System Access]MaximumPasswordAge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-70<br /><sub>(Policy)</sub> |Critical |
+|Minimum password age<br /><sub>(CCE-37073-4)</sub> |**Description**: This policy setting determines the number of days that you must use a password before you can change it. The range of values for this policy setting is between 1 and 999 days. (You may also set the value to 0 to allow immediate password changes.) The default value for this setting is 0 days. The recommended state for this setting is: `1 or more day(s)`.<br />**Key Path**: [System Access]MinimumPasswordAge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 1<br /><sub>(Policy)</sub> |Critical |
+|Minimum password length<br /><sub>(CCE-36534-6)</sub> |**Description**: This policy setting determines the least number of characters that make up a password for a user account. There are many different theories about how to determine the best password length for an organization, but perhaps "pass phrase" is a better term than "password." In Microsoft Windows 2000 or later, pass phrases can be quite long and can include spaces. Therefore, a phrase such as "I want to drink a $5 milkshake" is a valid pass phrase; it is a considerably stronger password than an 8 or 10 character string of random numbers and letters, and yet is easier to remember. Users must be educated about the proper selection and maintenance of passwords, especially with regard to password length. In enterprise environments, the ideal value for the Minimum password length setting is 14 characters, however you should adjust this value to meet your organization's business requirements. The recommended state for this setting is: `14 or more character(s)`.<br />**Key Path**: [System Access]MinimumPasswordLength<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 14<br /><sub>(Policy)</sub> |Critical |
+|Password must meet complexity requirements<br /><sub>(CCE-37063-5)</sub> |**Description**: This policy setting checks all new passwords to ensure that they meet basic requirements for strong passwords. When this policy is enabled, passwords must meet the following minimum requirements: - Does not contain the user's account name or parts of the user's full name that exceed two consecutive characters - Be at least six characters in length - Contain characters from three of the following four categories: - English uppercase characters (A through Z) - English lowercase characters (a through z) - Base 10 digits (0 through 9) - Non-alphabetic characters (for example, !, $, #, %) - A catch-all category of any Unicode character that does not fall under the previous four categories. This fifth category can be regionally specific. Each additional character in a password increases its complexity exponentially. For instance, a seven-character, all lower-case alphabetic password would have 26<sup>7</sup> (approximately 8 x 10<sup>9</sup> or 8 billion) possible combinations. At 1,000,000 attempts per second (a capability of many password-cracking utilities), it would only take 133 minutes to crack. A seven-character alphabetic password with case sensitivity has 52<sup>7</sup> combinations. A seven-character case-sensitive alphanumeric password without punctuation has 62<sup>7</sup> combinations. An eight-character password has 26<sup>8</sup> (or 2 x 10<sup>11</sup>) possible combinations. Although this might seem to be a large number, at 1,000,000 attempts per second it would take only 59 hours to try all possible passwords. Remember, these times will significantly increase for passwords that use ALT characters and other special keyboard characters such as "!" or "@". Proper use of the password settings can help make it difficult to mount a brute force attack. The recommended state for this setting is: `Enabled`.<br />**Key Path**: [System Access]PasswordComplexity<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= true<br /><sub>(Policy)</sub> |Critical |
+|Store passwords using reversible encryption<br /><sub>(CCE-36286-3)</sub> |**Description**: This policy setting determines whether the operating system stores passwords in a way that uses reversible encryption, which provides support for application protocols that require knowledge of the user's password for authentication purposes. Passwords that are stored with reversible encryption are essentially the same as plaintext versions of the passwords. The recommended state for this setting is: `Disabled`.<br />**Key Path**: [System Access]ClearTextPassword<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Policy)</sub> |Critical |
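Unlike the registry-backed rows, the account-policy rows above are evaluated against the local security policy (the `[System Access]` keys in the Key Path column). One way to inspect them is to export the policy with `secedit` and parse the resulting INF file. The sketch below assumes the exported file parses cleanly with `configparser` and that it is written as UTF-16, and it must be run from an elevated prompt; it is an illustration of the expected values, not the baseline's own check.

```python
import configparser
import pathlib
import subprocess
import tempfile

# Export the effective local security policy and read the [System Access] section.
with tempfile.TemporaryDirectory() as tmp:
    inf = pathlib.Path(tmp) / "secpol.inf"
    subprocess.run(["secedit", "/export", "/cfg", str(inf)],
                   check=True, capture_output=True)
    parser = configparser.ConfigParser()
    parser.optionxform = str                 # keep key names case-sensitive
    parser.read(inf, encoding="utf-16")      # assumption: secedit writes UTF-16
    access = dict(parser["System Access"])

# Expected values copied from the table above.
checks = {
    "PasswordHistorySize": lambda v: int(v) >= 24,
    "MaximumPasswordAge": lambda v: 1 <= int(v) <= 70,
    "MinimumPasswordAge": lambda v: int(v) >= 1,
    "MinimumPasswordLength": lambda v: int(v) >= 14,
    "PasswordComplexity": lambda v: int(v) == 1,
    "ClearTextPassword": lambda v: int(v) == 0,
}

for name, ok in checks.items():
    value = access.get(name)
    status = "compliant" if value is not None and ok(value) else "review"
    print(f"{name} = {value} -> {status}")
```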
## System Audit Policies - Account Logon

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Audit Credential Validation<br /><sub>(CCE-37741-6)</sub> |**Description**: <p><span>This subcategory reports the results of validation tests on credentials submitted for a user account logon request. These events occur on the computer that is authoritative for the credentials. For domain accounts, the domain controller is authoritative, whereas for local accounts, the local computer is authoritative. In domain environments, most of the Account Logon events occur in the Security log of the domain controllers that are authoritative for the domain accounts. However, these events can occur on other computers in the organization when local accounts are used to log on. Events for this subcategory include: - 4774: An account was mapped for logon. - 4775: An account could not be mapped for logon. - 4776: The domain controller attempted to validate the credentials for an account. - 4777: The domain controller failed to validate the credentials for an account. The recommended state for this setting is: 'Success and Failure'.</span></p><br />**Key Path**: {0CCE923F-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Credential Validation<br /><sub>(CCE-37741-6)</sub> |**Description**: <p><span>This subcategory reports the results of validation tests on credentials submitted for a user account logon request. These events occur on the computer that is authoritative for the credentials. For domain accounts, the domain controller is authoritative, whereas for local accounts, the local computer is authoritative. In domain environments, most of the Account Logon events occur in the Security log of the domain controllers that are authoritative for the domain accounts. However, these events can occur on other computers in the organization when local accounts are used to log on. Events for this subcategory include: - 4774: An account was mapped for logon. - 4775: An account could not be mapped for logon. - 4776: The domain controller attempted to validate the credentials for an account. - 4777: The domain controller failed to validate the credentials for an account. The recommended state for this setting is: 'Success and Failure'.</span></p><br />**Key Path**: {0CCE923F-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
## System Audit Policies - Account Management

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Audit Other Account Management Events<br /><sub>(CCE-37855-4)</sub> |**Description**: This subcategory reports other account management events. Events for this subcategory include: ΓÇö 4782: The password hash an account was accessed. ΓÇö 4793: The Password Policy Checking API was called. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE923A-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
-|Audit Security Group Management<br /><sub>(CCE-38034-5)</sub> |**Description**: This subcategory reports each event of security group management, such as when a security group is created, changed, or deleted or when a member is added to or removed from a security group. If you enable this Audit policy setting, administrators can track events to detect malicious, accidental, and authorized creation of security group accounts. Events for this subcategory include: - 4727: A security-enabled global group was created. - 4728: A member was added to a security-enabled global group. - 4729: A member was removed from a security-enabled global group. - 4730: A security-enabled global group was deleted. - 4731: A security-enabled local group was created. - 4732: A member was added to a security-enabled local group. - 4733: A member was removed from a security-enabled local group. - 4734: A security-enabled local group was deleted. - 4735: A security-enabled local group was changed. - 4737: A security-enabled global group was changed. - 4754: A security-enabled universal group was created. - 4755: A security-enabled universal group was changed. - 4756: A member was added to a security-enabled universal group. - 4757: A member was removed from a security-enabled universal group. - 4758: A security-enabled universal group was deleted. - 4764: A group's type was changed. The recommended state for this setting is: `Success and Failure`.<br />**Key Path**: {0CCE9237-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
-|Audit User Account Management<br /><sub>(CCE-37856-2)</sub> |**Description**: This subcategory reports each event of user account management, such as when a user account is created, changed, or deleted; a user account is renamed, disabled, or enabled; or a password is set or changed. If you enable this Audit policy setting, administrators can track events to detect malicious, accidental, and authorized creation of user accounts. Events for this subcategory include: - 4720: A user account was created. - 4722: A user account was enabled. - 4723: An attempt was made to change an account's password. - 4724: An attempt was made to reset an account's password. - 4725: A user account was disabled. - 4726: A user account was deleted. - 4738: A user account was changed. - 4740: A user account was locked out. - 4765: SID History was added to an account. - 4766: An attempt to add SID History to an account failed. - 4767: A user account was unlocked. - 4780: The ACL was set on accounts which are members of administrators groups. - 4781: The name of an account was changed: - 4794: An attempt was made to set the Directory Services Restore Mode. - 5376: Credential Manager credentials were backed up. - 5377: Credential Manager credentials were restored from a backup. The recommended state for this setting is: `Success and Failure`.<br />**Key Path**: {0CCE9235-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Other Account Management Events<br /><sub>(CCE-37855-4)</sub> |**Description**: This subcategory reports other account management events. Events for this subcategory include: - 4782: The password hash of an account was accessed. - 4793: The Password Policy Checking API was called. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE923A-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit Security Group Management<br /><sub>(CCE-38034-5)</sub> |**Description**: This subcategory reports each event of security group management, such as when a security group is created, changed, or deleted or when a member is added to or removed from a security group. If you enable this Audit policy setting, administrators can track events to detect malicious, accidental, and authorized creation of security group accounts. Events for this subcategory include: - 4727: A security-enabled global group was created. - 4728: A member was added to a security-enabled global group. - 4729: A member was removed from a security-enabled global group. - 4730: A security-enabled global group was deleted. - 4731: A security-enabled local group was created. - 4732: A member was added to a security-enabled local group. - 4733: A member was removed from a security-enabled local group. - 4734: A security-enabled local group was deleted. - 4735: A security-enabled local group was changed. - 4737: A security-enabled global group was changed. - 4754: A security-enabled universal group was created. - 4755: A security-enabled universal group was changed. - 4756: A member was added to a security-enabled universal group. - 4757: A member was removed from a security-enabled universal group. - 4758: A security-enabled universal group was deleted. - 4764: A group's type was changed. The recommended state for this setting is: `Success and Failure`.<br />**Key Path**: {0CCE9237-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit User Account Management<br /><sub>(CCE-37856-2)</sub> |**Description**: This subcategory reports each event of user account management, such as when a user account is created, changed, or deleted; a user account is renamed, disabled, or enabled; or a password is set or changed. If you enable this Audit policy setting, administrators can track events to detect malicious, accidental, and authorized creation of user accounts. Events for this subcategory include: - 4720: A user account was created. - 4722: A user account was enabled. - 4723: An attempt was made to change an account's password. - 4724: An attempt was made to reset an account's password. - 4725: A user account was disabled. - 4726: A user account was deleted. - 4738: A user account was changed. - 4740: A user account was locked out. - 4765: SID History was added to an account. - 4766: An attempt to add SID History to an account failed. - 4767: A user account was unlocked. - 4780: The ACL was set on accounts which are members of administrators groups. - 4781: The name of an account was changed: - 4794: An attempt was made to set the Directory Services Restore Mode. - 5376: Credential Manager credentials were backed up. - 5377: Credential Manager credentials were restored from a backup. The recommended state for this setting is: `Success and Failure`.<br />**Key Path**: {0CCE9235-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
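The audit-policy rows in this and the following sections are keyed by subcategory GUID rather than a registry path. A minimal sketch of checking a few of them is shown below: it dumps the effective audit policy with `auditpol` in CSV form and compares the GUIDs against the expected values. The column names are those commonly produced by `auditpol /get /category:* /r`, the command needs an elevated prompt, and this is an illustration rather than the baseline's own evaluation.

```python
import csv
import io
import subprocess

# Dump the effective audit policy as CSV; requires an elevated prompt.
output = subprocess.run(
    ["auditpol", "/get", "/category:*", "/r"],
    check=True, capture_output=True, text=True,
).stdout

# Expected settings keyed by subcategory GUID, copied from the tables in this article.
expected = {
    "{0CCE923F-69AE-11D9-BED3-505054503030}": "Success and Failure",  # Credential Validation
    "{0CCE9237-69AE-11D9-BED3-505054503030}": "Success",              # Security Group Management (>= Success)
    "{0CCE9235-69AE-11D9-BED3-505054503030}": "Success and Failure",  # User Account Management
}

for row in csv.DictReader(io.StringIO(output.lstrip())):
    guid = row.get("Subcategory GUID", "").upper()
    if guid in expected:
        print(f'{row["Subcategory"]}: {row["Inclusion Setting"]} (expected {expected[guid]})')
```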
## System Audit Policies - Detailed Tracking

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Audit PNP Activity<br /><sub>(AZ-WIN-00182)</sub> |**Description**: This policy setting allows you to audit when plug and play detects an external device. The recommended state for this setting is: `Success`. **Note:** A Windows 10, Server 2016 or higher OS is required to access and set this value in Group Policy.<br />**Key Path**: {0CCE9248-69AE-11D9-BED3-505054503030}<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
-|Audit Process Creation<br /><sub>(CCE-36059-4)</sub> |**Description**: This subcategory reports the creation of a process and the name of the program or user that created it. Events for this subcategory include: - 4688: A new process has been created. - 4696: A primary token was assigned to process. The recommended state for this setting is: `Success`.<br />**Key Path**: {0CCE922B-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit PNP Activity<br /><sub>(AZ-WIN-00182)</sub> |**Description**: This policy setting allows you to audit when plug and play detects an external device. The recommended state for this setting is: `Success`. **Note:** A Windows 10, Server 2016 or higher OS is required to access and set this value in Group Policy.<br />**Key Path**: {0CCE9248-69AE-11D9-BED3-505054503030}<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit Process Creation<br /><sub>(CCE-36059-4)</sub> |**Description**: This subcategory reports the creation of a process and the name of the program or user that created it. Events for this subcategory include: - 4688: A new process has been created. - 4696: A primary token was assigned to process. Refer to Microsoft Knowledge Base article 947226: [Description of security events in Windows Vista and in Windows Server 2008](https://support.microsoft.com/en-us/kb/947226) for the most recent information about this setting. The recommended state for this setting is: `Success`.<br />**Key Path**: {0CCE922B-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
## System Audit Policies - Logon-Logoff

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Audit Account Lockout<br /><sub>(CCE-37133-6)</sub> |**Description**: This subcategory reports when a user's account is locked out as a result of too many failed logon attempts. Events for this subcategory include: ΓÇö 4625: An account failed to log on. Refer to the Microsoft Knowledgebase article 'Description of security events in Windows Vista and in Windows Server 2008' for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9217-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
-|Audit Group Membership<br /><sub>(AZ-WIN-00026)</sub> |**Description**: Audit Group Membership enables you to audit group memberships when they are enumerated on the client computer. This policy allows you to audit the group membership information in the user's logon token. Events in this subcategory are generated on the computer on which a logon session is created. For an interactive logon, the security audit event is generated on the computer that the user logged on to. For a network logon, such as accessing a shared folder on the network, the security audit event is generated on the computer hosting the resource. You must also enable the Audit Logon subcategory. Multiple events are generated if the group membership information cannot fit in a single security audit event. The events that are audited include the following: - 4627(S): Group membership information.<br />**Key Path**: {0CCE9249-69AE-11D9-BED3-505054503030}<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
-|Audit Logoff<br /><sub>(CCE-38237-4)</sub> |**Description**: <p><span>This subcategory reports when a user logs off from the system. These events occur on the accessed computer. For interactive logons, the generation of these events occurs on the computer that is logged on to. If a network logon takes place to access a share, these events generate on the computer that hosts the accessed resource. If you configure this setting to No auditing, it is difficult or impossible to determine which user has accessed or attempted to access organization computers. Events for this subcategory include: - 4634: An account was logged off. - 4647: User initiated logoff. The recommended state for this setting is: 'Success'.</span></p><br />**Key Path**: {0CCE9216-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
-|Audit Logon<br /><sub>(CCE-38036-0)</sub> |**Description**: <p><span>This subcategory reports when a user attempts to log on to the system. These events occur on the accessed computer. For interactive logons, the generation of these events occurs on the computer that is logged on to. If a network logon takes place to access a share, these events generate on the computer that hosts the accessed resource. If you configure this setting to No auditing, it is difficult or impossible to determine which user has accessed or attempted to access organization computers. Events for this subcategory include: - 4624: An account was successfully logged on. - 4625: An account failed to log on. - 4648: A logon was attempted using explicit credentials. - 4675: SIDs were filtered. The recommended state for this setting is: 'Success and Failure'.</span></p><br />**Key Path**: {0CCE9215-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
-|Audit Other Logon/Logoff Events<br /><sub>(CCE-36322-6)</sub> |**Description**: This subcategory reports other logon/logoff-related events, such as Terminal Services session disconnects and reconnects, using RunAs to run processes under a different account, and locking and unlocking a workstation. Events for this subcategory include: ΓÇö 4649: A replay attack was detected. ΓÇö 4778: A session was reconnected to a Window Station. ΓÇö 4779: A session was disconnected from a Window Station. ΓÇö 4800: The workstation was locked. ΓÇö 4801: The workstation was unlocked. ΓÇö 4802: The screen saver was invoked. ΓÇö 4803: The screen saver was dismissed. ΓÇö 5378: The requested credentials delegation was disallowed by policy. ΓÇö 5632: A request was made to authenticate to a wireless network. ΓÇö 5633: A request was made to authenticate to a wired network. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE921C-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
-|Audit Special Logon<br /><sub>(CCE-36266-5)</sub> |**Description**: This subcategory reports when a special logon is used. A special logon is a logon that has administrator-equivalent privileges and can be used to elevate a process to a higher level. Events for this subcategory include: - 4964 : Special groups have been assigned to a new logon. The recommended state for this setting is: `Success`.<br />**Key Path**: {0CCE921B-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit Account Lockout<br /><sub>(CCE-37133-6)</sub> |**Description**: This subcategory reports when a user's account is locked out as a result of too many failed logon attempts. Events for this subcategory include: - 4625: An account failed to log on. Refer to the Microsoft Knowledgebase article 'Description of security events in Windows Vista and in Windows Server 2008' for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9217-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Group Membership<br /><sub>(AZ-WIN-00026)</sub> |**Description**: Audit Group Membership enables you to audit group memberships when they are enumerated on the client computer. This policy allows you to audit the group membership information in the user's logon token. Events in this subcategory are generated on the computer on which a logon session is created. For an interactive logon, the security audit event is generated on the computer that the user logged on to. For a network logon, such as accessing a shared folder on the network, the security audit event is generated on the computer hosting the resource. You must also enable the Audit Logon subcategory. Multiple events are generated if the group membership information cannot fit in a single security audit event. The events that are audited include the following: - 4627(S): Group membership information.<br />**Key Path**: {0CCE9249-69AE-11D9-BED3-505054503030}<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit Logoff<br /><sub>(CCE-38237-4)</sub> |**Description**: <p><span>This subcategory reports when a user logs off from the system. These events occur on the accessed computer. For interactive logons, the generation of these events occurs on the computer that is logged on to. If a network logon takes place to access a share, these events generate on the computer that hosts the accessed resource. If you configure this setting to No auditing, it is difficult or impossible to determine which user has accessed or attempted to access organization computers. Events for this subcategory include: - 4634: An account was logged off. - 4647: User initiated logoff. The recommended state for this setting is: 'Success'.</span></p><br />**Key Path**: {0CCE9216-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit Logon<br /><sub>(CCE-38036-0)</sub> |**Description**: <p><span>This subcategory reports when a user attempts to log on to the system. These events occur on the accessed computer. For interactive logons, the generation of these events occurs on the computer that is logged on to. If a network logon takes place to access a share, these events generate on the computer that hosts the accessed resource. If you configure this setting to No auditing, it is difficult or impossible to determine which user has accessed or attempted to access organization computers. Events for this subcategory include: - 4624: An account was successfully logged on. - 4625: An account failed to log on. - 4648: A logon was attempted using explicit credentials. - 4675: SIDs were filtered. The recommended state for this setting is: 'Success and Failure'.</span></p><br />**Key Path**: {0CCE9215-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Other Logon/Logoff Events<br /><sub>(CCE-36322-6)</sub> |**Description**: This subcategory reports other logon/logoff-related events, such as Terminal Services session disconnects and reconnects, using RunAs to run processes under a different account, and locking and unlocking a workstation. Events for this subcategory include: - 4649: A replay attack was detected. - 4778: A session was reconnected to a Window Station. - 4779: A session was disconnected from a Window Station. - 4800: The workstation was locked. - 4801: The workstation was unlocked. - 4802: The screen saver was invoked. - 4803: The screen saver was dismissed. - 5378: The requested credentials delegation was disallowed by policy. - 5632: A request was made to authenticate to a wireless network. - 5633: A request was made to authenticate to a wired network. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE921C-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Special Logon<br /><sub>(CCE-36266-5)</sub> |**Description**: This subcategory reports when a special logon is used. A special logon is a logon that has administrator-equivalent privileges and can be used to elevate a process to a higher level. Events for this subcategory include: - 4964 : Special groups have been assigned to a new logon. The recommended state for this setting is: `Success`.<br />**Key Path**: {0CCE921B-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
## System Audit Policies - Object Access

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Audit Other Object Access Events<br /><sub>(AZ-WIN-00113)</sub> |**Description**: This subcategory reports other object access-related events such as Task Scheduler jobs and COM+ objects. Events for this subcategory include: ΓÇö 4671: An application attempted to access a blocked ordinal through the TBS. ΓÇö 4691: Indirect access to an object was requested. ΓÇö 4698: A scheduled task was created. ΓÇö 4699: A scheduled task was deleted. ΓÇö 4700: A scheduled task was enabled. ΓÇö 4701: A scheduled task was disabled. ΓÇö 4702: A scheduled task was updated. ΓÇö 5888: An object in the COM+ Catalog was modified. ΓÇö 5889: An object was deleted from the COM+ Catalog. ΓÇö 5890: An object was added to the COM+ Catalog. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9227-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
-|Audit Removable Storage<br /><sub>(CCE-37617-8)</sub> |**Description**: This policy setting allows you to audit user attempts to access file system objects on a removable storage device. A security audit event is generated only for all objects for all types of access requested. If you configure this policy setting, an audit event is generated each time an account accesses a file system object on a removable storage. Success audits record successful attempts and Failure audits record unsuccessful attempts. If you do not configure this policy setting, no audit event is generated when an account accesses a file system object on a removable storage. The recommended state for this setting is: `Success and Failure`. **Note:** A Windows 8, Server 2012 (non-R2) or higher OS is required to access and set this value in Group Policy.<br />**Key Path**: {0CCE9245-69AE-11D9-BED3-505054503030}<br />**OS**: WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Other Object Access Events<br /><sub>(AZ-WIN-00113)</sub> |**Description**: This subcategory reports other object access-related events such as Task Scheduler jobs and COM+ objects. Events for this subcategory include: - 4671: An application attempted to access a blocked ordinal through the TBS. - 4691: Indirect access to an object was requested. - 4698: A scheduled task was created. - 4699: A scheduled task was deleted. - 4700: A scheduled task was enabled. - 4701: A scheduled task was disabled. - 4702: A scheduled task was updated. - 5888: An object in the COM+ Catalog was modified. - 5889: An object was deleted from the COM+ Catalog. - 5890: An object was added to the COM+ Catalog. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9227-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Removable Storage<br /><sub>(CCE-37617-8)</sub> |**Description**: This policy setting allows you to audit user attempts to access file system objects on a removable storage device. A security audit event is generated only for all objects for all types of access requested. If you configure this policy setting, an audit event is generated each time an account accesses a file system object on a removable storage. Success audits record successful attempts and Failure audits record unsuccessful attempts. If you do not configure this policy setting, no audit event is generated when an account accesses a file system object on a removable storage. The recommended state for this setting is: `Success and Failure`. **Note:** A Windows 8, Server 2012 (non-R2) or higher OS is required to access and set this value in Group Policy.<br />**Key Path**: {0CCE9245-69AE-11D9-BED3-505054503030}<br />**OS**: WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
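Where a subcategory such as "Audit Removable Storage" is found to differ from the expected value, the same command-line tool can also set it. The snippet below is a hedged sketch of one remediation call; subcategory display names are locale-dependent, so on non-English systems you may need the GUID form shown in the Key Path column instead, and the command must run elevated.

```python
import subprocess

# Enable success and failure auditing for the "Removable Storage" subcategory,
# matching the expected value in the table above. Run from an elevated prompt.
# On Windows, a plain command string is handed to CreateProcess unchanged.
subprocess.run(
    'auditpol /set /subcategory:"Removable Storage" /success:enable /failure:enable',
    check=True,
)
```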
## System Audit Policies - Policy Change

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Audit Authentication Policy Change<br /><sub>(CCE-38327-3)</sub> |**Description**: This subcategory reports changes in authentication policy. Events for this subcategory include: - 4706: A new trust was created to a domain. - 4707: A trust to a domain was removed. - 4713: Kerberos policy was changed. - 4716: Trusted domain information was modified. - 4717: System security access was granted to an account. - 4718: System security access was removed from an account. - 4739: Domain Policy was changed. - 4864: A namespace collision was detected. - 4865: A trusted forest information entry was added. - 4866: A trusted forest information entry was removed. - 4867: A trusted forest information entry was modified. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9230-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
-|Audit MPSSVC Rule-Level Policy Change<br /><sub>(AZ-WIN-00111)</sub> |**Description**: This subcategory reports changes in policy rules used by the Microsoft Protection Service (MPSSVC.exe). This service is used by Windows Firewall and by Microsoft OneCare. Events for this subcategory include: - 4944: The following policy was active when the Windows Firewall started. - 4945: A rule was listed when the Windows Firewall started. - 4946: A change has been made to Windows Firewall exception list. A rule was added. - 4947: A change has been made to Windows Firewall exception list. A rule was modified. - 4948: A change has been made to Windows Firewall exception list. A rule was deleted. - 4949: Windows Firewall settings were restored to the default values. - 4950: A Windows Firewall setting has changed. - 4951: A rule has been ignored because its major version number was not recognized by Windows Firewall. - 4952: Parts of a rule have been ignored because its minor version number was not recognized by Windows Firewall. The other parts of the rule will be enforced. - 4953: A rule has been ignored by Windows Firewall because it could not parse the rule. - 4954: Windows Firewall Group Policy settings have changed. The new settings have been applied. - 4956: Windows Firewall has changed the active profile. - 4957: Windows Firewall did not apply the following rule: - 4958: Windows Firewall did not apply the following rule because the rule referred to items not configured on this computer: Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9232-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
-|Audit Policy Change<br /><sub>(CCE-38028-7)</sub> |**Description**: This subcategory reports changes in audit policy including SACL changes. Events for this subcategory include: - 4715: The audit policy (SACL) on an object was changed. - 4719: System audit policy was changed. - 4902: The Per-user audit policy table was created. - 4904: An attempt was made to register a security event source. - 4905: An attempt was made to unregister a security event source. - 4906: The CrashOnAuditFail value has changed. - 4907: Auditing settings on object were changed. - 4908: Special Groups Logon table modified. - 4912: Per User Audit Policy was changed. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE922F-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit Authentication Policy Change<br /><sub>(CCE-38327-3)</sub> |**Description**: This subcategory reports changes in authentication policy. Events for this subcategory include: - 4706: A new trust was created to a domain. - 4707: A trust to a domain was removed. - 4713: Kerberos policy was changed. - 4716: Trusted domain information was modified. - 4717: System security access was granted to an account. - 4718: System security access was removed from an account. - 4739: Domain Policy was changed. - 4864: A namespace collision was detected. - 4865: A trusted forest information entry was added. - 4866: A trusted forest information entry was removed. - 4867: A trusted forest information entry was modified. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9230-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit MPSSVC Rule-Level Policy Change<br /><sub>(AZ-WIN-00111)</sub> |**Description**: This subcategory reports changes in policy rules used by the Microsoft Protection Service (MPSSVC.exe). This service is used by Windows Firewall and by Microsoft OneCare. Events for this subcategory include: - 4944: The following policy was active when the Windows Firewall started. - 4945: A rule was listed when the Windows Firewall started. - 4946: A change has been made to Windows Firewall exception list. A rule was added. - 4947: A change has been made to Windows Firewall exception list. A rule was modified. - 4948: A change has been made to Windows Firewall exception list. A rule was deleted. - 4949: Windows Firewall settings were restored to the default values. - 4950: A Windows Firewall setting has changed. - 4951: A rule has been ignored because its major version number was not recognized by Windows Firewall. - 4952: Parts of a rule have been ignored because its minor version number was not recognized by Windows Firewall. The other parts of the rule will be enforced. - 4953: A rule has been ignored by Windows Firewall because it could not parse the rule. - 4954: Windows Firewall Group Policy settings have changed. The new settings have been applied. - 4956: Windows Firewall has changed the active profile. - 4957: Windows Firewall did not apply the following rule: - 4958: Windows Firewall did not apply the following rule because the rule referred to items not configured on this computer: Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9232-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Policy Change<br /><sub>(CCE-38028-7)</sub> |**Description**: This subcategory reports changes in audit policy including SACL changes. Events for this subcategory include: - 4715: The audit policy (SACL) on an object was changed. - 4719: System audit policy was changed. - 4902: The Per-user audit policy table was created. - 4904: An attempt was made to register a security event source. - 4905: An attempt was made to unregister a security event source. - 4906: The CrashOnAuditFail value has changed. - 4907: Auditing settings on object were changed. - 4908: Special Groups Logon table modified. - 4912: Per User Audit Policy was changed. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE922F-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
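Where a subcategory does not match its expected value, `auditpol /set` can bring it into line. The following is a minimal sketch, assuming an elevated prompt and using two subcategories from the table above as examples; on domain-joined servers the equivalent Advanced Audit Policy Configuration setting in Group Policy is normally preferred so the value is not reverted at the next policy refresh.

```powershell
# Enforce "Success and Failure" for MPSSVC Rule-Level Policy Change.
auditpol /set /subcategory:"MPSSVC Rule-Level Policy Change" /success:enable /failure:enable

# Subcategories whose expected value is ">= Success" only need success auditing, for example:
auditpol /set /subcategory:"Authentication Policy Change" /success:enable
```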
## System Audit Policies - Privilege Use

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Audit Sensitive Privilege Use<br /><sub>(CCE-36267-3)</sub> |**Description**: This subcategory reports when a user account or service uses a sensitive privilege. A sensitive privilege includes the following user rights: Act as part of the operating system, Backup files and directories, Create a token object, Debug programs, Enable computer and user accounts to be trusted for delegation, Generate security audits, Impersonate a client after authentication, Load and unload device drivers, Manage auditing and security log, Modify firmware environment values, Replace a process-level token, Restore files and directories, and Take ownership of files or other objects. Auditing this subcategory will create a high volume of events. Events for this subcategory include: - 4672: Special privileges assigned to new logon. - 4673: A privileged service was called. - 4674: An operation was attempted on a privileged object. Refer to the Microsoft Knowledgebase article 'Description of security events in Windows Vista and in Windows Server 2008' for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9228-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Sensitive Privilege Use<br /><sub>(CCE-36267-3)</sub> |**Description**: This subcategory reports when a user account or service uses a sensitive privilege. A sensitive privilege includes the following user rights: Act as part of the operating system, Backup files and directories, Create a token object, Debug programs, Enable computer and user accounts to be trusted for delegation, Generate security audits, Impersonate a client after authentication, Load and unload device drivers, Manage auditing and security log, Modify firmware environment values, Replace a process-level token, Restore files and directories, and Take ownership of files or other objects. Auditing this subcategory will create a high volume of events. Events for this subcategory include: - 4672: Special privileges assigned to new logon. - 4673: A privileged service was called. - 4674: An operation was attempted on a privileged object. Refer to the Microsoft Knowledgebase article 'Description of security events in Windows Vista and in Windows Server 2008' for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9228-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
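Because the description above notes that *Audit Sensitive Privilege Use* creates a high volume of events, it can be useful to sample the Security log after enabling it to gauge the impact. The following is a minimal sketch using `Get-WinEvent`; the event IDs (4672, 4673, 4674) come from the description above, and the one-hour window is illustrative.

```powershell
# Count privilege-use events (IDs 4672-4674) recorded in the last hour.
$filter = @{
    LogName   = 'Security'
    Id        = 4672, 4673, 4674
    StartTime = (Get-Date).AddHours(-1)
}
(Get-WinEvent -FilterHashtable $filter -ErrorAction SilentlyContinue | Measure-Object).Count
```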
## System Audit Policies - System

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Audit Security State Change<br /><sub>(CCE-38114-5)</sub> |**Description**: This subcategory reports changes in security state of the system, such as when the security subsystem starts and stops. Events for this subcategory include: - 4608: Windows is starting up. - 4609: Windows is shutting down. - 4616: The system time was changed. - 4621: Administrator recovered system from CrashOnAuditFail. Users who are not administrators will now be allowed to log on. Some auditable activity might not have been recorded. Refer to the Microsoft Knowledgebase article 'Description of security events in Windows Vista and in Windows Server 2008' for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9210-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
-|Audit Security System Extension<br /><sub>(CCE-36144-4)</sub> |**Description**: This subcategory reports the loading of extension code such as authentication packages by the security subsystem. Events for this subcategory include: - 4610: An authentication package has been loaded by the Local Security Authority. - 4611: A trusted logon process has been registered with the Local Security Authority. - 4614: A notification package has been loaded by the Security Account Manager. - 4622: A security package has been loaded by the Local Security Authority. - 4697: A service was installed in the system. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9211-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
-|Audit System Integrity<br /><sub>(CCE-37132-8)</sub> |**Description**: This subcategory reports on violations of integrity of the security subsystem. Events for this subcategory include: - 4612: Internal resources allocated for the queuing of audit messages have been exhausted, leading to the loss of some audits. - 4615: Invalid use of LPC port. - 4618: A monitored security event pattern has occurred. - 4816: RPC detected an integrity violation while decrypting an incoming message. - 5038: Code integrity determined that the image hash of a file is not valid. The file could be corrupt due to unauthorized modification or the invalid hash could indicate a potential disk device error. - 5056: A cryptographic self-test was performed. - 5057: A cryptographic primitive operation failed. - 5060: Verification operation failed. - 5061: Cryptographic operation. - 5062: A kernel-mode cryptographic self-test was performed. Refer to the Microsoft Knowledgebase article 'Description of security events in Windows Vista and in Windows Server 2008' for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9212-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Security State Change<br /><sub>(CCE-38114-5)</sub> |**Description**: This subcategory reports changes in security state of the system, such as when the security subsystem starts and stops. Events for this subcategory include: - 4608: Windows is starting up. - 4609: Windows is shutting down. - 4616: The system time was changed. - 4621: Administrator recovered system from CrashOnAuditFail. Users who are not administrators will now be allowed to log on. Some auditable activity might not have been recorded. Refer to the Microsoft Knowledgebase article 'Description of security events in Windows Vista and in Windows Server 2008' for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9210-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit Security System Extension<br /><sub>(CCE-36144-4)</sub> |**Description**: This subcategory reports the loading of extension code such as authentication packages by the security subsystem. Events for this subcategory include: - 4610: An authentication package has been loaded by the Local Security Authority. - 4611: A trusted logon process has been registered with the Local Security Authority. - 4614: A notification package has been loaded by the Security Account Manager. - 4622: A security package has been loaded by the Local Security Authority. - 4697: A service was installed in the system. Refer to the Microsoft Knowledgebase article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9211-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit System Integrity<br /><sub>(CCE-37132-8)</sub> |**Description**: This subcategory reports on violations of integrity of the security subsystem. Events for this subcategory include: - 4612: Internal resources allocated for the queuing of audit messages have been exhausted, leading to the loss of some audits. - 4615: Invalid use of LPC port. - 4618: A monitored security event pattern has occurred. - 4816: RPC detected an integrity violation while decrypting an incoming message. - 5038: Code integrity determined that the image hash of a file is not valid. The file could be corrupt due to unauthorized modification or the invalid hash could indicate a potential disk device error. - 5056: A cryptographic self-test was performed. - 5057: A cryptographic primitive operation failed. - 5060: Verification operation failed. - 5061: Cryptographic operation. - 5062: A kernel-mode cryptographic self-test was performed. Refer to the Microsoft Knowledgebase article 'Description of security events in Windows Vista and in Windows Server 2008' for the most recent information about this setting: https://support.microsoft.com/topic/ms16-014-description-of-the-security-update-for-windows-vista-windows-server-2008-windows-7-windows-server-2008-r2-windows-server-2012-windows-8-1-and-windows-server-2012-r2-february-9-2016-1ff344d3-cd1c-cdbd-15b4-9344c7a7e6bd.<br />**Key Path**: {0CCE9212-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
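Once the subcategories above are configured as expected, the complete effective audit policy can be captured for later comparison or for reapplying to a rebuilt server. The following is a minimal sketch using `auditpol`'s backup and restore options, assuming an elevated prompt; the file path is illustrative.

```powershell
# Capture the current advanced audit policy (all categories and subcategories) to CSV.
auditpol /backup /file:C:\Temp\audit-baseline.csv

# Reapply the captured policy on the same or another server.
auditpol /restore /file:C:\Temp\audit-baseline.csv
```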
## User Rights Assignment

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|---|---|---|---|
-|Access Credential Manager as a trusted caller<br /><sub>(CCE-37056-9)</sub> |**Description**: This security setting is used by Credential Manager during Backup and Restore. No accounts should have this user right, as it is only assigned to Winlogon. Users' saved credentials might be compromised if this user right is assigned to other entities. The recommended state for this setting is: `No One`.<br />**Key Path**: [Privilege Rights]SeTrustedCredManAccessPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Warning |
-|Access this computer from the network<br /><sub>(CCE-35818-4)</sub> |**Description**: <p><span>This policy setting allows other users on the network to connect to the computer and is required by various network protocols that include Server Message Block (SMB) based protocols, NetBIOS, Common Internet File System (CIFS), and Component Object Model Plus (COM+). - *Level 1 - Domain Controller.* The recommended state for this setting is: 'Administrators, Authenticated Users, ENTERPRISE DOMAIN CONTROLLERS'. - *Level 1 - Member Server.* The recommended state for this setting is: 'Administrators, Authenticated Users'.</span></p><br />**Key Path**: [Privilege Rights]SeNetworkLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Authenticated Users<br /><sub>(Policy)</sub> |Critical |
-|Act as part of the operating system<br /><sub>(CCE-36876-1)</sub> |**Description**: This policy setting allows a process to assume the identity of any user and thus gain access to the resources that the user is authorized to access. The recommended state for this setting is: `No One`.<br />**Key Path**: [Privilege Rights]SeTcbPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Critical |
-|Allow log on locally<br /><sub>(CCE-37659-0)</sub> |**Description**: This policy setting determines which users can interactively log on to computers in your environment. Logons that are initiated by pressing the CTRL+ALT+DEL key sequence on the client computer keyboard require this user right. Users who attempt to log on through Terminal Services or IIS also require this user right. The Guest account is assigned this user right by default. Although this account is disabled by default, Microsoft recommends that you enable this setting through Group Policy. However, this user right should generally be restricted to the Administrators and Users groups. Assign this user right to the Backup Operators group if your organization requires that they have this capability. When configuring a user right in the SCM enter a comma delimited list of accounts. Accounts can be either local or located in Active Directory, they can be groups, users, or computers.<br />**Key Path**: [Privilege Rights]SeInteractiveLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical |
-|Allow log on through Remote Desktop Services<br /><sub>(CCE-37072-6)</sub> |**Description**: <p><span>This policy setting determines which users or groups have the right to log on as a Terminal Services client. Remote desktop users require this user right. If your organization uses Remote Assistance as part of its help desk strategy, create a group and assign it this user right through Group Policy. If the help desk in your organization does not use Remote Assistance, assign this user right only to the Administrators group or use the restricted groups feature to ensure that no user accounts are part of the Remote Desktop Users group. Restrict this user right to the Administrators group, and possibly the Remote Desktop Users group, to prevent unwanted users from gaining access to computers on your network by means of the Remote Assistance feature. - **Level 1 - Domain Controller.** The recommended state for this setting is: 'Administrators'. - **Level 1 - Member Server.** The recommended state for this setting is: 'Administrators, Remote Desktop Users'. **Note:** A Member Server that holds the _Remote Desktop Services_ Role with _Remote Desktop Connection Broker_ Role Service will require a special exception to this recommendation, to allow the 'Authenticated Users' group to be granted this user right. **Note 2:** The above lists are to be treated as allowlists, which implies that the above principals need not be present for assessment of this recommendation to pass.</span></p><br />**Key Path**: [Privilege Rights]SeRemoteInteractiveLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, Remote Desktop Users<br /><sub>(Policy)</sub> |Critical |
-|Back up files and directories<br /><sub>(CCE-35912-5)</sub> |**Description**: This policy setting allows users to circumvent file and directory permissions to backup the system. This user right is enabled only when an application (such as NTBACKUP) attempts to access a file or directory through the NTFS file system backup application programming interface (API). Otherwise, the assigned file and directory permissions apply. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeBackupPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Backup Operators, Server Operators<br /><sub>(Policy)</sub> |Critical |
-|Bypass traverse checking<br /><sub>(AZ-WIN-00184)</sub> |**Description**: This policy setting allows users who do not have the Traverse Folder access permission to pass through folders when they browse an object path in the NTFS file system or the registry. This user right does not allow users to list the contents of a folder. When configuring a user right in the SCM enter a comma delimited list of accounts. Accounts can be either local or located in Active Directory, they can be groups, users, or computers.<br />**Key Path**: [Privilege Rights]SeChangeNotifyPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Authenticated Users, Backup Operators, Local Service, Network Service<br /><sub>(Policy)</sub> |Critical |
-|Change the system time<br /><sub>(CCE-37452-0)</sub> |**Description**: This policy setting determines which users and groups can change the time and date on the internal clock of the computers in your environment. Users who are assigned this user right can affect the appearance of event logs. When a computer's time setting is changed, logged events reflect the new time, not the actual time that the events occurred. When configuring a user right in the SCM enter a comma delimited list of accounts. Accounts can be either local or located in Active Directory, they can be groups, users, or computers. **Note:** Discrepancies between the time on the local computer and on the domain controllers in your environment may cause problems for the Kerberos authentication protocol, which could make it impossible for users to log on to the domain or obtain authorization to access domain resources after they are logged on. Also, problems will occur when Group Policy is applied to client computers if the system time is not synchronized with the domain controllers. The recommended state for this setting is: `Administrators, LOCAL SERVICE`.<br />**Key Path**: [Privilege Rights]SeSystemtimePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Server Operators, LOCAL SERVICE<br /><sub>(Policy)</sub> |Critical |
-|Change the time zone<br /><sub>(CCE-37700-2)</sub> |**Description**: This setting determines which users can change the time zone of the computer. This ability holds no great danger for the computer and may be useful for mobile workers. The recommended state for this setting is: `Administrators, LOCAL SERVICE`.<br />**Key Path**: [Privilege Rights]SeTimeZonePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, LOCAL SERVICE<br /><sub>(Policy)</sub> |Critical |
-|Create a pagefile<br /><sub>(CCE-35821-8)</sub> |**Description**: This policy setting allows users to change the size of the pagefile. By making the pagefile extremely large or extremely small, an attacker could easily affect the performance of a compromised computer. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeCreatePagefilePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical |
-|Create a token object<br /><sub>(CCE-36861-3)</sub> |**Description**: This policy setting allows a process to create an access token, which may provide elevated rights to access sensitive data. The recommended state for this setting is: `No One`.<br />**Key Path**: [Privilege Rights]SeCreateTokenPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Warning |
-|Create global objects<br /><sub>(CCE-37453-8)</sub> |**Description**: This policy setting determines whether users can create global objects that are available to all sessions. Users can still create objects that are specific to their own session if they do not have this user right. Users who can create global objects could affect processes that run under other users' sessions. This capability could lead to a variety of problems, such as application failure or data corruption. The recommended state for this setting is: `Administrators, LOCAL SERVICE, NETWORK SERVICE, SERVICE`. **Note:** A Member Server with Microsoft SQL Server _and_ its optional "Integration Services" component installed will require a special exception to this recommendation for additional SQL-generated entries to be granted this user right.<br />**Key Path**: [Privilege Rights]SeCreateGlobalPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, SERVICE, LOCAL SERVICE, NETWORK SERVICE<br /><sub>(Policy)</sub> |Warning |
-|Create permanent shared objects<br /><sub>(CCE-36532-0)</sub> |**Description**: This user right is useful to kernel-mode components that extend the object namespace. However, components that run in kernel mode have this user right inherently. Therefore, it is typically not necessary to specifically assign this user right. The recommended state for this setting is: `No One`.<br />**Key Path**: [Privilege Rights]SeCreatePermanentPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Warning |
-|Create symbolic links<br /><sub>(CCE-35823-4)</sub> |**Description**: <p><span>This policy setting determines which users can create symbolic links. In Windows Vista, existing NTFS file system objects, such as files and folders, can be accessed by referring to a new kind of file system object called a symbolic link. A symbolic link is a pointer (much like a shortcut or .lnk file) to another file system object, which can be a file, folder, shortcut or another symbolic link. The difference between a shortcut and a symbolic link is that a shortcut only works from within the Windows shell. To other programs and applications, shortcuts are just another file, whereas with symbolic links, the concept of a shortcut is implemented as a feature of the NTFS file system. Symbolic links can potentially expose security vulnerabilities in applications that are not designed to use them. For this reason, the privilege for creating symbolic links should only be assigned to trusted users. By default, only Administrators can create symbolic links. - **Level 1 - Domain Controller.** The recommended state for this setting is: 'Administrators'. - **Level 1 - Member Server.** The recommended state for this setting is: 'Administrators' and (when the _Hyper-V_ Role is installed) 'NT VIRTUAL MACHINE\Virtual Machines'.</span></p><br />**Key Path**: [Privilege Rights]SeCreateSymbolicLinkPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, NT VIRTUAL MACHINE\Virtual Machines<br /><sub>(Policy)</sub> |Critical |
-|Deny access to this computer from the network<br /><sub>(CCE-37954-5)</sub> |**Description**: <p><span>This policy setting prohibits users from connecting to a computer from across the network, which would allow users to access and potentially modify data remotely. In high security environments, there should be no need for remote users to access data on a computer. Instead, file sharing should be accomplished through the use of network servers. - **Level 1 - Domain Controller.** The recommended state for this setting is to include: 'Guests, Local account'. - **Level 1 - Member Server.** The recommended state for this setting is to include: 'Guests, Local account and member of Administrators group'. **Caution:** Configuring a standalone (non-domain-joined) server as described above may result in an inability to remotely administer the server. **Note:** Configuring a member server or standalone server as described above may adversely affect applications that create a local service account and place it in the Administrators group - in which case you must either convert the application to use a domain-hosted service account, or remove Local account and member of Administrators group from this User Right Assignment. Using a domain-hosted service account is strongly preferred over making an exception to this rule, where possible.</span></p><br />**Key Path**: [Privilege Rights]SeDenyNetworkLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Guests<br /><sub>(Policy)</sub> |Critical |
-|Deny log on as a batch job<br /><sub>(CCE-36923-1)</sub> |**Description**: This policy setting determines which accounts will not be able to log on to the computer as a batch job. A batch job is not a batch (.bat) file, but rather a batch-queue facility. Accounts that use the Task Scheduler to schedule jobs need this user right. The **Deny log on as a batch job** user right overrides the **Log on as a batch job** user right, which could be used to allow accounts to schedule jobs that consume excessive system resources. Such an occurrence could cause a DoS condition. Failure to assign this user right to the recommended accounts can be a security risk. The recommended state for this setting is to include: `Guests`.<br />**Key Path**: [Privilege Rights]SeDenyBatchLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Guests<br /><sub>(Policy)</sub> |Critical |
-|Deny log on as a service<br /><sub>(CCE-36877-9)</sub> |**Description**: This security setting determines which service accounts are prevented from registering a process as a service. This policy setting supersedes the **Log on as a service** policy setting if an account is subject to both policies. The recommended state for this setting is to include: `Guests`. **Note:** This security setting does not apply to the System, Local Service, or Network Service accounts.<br />**Key Path**: [Privilege Rights]SeDenyServiceLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Guests<br /><sub>(Policy)</sub> |Critical |
-|Deny log on locally<br /><sub>(CCE-37146-8)</sub> |**Description**: This security setting determines which users are prevented from logging on at the computer. This policy setting supersedes the **Allow log on locally** policy setting if an account is subject to both policies. **Important:** If you apply this security policy to the Everyone group, no one will be able to log on locally. The recommended state for this setting is to include: `Guests`.<br />**Key Path**: [Privilege Rights]SeDenyInteractiveLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Guests<br /><sub>(Policy)</sub> |Critical |
-|Deny log on through Remote Desktop Services<br /><sub>(CCE-36867-0)</sub> |**Description**: This policy setting determines whether users can log on as Terminal Services clients. After the baseline member server is joined to a domain environment, there is no need to use local accounts to access the server from the network. Domain accounts can access the server for administration and end-user processing. The recommended state for this setting is to include: `Guests, Local account`. **Caution:** Configuring a standalone (non-domain-joined) server as described above may result in an inability to remotely administer the server.<br />**Key Path**: [Privilege Rights]SeDenyRemoteInteractiveLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Workgroup Member |\>\= Guests<br /><sub>(Policy)</sub> |Critical |
-|Enable computer and user accounts to be trusted for delegation<br /><sub>(CCE-36860-5)</sub> |**Description**: <p><span>This policy setting allows users to change the Trusted for Delegation setting on a computer object in Active Directory. Abuse of this privilege could allow unauthorized users to impersonate other users on the network. - **Level 1 - Domain Controller.** The recommended state for this setting is: 'Administrators' - **Level 1 - Member Server.** The recommended state for this setting is: 'No One'.</span></p><br />**Key Path**: [Privilege Rights]SeEnableDelegationPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Critical |
-|Force shutdown from a remote system<br /><sub>(CCE-37877-8)</sub> |**Description**: This policy setting allows users to shut down Windows Vista-based computers from remote locations on the network. Anyone who has been assigned this user right can cause a denial of service (DoS) condition, which would make the computer unavailable to service user requests. Therefore, it is recommended that only highly trusted administrators be assigned this user right. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeRemoteShutdownPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical |
-|Generate security audits<br /><sub>(CCE-37639-2)</sub> |**Description**: This policy setting determines which users or processes can generate audit records in the Security log. The recommended state for this setting is: `LOCAL SERVICE, NETWORK SERVICE`. **Note:** A Member Server that holds the _Web Server (IIS)_ Role with _Web Server_ Role Service will require a special exception to this recommendation, to allow IIS application pool(s) to be granted this user right. **Note #2:** A Member Server that holds the _Active Directory Federation Services_ Role will require a special exception to this recommendation, to allow the `NT SERVICE\ADFSSrv` and `NT SERVICE\DRS` services, as well as the associated Active Directory Federation Services service account, to be granted this user right.<br />**Key Path**: [Privilege Rights]SeAuditPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Local Service, Network Service, IIS APPPOOL\DefaultAppPool<br /><sub>(Policy)</sub> |Critical |
-|Increase a process working set<br /><sub>(AZ-WIN-00185)</sub> |**Description**: This privilege determines which user accounts can increase or decrease the size of a process's working set. The working set of a process is the set of memory pages currently visible to the process in physical RAM memory. These pages are resident and available for an application to use without triggering a page fault. The minimum and maximum working set sizes affect the virtual memory paging behavior of a process. When configuring a user right in the SCM enter a comma delimited list of accounts. Accounts can be either local or located in Active Directory, they can be groups, users, or computers.<br />**Key Path**: [Privilege Rights]SeIncreaseWorkingSetPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Local Service<br /><sub>(Policy)</sub> |Warning |
-|Increase scheduling priority<br /><sub>(CCE-38326-5)</sub> |**Description**: This policy setting determines whether users can increase the base priority class of a process. (It is not a privileged operation to increase relative priority within a priority class.) This user right is not required by administrative tools that are supplied with the operating system but might be required by software development tools. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeIncreaseBasePriorityPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Warning |
-|Load and unload device drivers<br /><sub>(CCE-36318-4)</sub> |**Description**: This policy setting allows users to dynamically load a new device driver on a system. An attacker could potentially use this capability to install malicious code that appears to be a device driver. This user right is required for users to add local printers or printer drivers in Windows Vista. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeLoadDriverPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, Print Operators<br /><sub>(Policy)</sub> |Warning |
-|Lock pages in memory<br /><sub>(CCE-36495-0)</sub> |**Description**: This policy setting allows a process to keep data in physical memory, which prevents the system from paging the data to virtual memory on disk. If this user right is assigned, significant degradation of system performance can occur. The recommended state for this setting is: `No One`.<br />**Key Path**: [Privilege Rights]SeLockMemoryPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Warning |
-|Manage auditing and security log<br /><sub>(CCE-35906-7)</sub> |**Description**: <p><span>This policy setting determines which users can change the auditing options for files and directories and clear the Security log. For environments running Microsoft Exchange Server, the 'Exchange Servers' group must possess this privilege on Domain Controllers to properly function. Given this, DCs granting the 'Exchange Servers' group this privilege do conform with this benchmark. If the environment does not use Microsoft Exchange Server, then this privilege should be limited to only 'Administrators' on DCs. - **Level 1 - Domain Controller.** The recommended state for this setting is: 'Administrators and (when Exchange is running in the environment) 'Exchange Servers'. - **Level 1 - Member Server.** The recommended state for this setting is: 'Administrators'</span></p><br />**Key Path**: [Privilege Rights]SeSecurityPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical |
-|Modify an object label<br /><sub>(CCE-36054-5)</sub> |**Description**: This privilege determines which user accounts can modify the integrity label of objects, such as files, registry keys, or processes owned by other users. Processes running under a user account can modify the label of an object owned by that user to a lower level without this privilege. The recommended state for this setting is: `No One`.<br />**Key Path**: [Privilege Rights]SeRelabelPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Warning |
-|Modify firmware environment values<br /><sub>(CCE-38113-7)</sub> |**Description**: This policy setting allows users to configure the system-wide environment variables that affect hardware configuration. This information is typically stored in the Last Known Good Configuration. Modification of these values could lead to a hardware failure that would result in a denial of service condition. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeSystemEnvironmentPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Warning |
-|Perform volume maintenance tasks<br /><sub>(CCE-36143-6)</sub> |**Description**: This policy setting allows users to manage the system's volume or disk configuration, which could allow a user to delete a volume and cause data loss as well as a denial-of-service condition. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeManageVolumePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Warning |
-|Profile single process<br /><sub>(CCE-37131-0)</sub> |**Description**: This policy setting determines which users can use tools to monitor the performance of non-system processes. Typically, you do not need to configure this user right to use the Microsoft Management Console (MMC) Performance snap-in. However, you do need this user right if System Monitor is configured to collect data using Windows Management Instrumentation (WMI). Restricting the Profile single process user right prevents intruders from gaining additional information that could be used to mount an attack on the system. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeProfileSingleProcessPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Warning |
-|Profile system performance<br /><sub>(CCE-36052-9)</sub> |**Description**: This policy setting allows users to use tools to view the performance of different system processes, which could be abused to allow attackers to determine a system's active processes and provide insight into the potential attack surface of the computer. The recommended state for this setting is: `Administrators, NT SERVICE\WdiServiceHost`.<br />**Key Path**: [Privilege Rights]SeSystemProfilePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, NT SERVICE\WdiServiceHost<br /><sub>(Policy)</sub> |Warning |
-|Replace a process level token<br /><sub>(CCE-37430-6)</sub> |**Description**: This policy setting allows one process or service to start another service or process with a different security access token, which can be used to modify the security access token of that sub-process and result in the escalation of privileges. The recommended state for this setting is: `LOCAL SERVICE, NETWORK SERVICE`. **Note:** A Member Server that holds the _Web Server (IIS)_ Role with _Web Server_ Role Service will require a special exception to this recommendation, to allow IIS application pool(s) to be granted this user right. **Note #2:** A Member Server with Microsoft SQL Server installed will require a special exception to this recommendation for additional SQL-generated entries to be granted this user right.<br />**Key Path**: [Privilege Rights]SeAssignPrimaryTokenPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | LOCAL SERVICE, NETWORK SERVICE<br /><sub>(Policy)</sub> |Warning |
-|Restore files and directories<br /><sub>(CCE-37613-7)</sub> |**Description**: This policy setting determines which users can bypass file, directory, registry, and other persistent object permissions when restoring backed up files and directories on computers that run Windows Vista in your environment. This user right also determines which users can set valid security principals as object owners; it is similar to the Backup files and directories user right. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeRestorePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Backup Operators<br /><sub>(Policy)</sub> |Warning |
-|Shut down the system<br /><sub>(CCE-38328-1)</sub> |**Description**: This policy setting determines which users who are logged on locally to the computers in your environment can shut down the operating system with the Shut Down command. Misuse of this user right can result in a denial of service condition. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeShutdownPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Warning |
-|Take ownership of files or other objects<br /><sub>(CCE-38325-7)</sub> |**Description**: This policy setting allows users to take ownership of files, folders, registry keys, processes, or threads. This user right bypasses any permissions that are in place to protect objects to give ownership to the specified user. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeTakeOwnershipPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical |
+|Access Credential Manager as a trusted caller<br /><sub>(CCE-37056-9)</sub> |**Description**: This security setting is used by Credential Manager during Backup and Restore. No accounts should have this user right, as it is only assigned to Winlogon. Users' saved credentials might be compromised if this user right is assigned to other entities. The recommended state for this setting is: `No One`.<br />**Key Path**: [Privilege Rights]SeTrustedCredManAccessPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Warning |
+|Access this computer from the network<br /><sub>(CCE-35818-4)</sub> |**Description**: <p><span>This policy setting allows other users on the network to connect to the computer and is required by various network protocols that include Server Message Block (SMB) based protocols, NetBIOS, Common Internet File System (CIFS), and Component Object Model Plus (COM+). - *Level 1 - Domain Controller.* The recommended state for this setting is: 'Administrators, Authenticated Users, ENTERPRISE DOMAIN CONTROLLERS'. - *Level 1 - Member Server.* The recommended state for this setting is: 'Administrators, Authenticated Users'.</span></p><br />**Key Path**: [Privilege Rights]SeNetworkLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Authenticated Users<br /><sub>(Policy)</sub> |Critical |
+|Act as part of the operating system<br /><sub>(CCE-36876-1)</sub> |**Description**: This policy setting allows a process to assume the identity of any user and thus gain access to the resources that the user is authorized to access. The recommended state for this setting is: `No One`.<br />**Key Path**: [Privilege Rights]SeTcbPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Critical |
+|Allow log on locally<br /><sub>(CCE-37659-0)</sub> |**Description**: This policy setting determines which users can interactively log on to computers in your environment. Logons that are initiated by pressing the CTRL+ALT+DEL key sequence on the client computer keyboard require this user right. Users who attempt to log on through Terminal Services or IIS also require this user right. The Guest account is assigned this user right by default. Although this account is disabled by default, Microsoft recommends that you enable this setting through Group Policy. However, this user right should generally be restricted to the Administrators and Users groups. Assign this user right to the Backup Operators group if your organization requires that they have this capability. When configuring a user right in the SCM enter a comma delimited list of accounts. Accounts can be either local or located in Active Directory, they can be groups, users, or computers.<br />**Key Path**: [Privilege Rights]SeInteractiveLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical |
+|Allow log on through Remote Desktop Services<br /><sub>(CCE-37072-6)</sub> |**Description**: <p><span>This policy setting determines which users or groups have the right to log on as a Terminal Services client. Remote desktop users require this user right. If your organization uses Remote Assistance as part of its help desk strategy, create a group and assign it this user right through Group Policy. If the help desk in your organization does not use Remote Assistance, assign this user right only to the Administrators group or use the restricted groups feature to ensure that no user accounts are part of the Remote Desktop Users group. Restrict this user right to the Administrators group, and possibly the Remote Desktop Users group, to prevent unwanted users from gaining access to computers on your network by means of the Remote Assistance feature. - **Level 1 - Domain Controller.** The recommended state for this setting is: 'Administrators'. - **Level 1 - Member Server.** The recommended state for this setting is: 'Administrators, Remote Desktop Users'. **Note:** A Member Server that holds the _Remote Desktop Services_ Role with _Remote Desktop Connection Broker_ Role Service will require a special exception to this recommendation, to allow the 'Authenticated Users' group to be granted this user right. **Note 2:** The above lists are to be treated as allowlists, which implies that the above principals need not be present for assessment of this recommendation to pass.</span></p><br />**Key Path**: [Privilege Rights]SeRemoteInteractiveLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, Remote Desktop Users<br /><sub>(Policy)</sub> |Critical |
+|Back up files and directories<br /><sub>(CCE-35912-5)</sub> |**Description**: This policy setting allows users to circumvent file and directory permissions to back up the system. This user right is enabled only when an application (such as NTBACKUP) attempts to access a file or directory through the NTFS file system backup application programming interface (API). Otherwise, the assigned file and directory permissions apply. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeBackupPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Backup Operators, Server Operators<br /><sub>(Policy)</sub> |Critical |
+|Bypass traverse checking<br /><sub>(AZ-WIN-00184)</sub> |**Description**: This policy setting allows users who do not have the Traverse Folder access permission to pass through folders when they browse an object path in the NTFS file system or the registry. This user right does not allow users to list the contents of a folder. When configuring a user right in the SCM enter a comma delimited list of accounts. Accounts can be either local or located in Active Directory, they can be groups, users, or computers.<br />**Key Path**: [Privilege Rights]SeChangeNotifyPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Authenticated Users, Backup Operators, Local Service, Network Service<br /><sub>(Policy)</sub> |Critical |
+|Change the system time<br /><sub>(CCE-37452-0)</sub> |**Description**: This policy setting determines which users and groups can change the time and date on the internal clock of the computers in your environment. Users who are assigned this user right can affect the appearance of event logs. When a computer's time setting is changed, logged events reflect the new time, not the actual time that the events occurred. When configuring a user right in the SCM enter a comma delimited list of accounts. Accounts can be either local or located in Active Directory, they can be groups, users, or computers. **Note:** Discrepancies between the time on the local computer and on the domain controllers in your environment may cause problems for the Kerberos authentication protocol, which could make it impossible for users to log on to the domain or obtain authorization to access domain resources after they are logged on. Also, problems will occur when Group Policy is applied to client computers if the system time is not synchronized with the domain controllers. The recommended state for this setting is: `Administrators, LOCAL SERVICE`.<br />**Key Path**: [Privilege Rights]SeSystemtimePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Server Operators, LOCAL SERVICE<br /><sub>(Policy)</sub> |Critical |
+|Change the time zone<br /><sub>(CCE-37700-2)</sub> |**Description**: This setting determines which users can change the time zone of the computer. This ability holds no great danger for the computer and may be useful for mobile workers. The recommended state for this setting is: `Administrators, LOCAL SERVICE`.<br />**Key Path**: [Privilege Rights]SeTimeZonePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, LOCAL SERVICE<br /><sub>(Policy)</sub> |Critical |
+|Create a pagefile<br /><sub>(CCE-35821-8)</sub> |**Description**: This policy setting allows users to change the size of the pagefile. By making the pagefile extremely large or extremely small, an attacker could easily affect the performance of a compromised computer. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeCreatePagefilePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical |
+|Create a token object<br /><sub>(CCE-36861-3)</sub> |**Description**: This policy setting allows a process to create an access token, which may provide elevated rights to access sensitive data. The recommended state for this setting is: `No One`.<br />**Key Path**: [Privilege Rights]SeCreateTokenPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Warning |
+|Create global objects<br /><sub>(CCE-37453-8)</sub> |**Description**: This policy setting determines whether users can create global objects that are available to all sessions. Users can still create objects that are specific to their own session if they do not have this user right. Users who can create global objects could affect processes that run under other users' sessions. This capability could lead to a variety of problems, such as application failure or data corruption. The recommended state for this setting is: `Administrators, LOCAL SERVICE, NETWORK SERVICE, SERVICE`. **Note:** A Member Server with Microsoft SQL Server _and_ its optional "Integration Services" component installed will require a special exception to this recommendation for additional SQL-generated entries to be granted this user right.<br />**Key Path**: [Privilege Rights]SeCreateGlobalPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, SERVICE, LOCAL SERVICE, NETWORK SERVICE<br /><sub>(Policy)</sub> |Warning |
+|Create permanent shared objects<br /><sub>(CCE-36532-0)</sub> |**Description**: This user right is useful to kernel-mode components that extend the object namespace. However, components that run in kernel mode have this user right inherently. Therefore, it is typically not necessary to specifically assign this user right. The recommended state for this setting is: `No One`.<br />**Key Path**: [Privilege Rights]SeCreatePermanentPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Warning |
+|Create symbolic links<br /><sub>(CCE-35823-4)</sub> |**Description**: <p><span>This policy setting determines which users can create symbolic links. In Windows Vista, existing NTFS file system objects, such as files and folders, can be accessed by referring to a new kind of file system object called a symbolic link. A symbolic link is a pointer (much like a shortcut or .lnk file) to another file system object, which can be a file, folder, shortcut or another symbolic link. The difference between a shortcut and a symbolic link is that a shortcut only works from within the Windows shell. To other programs and applications, shortcuts are just another file, whereas with symbolic links, the concept of a shortcut is implemented as a feature of the NTFS file system. Symbolic links can potentially expose security vulnerabilities in applications that are not designed to use them. For this reason, the privilege for creating symbolic links should only be assigned to trusted users. By default, only Administrators can create symbolic links. - **Level 1 - Domain Controller.** The recommended state for this setting is: 'Administrators'. - **Level 1 - Member Server.** The recommended state for this setting is: 'Administrators' and (when the _Hyper-V_ Role is installed) 'NT VIRTUAL MACHINE\Virtual Machines'.</span></p><br />**Key Path**: [Privilege Rights]SeCreateSymbolicLinkPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, NT VIRTUAL MACHINE\Virtual Machines<br /><sub>(Policy)</sub> |Critical |
+|Deny access to this computer from the network<br /><sub>(CCE-37954-5)</sub> |**Description**: <p><span>This policy setting prohibits users from connecting to a computer from across the network, which would allow users to access and potentially modify data remotely. In high security environments, there should be no need for remote users to access data on a computer. Instead, file sharing should be accomplished through the use of network servers. - **Level 1 - Domain Controller.** The recommended state for this setting is to include: 'Guests, Local account'. - **Level 1 - Member Server.** The recommended state for this setting is to include: 'Guests, Local account and member of Administrators group'. **Caution:** Configuring a standalone (non-domain-joined) server as described above may result in an inability to remotely administer the server. **Note:** Configuring a member server or standalone server as described above may adversely affect applications that create a local service account and place it in the Administrators group - in which case you must either convert the application to use a domain-hosted service account, or remove Local account and member of Administrators group from this User Right Assignment. Using a domain-hosted service account is strongly preferred over making an exception to this rule, where possible.</span></p><br />**Key Path**: [Privilege Rights]SeDenyNetworkLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Guests<br /><sub>(Policy)</sub> |Critical |
+|Deny log on as a batch job<br /><sub>(CCE-36923-1)</sub> |**Description**: This policy setting determines which accounts will not be able to log on to the computer as a batch job. A batch job is not a batch (.bat) file, but rather a batch-queue facility. Accounts that use the Task Scheduler to schedule jobs need this user right. The **Deny log on as a batch job** user right overrides the **Log on as a batch job** user right, which could be used to allow accounts to schedule jobs that consume excessive system resources. Such an occurrence could cause a DoS condition. Failure to assign this user right to the recommended accounts can be a security risk. The recommended state for this setting is to include: `Guests`.<br />**Key Path**: [Privilege Rights]SeDenyBatchLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Guests<br /><sub>(Policy)</sub> |Critical |
+|Deny log on as a service<br /><sub>(CCE-36877-9)</sub> |**Description**: This security setting determines which service accounts are prevented from registering a process as a service. This policy setting supersedes the **Log on as a service** policy setting if an account is subject to both policies. The recommended state for this setting is to include: `Guests`. **Note:** This security setting does not apply to the System, Local Service, or Network Service accounts.<br />**Key Path**: [Privilege Rights]SeDenyServiceLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Guests<br /><sub>(Policy)</sub> |Critical |
+|Deny log on locally<br /><sub>(CCE-37146-8)</sub> |**Description**: This security setting determines which users are prevented from logging on at the computer. This policy setting supersedes the **Allow log on locally** policy setting if an account is subject to both policies. **Important:** If you apply this security policy to the Everyone group, no one will be able to log on locally. The recommended state for this setting is to include: `Guests`.<br />**Key Path**: [Privilege Rights]SeDenyInteractiveLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Guests<br /><sub>(Policy)</sub> |Critical |
+|Deny log on through Remote Desktop Services<br /><sub>(CCE-36867-0)</sub> |**Description**: This policy setting determines whether users can log on as Terminal Services clients. After the baseline member server is joined to a domain environment, there is no need to use local accounts to access the server from the network. Domain accounts can access the server for administration and end-user processing. The recommended state for this setting is to include: `Guests, Local account`. **Caution:** Configuring a standalone (non-domain-joined) server as described above may result in an inability to remotely administer the server.<br />**Key Path**: [Privilege Rights]SeDenyRemoteInteractiveLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Workgroup Member |\>\= Guests<br /><sub>(Policy)</sub> |Critical |
+|Enable computer and user accounts to be trusted for delegation<br /><sub>(CCE-36860-5)</sub> |**Description**: <p><span>This policy setting allows users to change the Trusted for Delegation setting on a computer object in Active Directory. Abuse of this privilege could allow unauthorized users to impersonate other users on the network. - **Level 1 - Domain Controller.** The recommended state for this setting is: 'Administrators' - **Level 1 - Member Server.** The recommended state for this setting is: 'No One'.</span></p><br />**Key Path**: [Privilege Rights]SeEnableDelegationPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Critical |
+|Force shutdown from a remote system<br /><sub>(CCE-37877-8)</sub> |**Description**: This policy setting allows users to shut down Windows Vista-based computers from remote locations on the network. Anyone who has been assigned this user right can cause a denial of service (DoS) condition, which would make the computer unavailable to service user requests. Therefore, it is recommended that only highly trusted administrators be assigned this user right. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeRemoteShutdownPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical |
+|Generate security audits<br /><sub>(CCE-37639-2)</sub> |**Description**: This policy setting determines which users or processes can generate audit records in the Security log. The recommended state for this setting is: `LOCAL SERVICE, NETWORK SERVICE`. **Note:** A Member Server that holds the _Web Server (IIS)_ Role with _Web Server_ Role Service will require a special exception to this recommendation, to allow IIS application pool(s) to be granted this user right. **Note #2:** A Member Server that holds the _Active Directory Federation Services_ Role will require a special exception to this recommendation, to allow the `NT SERVICE\ADFSSrv` and `NT SERVICE\DRS` services, as well as the associated Active Directory Federation Services service account, to be granted this user right.<br />**Key Path**: [Privilege Rights]SeAuditPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Local Service, Network Service, IIS APPPOOL\DefaultAppPool<br /><sub>(Policy)</sub> |Critical |
+|Increase a process working set<br /><sub>(AZ-WIN-00185)</sub> |**Description**: This privilege determines which user accounts can increase or decrease the size of a process's working set. The working set of a process is the set of memory pages currently visible to the process in physical RAM memory. These pages are resident and available for an application to use without triggering a page fault. The minimum and maximum working set sizes affect the virtual memory paging behavior of a process. When configuring a user right in the SCM enter a comma delimited list of accounts. Accounts can be either local or located in Active Directory, they can be groups, users, or computers.<br />**Key Path**: [Privilege Rights]SeIncreaseWorkingSetPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Local Service<br /><sub>(Policy)</sub> |Warning |
+|Increase scheduling priority<br /><sub>(CCE-38326-5)</sub> |**Description**: This policy setting determines whether users can increase the base priority class of a process. (It is not a privileged operation to increase relative priority within a priority class.) This user right is not required by administrative tools that are supplied with the operating system but might be required by software development tools. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeIncreaseBasePriorityPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Warning |
+|Load and unload device drivers<br /><sub>(CCE-36318-4)</sub> |**Description**: This policy setting allows users to dynamically load a new device driver on a system. An attacker could potentially use this capability to install malicious code that appears to be a device driver. This user right is required for users to add local printers or printer drivers in Windows Vista. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeLoadDriverPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, Print Operators<br /><sub>(Policy)</sub> |Warning |
+|Lock pages in memory<br /><sub>(CCE-36495-0)</sub> |**Description**: This policy setting allows a process to keep data in physical memory, which prevents the system from paging the data to virtual memory on disk. If this user right is assigned, significant degradation of system performance can occur. The recommended state for this setting is: `No One`.<br />**Key Path**: [Privilege Rights]SeLockMemoryPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Warning |
+|Manage auditing and security log<br /><sub>(CCE-35906-7)</sub> |**Description**: <p><span>This policy setting determines which users can change the auditing options for files and directories and clear the Security log. For environments running Microsoft Exchange Server, the 'Exchange Servers' group must possess this privilege on Domain Controllers to properly function. Given this, DCs granting the 'Exchange Servers' group this privilege do conform with this benchmark. If the environment does not use Microsoft Exchange Server, then this privilege should be limited to only 'Administrators' on DCs. - **Level 1 - Domain Controller.** The recommended state for this setting is: 'Administrators' and (when Exchange is running in the environment) 'Exchange Servers'. - **Level 1 - Member Server.** The recommended state for this setting is: 'Administrators'.</span></p><br />**Key Path**: [Privilege Rights]SeSecurityPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical |
+|Modify an object label<br /><sub>(CCE-36054-5)</sub> |**Description**: This privilege determines which user accounts can modify the integrity label of objects, such as files, registry keys, or processes owned by other users. Processes running under a user account can modify the label of an object owned by that user to a lower level without this privilege. The recommended state for this setting is: `No One`.<br />**Key Path**: [Privilege Rights]SeRelabelPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Warning |
+|Modify firmware environment values<br /><sub>(CCE-38113-7)</sub> |**Description**: This policy setting allows users to configure the system-wide environment variables that affect hardware configuration. This information is typically stored in the Last Known Good Configuration. Modification of these values could lead to a hardware failure that would result in a denial of service condition. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeSystemEnvironmentPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Warning |
+|Perform volume maintenance tasks<br /><sub>(CCE-36143-6)</sub> |**Description**: This policy setting allows users to manage the system's volume or disk configuration, which could allow a user to delete a volume and cause data loss as well as a denial-of-service condition. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeManageVolumePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Warning |
+|Profile single process<br /><sub>(CCE-37131-0)</sub> |**Description**: This policy setting determines which users can use tools to monitor the performance of non-system processes. Typically, you do not need to configure this user right to use the Microsoft Management Console (MMC) Performance snap-in. However, you do need this user right if System Monitor is configured to collect data using Windows Management Instrumentation (WMI). Restricting the Profile single process user right prevents intruders from gaining additional information that could be used to mount an attack on the system. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeProfileSingleProcessPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Warning |
+|Profile system performance<br /><sub>(CCE-36052-9)</sub> |**Description**: This policy setting allows users to use tools to view the performance of different system processes, which could be abused to allow attackers to determine a system's active processes and provide insight into the potential attack surface of the computer. The recommended state for this setting is: `Administrators, NT SERVICE\WdiServiceHost`.<br />**Key Path**: [Privilege Rights]SeSystemProfilePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, NT SERVICE\WdiServiceHost<br /><sub>(Policy)</sub> |Warning |
+|Replace a process level token<br /><sub>(CCE-37430-6)</sub> |**Description**: This policy setting allows one process or service to start another service or process with a different security access token, which can be used to modify the security access token of that sub-process and result in the escalation of privileges. The recommended state for this setting is: `LOCAL SERVICE, NETWORK SERVICE`. **Note:** A Member Server that holds the _Web Server (IIS)_ Role with _Web Server_ Role Service will require a special exception to this recommendation, to allow IIS application pool(s) to be granted this user right. **Note #2:** A Member Server with Microsoft SQL Server installed will require a special exception to this recommendation for additional SQL-generated entries to be granted this user right.<br />**Key Path**: [Privilege Rights]SeAssignPrimaryTokenPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | LOCAL SERVICE, NETWORK SERVICE<br /><sub>(Policy)</sub> |Warning |
+|Restore files and directories<br /><sub>(CCE-37613-7)</sub> |**Description**: This policy setting determines which users can bypass file, directory, registry, and other persistent object permissions when restoring backed up files and directories on computers that run Windows Vista in your environment. This user right also determines which users can set valid security principals as object owners; it is similar to the Backup files and directories user right. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeRestorePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Backup Operators<br /><sub>(Policy)</sub> |Warning |
+|Shut down the system<br /><sub>(CCE-38328-1)</sub> |**Description**: This policy setting determines which users who are logged on locally to the computers in your environment can shut down the operating system with the Shut Down command. Misuse of this user right can result in a denial of service condition. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeShutdownPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Warning |
+|Take ownership of files or other objects<br /><sub>(CCE-38325-7)</sub> |**Description**: This policy setting allows users to take ownership of files, folders, registry keys, processes, or threads. This user right bypasses any permissions that are in place to protect objects to give ownership to the specified user. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeTakeOwnershipPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical |
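
The *(Policy)* entries above are user rights assignments rather than registry values, so they can't be verified with a plain registry read. As a rough, illustrative sketch of how one of these rows could be spot-checked outside of guest configuration, the following Python example exports the local security policy with `secedit /export /cfg` and compares a `[Privilege Rights]` entry against the expected principals. It assumes an elevated Windows session where `secedit.exe` is available; the SID-to-name map, the temporary file location, and the allowlist-style comparison are assumptions made for the example, not part of the baseline tooling.

```python
# Illustrative only: spot-check one "User Rights Assignment" row from the table above.
# Assumes an elevated Windows session where secedit.exe is available; the SID map
# below is a partial example and would need to cover every principal you expect.
import subprocess
import tempfile
from pathlib import Path

WELL_KNOWN_SIDS = {
    "*S-1-5-32-544": "Administrators",
    "*S-1-5-11": "Authenticated Users",
    "*S-1-5-9": "ENTERPRISE DOMAIN CONTROLLERS",
}

def export_privilege_rights() -> dict[str, set[str]]:
    """Export the local security policy and return the [Privilege Rights] entries."""
    inf_path = Path(tempfile.gettempdir()) / "secpol-export.inf"
    subprocess.run(["secedit", "/export", "/cfg", str(inf_path)], check=True)
    rights: dict[str, set[str]] = {}
    in_section = False
    # secedit typically writes the INF as UTF-16; only [Privilege Rights] matters here.
    for raw in inf_path.read_text(encoding="utf-16").splitlines():
        line = raw.strip()
        if line.startswith("["):
            in_section = line == "[Privilege Rights]"
            continue
        if in_section and "=" in line:
            name, assignees = line.split("=", 1)
            rights[name.strip()] = {
                WELL_KNOWN_SIDS.get(item.strip(), item.strip())
                for item in assignees.split(",")
                if item.strip()
            }
    return rights

def is_compliant(privilege: str, allowed: set[str]) -> bool:
    """Treat the expected value as an allowlist: no extra principals may hold the right."""
    actual = export_privilege_rights().get(privilege, set())
    return actual <= allowed

if __name__ == "__main__":
    # Key path "[Privilege Rights]SeNetworkLogonRight" with expected value
    # "Administrators, Authenticated Users" from the first row of the table above.
    print(is_compliant("SeNetworkLogonRight", {"Administrators", "Authenticated Users"}))
```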
## Windows Components

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
-|Allow Basic authentication<br /><sub>(CCE-36254-1)</sub> |**Description**: This policy setting allows you to manage whether the Windows Remote Management (WinRM) service accepts Basic authentication from a remote client. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WinRM\Client\AllowBasic<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
-|Allow Cortana<br /><sub>(AZ-WIN-00131)</sub> |**Description**: This policy setting specifies whether Cortana is allowed on the device.   If you enable or don't configure this setting, Cortana will be allowed on the device. If you disable this setting, Cortana will be turned off.   When Cortana is off, users will still be able to use search to find things on the device and on the Internet.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Windows Search\AllowCortana<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
-|Allow Cortana above lock screen<br /><sub>(AZ-WIN-00130)</sub> |**Description**: This policy setting determines whether or not the user can interact with Cortana using speech while the system is locked. If you enable or don't configure this setting, the user can interact with Cortana using speech while the system is locked. If you disable this setting, the system will need to be unlocked for the user to interact with Cortana using speech.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Windows Search\AllowCortanaAboveLock<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
-|Allow indexing of encrypted files<br /><sub>(CCE-38277-0)</sub> |**Description**: This policy setting controls whether encrypted items are allowed to be indexed. When this setting is changed, the index is rebuilt completely. Full volume encryption (such as BitLocker Drive Encryption or a non-Microsoft solution) must be used for the location of the index to maintain security for encrypted files. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Windows Search\AllowIndexingEncryptedStoresOrItems<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
-|Allow Microsoft accounts to be optional<br /><sub>(CCE-38354-7)</sub> |**Description**: This policy setting lets you control whether Microsoft accounts are optional for Windows Store apps that require an account to sign in. This policy only affects Windows Store apps that support it. If you enable this policy setting, Windows Store apps that typically require a Microsoft account to sign in will allow users to sign in with an enterprise account instead. If you disable or do not configure this policy setting, users will need to sign in with a Microsoft account.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\MSAOptional<br />**OS**: WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Allow search and Cortana to use location<br /><sub>(AZ-WIN-00133)</sub> |**Description**: This policy setting specifies whether search and Cortana can provide location aware search and Cortana results.   If this is enabled, search and Cortana can access location information.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Windows Search\AllowSearchToUseLocation<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
-|Allow Telemetry<br /><sub>(AZ-WIN-00169)</sub> |**Description**: This policy setting determines the amount of diagnostic and usage data reported to Microsoft. A value of 0 will send minimal data to Microsoft. This data includes Malicious Software Removal Tool (MSRT) & Windows Defender data, if enabled, and telemetry client settings. Setting a value of 0 applies to enterprise, EDU, IoT and server devices only. Setting a value of 0 for other devices is equivalent to choosing a value of 1. A value of 1 sends only a basic amount of diagnostic and usage data. Note that setting values of 0 or 1 will degrade certain experiences on the device. A value of 2 sends enhanced diagnostic and usage data. A value of 3 sends the same data as a value of 2, plus additional diagnostics data, including the files and content that may have caused the problem. Windows 10 telemetry settings apply to the Windows operating system and some first party apps. This setting does not apply to third party apps running on Windows 10. The recommended state for this setting is: `Enabled: 0 - Security [Enterprise Only]`. **Note:** If the "Allow Telemetry" setting is configured to "0 - Security [Enterprise Only]", then the options in Windows Update to defer upgrades and updates will have no effect.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\DataCollection\AllowTelemetry<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 0<br /><sub>(Registry)</sub> |Warning |
-|Allow unencrypted traffic<br /><sub>(CCE-38223-4)</sub> |**Description**: This policy setting allows you to manage whether the Windows Remote Management (WinRM) service sends and receives unencrypted messages over the network. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WinRM\Client\AllowUnencryptedTraffic<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
-|Allow user control over installs<br /><sub>(CCE-36400-0)</sub> |**Description**: Permits users to change installation options that typically are available only to system administrators. The security features of Windows Installer prevent users from changing installation options typically reserved for system administrators, such as specifying the directory to which files are installed. If Windows Installer detects that an installation package has permitted the user to change a protected option, it stops the installation and displays a message. These security features operate only when the installation program is running in a privileged security context in which it has access to directories denied to the user. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Installer\EnableUserControl<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
-|Always install with elevated privileges<br /><sub>(CCE-37490-0)</sub> |**Description**: This setting controls whether or not Windows Installer should use system permissions when it installs any program on the system. **Note:** This setting appears both in the Computer Configuration and User Configuration folders. To make this setting effective, you must enable the setting in both folders. **Caution:** If enabled, skilled users can take advantage of the permissions this setting grants to change their privileges and gain permanent access to restricted files and folders. Note that the User Configuration version of this setting is not guaranteed to be secure. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Installer\AlwaysInstallElevated<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
-|Always prompt for password upon connection<br /><sub>(CCE-37929-7)</sub> |**Description**: This policy setting specifies whether Terminal Services always prompts the client computer for a password upon connection. You can use this policy setting to enforce a password prompt for users who log on to Terminal Services, even if they already provided the password in the Remote Desktop Connection client. By default, Terminal Services allows users to automatically log on if they enter a password in the Remote Desktop Connection client. Note If you do not configure this policy setting, the local computer administrator can use the Terminal Services Configuration tool to either allow or prevent passwords from being automatically sent.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fPromptForPassword<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Application: Control Event Log behavior when the log file reaches its maximum size<br /><sub>(CCE-37775-4)</sub> |**Description**: This policy setting controls Event Log behavior when the log file reaches its maximum size. If you enable this policy setting and a log file reaches its maximum size, new events are not written to the log and are lost. If you disable or do not configure this policy setting and a log file reaches its maximum size, new events overwrite old events. Note: Old events may or may not be retained according to the "Backup log automatically when full"  policy setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Application\Retention<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
-|Application: Specify the maximum log file size (KB)<br /><sub>(CCE-37948-7)</sub> |**Description**: This policy setting specifies the maximum size of the log file in kilobytes. If you enable this policy setting, you can configure the maximum log file size to be between 1 megabyte (1024 kilobytes) and 2 terabytes (2147483647 kilobytes) in kilobyte increments. If you disable or do not configure this policy setting, the maximum size of the log file will be set to the locally configured value. This value can be changed by the local administrator using the Log Properties dialog and it defaults to 20 megabytes.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Application\MaxSize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 32768<br /><sub>(Registry)</sub> |Critical |
-|Configure local setting override for reporting to Microsoft MAPS<br /><sub>(AZ-WIN-00173)</sub> |**Description**: This policy setting configures a local override for the configuration to join Microsoft MAPS. This setting can only be set by Group Policy. If you enable this setting the local preference setting will take priority over Group Policy. If you disable or do not configure this setting Group Policy will take priority over the local preference setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\SpyNet\LocalSettingOverrideSpynetReporting<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
-|Configure Windows SmartScreen<br /><sub>(CCE-35859-8)</sub> |**Description**: This policy setting allows you to manage the behavior of Windows SmartScreen. Windows SmartScreen helps keep PCs safer by warning users before running unrecognized programs downloaded from the Internet. Some information is sent to Microsoft about files and programs run on PCs with this feature enabled. If you enable this policy setting, Windows SmartScreen behavior may be controlled by setting one of the following options: • Give user a warning before running downloaded unknown software • Turn off SmartScreen. If you disable or do not configure this policy setting, Windows SmartScreen behavior is managed by administrators on the PC by using Windows SmartScreen Settings in Security and Maintenance. Options: • Give user a warning before running downloaded unknown software • Turn off SmartScreen<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\EnableSmartScreen<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-2<br /><sub>(Registry)</sub> |Warning |
-|Detect change from default RDP port<br /><sub>(AZ-WIN-00156)</sub> |**Description**: This setting determines whether the network port that listens for Remote Desktop Connections has been changed from the default 3389<br />**Key Path**: System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\PortNumber<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 3389<br /><sub>(Registry)</sub> |Critical |
-|Disable Windows Search Service<br /><sub>(AZ-WIN-00176)</sub> |**Description**: This registry setting disables the Windows Search Service<br />**Key Path**: System\CurrentControlSet\Services\Wsearch\Start<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 4<br /><sub>(Registry)</sub> |Critical |
-|Disallow Autoplay for non-volume devices<br /><sub>(CCE-37636-8)</sub> |**Description**: This policy setting disallows AutoPlay for MTP devices like cameras or phones. If you enable this policy setting, AutoPlay is not allowed for MTP devices like cameras or phones. If you disable or do not configure this policy setting, AutoPlay is enabled for non-volume devices.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Explorer\NoAutoplayfornonVolume<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Disallow Digest authentication<br /><sub>(CCE-38318-2)</sub> |**Description**: This policy setting allows you to manage whether the Windows Remote Management (WinRM) client will not use Digest authentication. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WinRM\Client\AllowDigest<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
-|Disallow WinRM from storing RunAs credentials<br /><sub>(CCE-36000-8)</sub> |**Description**: This policy setting allows you to manage whether the Windows Remote Management (WinRM) service will not allow RunAs credentials to be stored for any plug-ins. If you enable this policy setting, the WinRM service will not allow the RunAsUser or RunAsPassword configuration values to be set for any plug-ins. If a plug-in has already set the RunAsUser and RunAsPassword configuration values, the RunAsPassword configuration value will be erased from the credential store on this computer. If you disable or do not configure this policy setting, the WinRM service will allow the RunAsUser and RunAsPassword configuration values to be set for plug-ins and the RunAsPassword value will be stored securely. If you enable and then disable this policy setting, any values that were previously configured for RunAsPassword will need to be reset.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WinRM\Service\DisableRunAs<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Do not allow passwords to be saved<br /><sub>(CCE-36223-6)</sub> |**Description**: This policy setting helps prevent Terminal Services clients from saving passwords on a computer. Note If this policy setting was previously configured as Disabled or Not configured, any previously saved passwords will be deleted the first time a Terminal Services client disconnects from any server.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\DisablePasswordSaving<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Do not delete temp folders upon exit<br /><sub>(CCE-37946-1)</sub> |**Description**: This policy setting specifies whether Remote Desktop Services retains a user's per-session temporary folders at logoff. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\DeleteTempDirsOnExit<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
-|Do not display the password reveal button<br /><sub>(CCE-37534-5)</sub> |**Description**: This policy setting allows you to configure the display of the password reveal button in password entry user experiences. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\CredUI\DisablePasswordReveal<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Do not show feedback notifications<br /><sub>(AZ-WIN-00140)</sub> |**Description**: This policy setting allows an organization to prevent its devices from showing feedback questions from Microsoft. If you enable this policy setting, users will no longer see feedback notifications through the Windows Feedback app. If you disable or do not configure this policy setting, users may see notifications through the Windows Feedback app asking users for feedback. Note: If you disable or do not configure this policy setting, users can control how often they receive feedback questions.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\DataCollection\DoNotShowFeedbackNotifications<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Do not use temporary folders per session<br /><sub>(CCE-38180-6)</sub> |**Description**: By default, Remote Desktop Services creates a separate temporary folder on the RD Session Host server for each active session that a user maintains on the RD Session Host server. The temporary folder is created on the RD Session Host server in a Temp folder under the user's profile folder and is named with the "sessionid." This temporary folder is used to store individual temporary files. To reclaim disk space, the temporary folder is deleted when the user logs off from a session. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\PerSessionTempDir<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|Enumerate administrator accounts on elevation<br /><sub>(CCE-36512-2)</sub> |**Description**: This policy setting controls whether administrator accounts are displayed when a user attempts to elevate a running application. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\CredUI\EnumerateAdministrators<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
-|Prevent downloading of enclosures<br /><sub>(CCE-37126-0)</sub> |**Description**: This policy setting prevents the user from having enclosures (file attachments) downloaded from a feed to the user's computer. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Internet Explorer\Feeds\DisableEnclosureDownload<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Require secure RPC communication<br /><sub>(CCE-37567-5)</sub> |**Description**: Specifies whether a Remote Desktop Session Host server requires secure RPC communication with all clients or allows unsecured communication. You can use this setting to strengthen the security of RPC communication with clients by allowing only authenticated and encrypted requests. If the status is set to Enabled, Remote Desktop Services accepts requests from RPC clients that support secure requests, and does not allow unsecured communication with untrusted clients. If the status is set to Disabled, Remote Desktop Services always requests security for all RPC traffic. However, unsecured communication is allowed for RPC clients that do not respond to the request. If the status is set to Not Configured, unsecured communication is allowed. Note: The RPC interface is used for administering and configuring Remote Desktop Services.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fEncryptRPCTraffic<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Require user authentication for remote connections by using Network Level Authentication<br /><sub>(AZ-WIN-00149)</sub> |**Description**: Require user authentication for remote connections by using Network Level Authentication<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\UserAuthentication<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|Scan removable drives<br /><sub>(AZ-WIN-00177)</sub> |**Description**: This policy setting allows you to manage whether or not to scan for malicious software and unwanted software in the contents of removable drives such as USB flash drives when running a full scan. If you enable this setting removable drives will be scanned during any type of scan. If you disable or do not configure this setting removable drives will not be scanned during a full scan. Removable drives may still be scanned during quick scan and custom scan.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\Scan\DisableRemovableDriveScanning<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
-|Security: Control Event Log behavior when the log file reaches its maximum size<br /><sub>(CCE-37145-0)</sub> |**Description**: This policy setting controls Event Log behavior when the log file reaches its maximum size. If you enable this policy setting and a log file reaches its maximum size, new events are not written to the log and are lost. If you disable or do not configure this policy setting and a log file reaches its maximum size, new events overwrite old events. Note: Old events may or may not be retained according to the "Backup log automatically when full"  policy setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Security\Retention<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
-|Security: Specify the maximum log file size (KB)<br /><sub>(CCE-37695-4)</sub> |**Description**: This policy setting specifies the maximum size of the log file in kilobytes. If you enable this policy setting, you can configure the maximum log file size to be between 1 megabyte (1024 kilobytes) and 2 terabytes (2,147,483,647 kilobytes) in kilobyte increments. If you disable or do not configure this policy setting, the maximum size of the log file will be set to the locally configured value. This value can be changed by the local administrator using the Log Properties dialog and it defaults to 20 megabytes.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Security\MaxSize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 196608<br /><sub>(Registry)</sub> |Critical |
-|Send file samples when further analysis is required<br /><sub>(AZ-WIN-00126)</sub> |**Description**: This policy setting configures behavior of samples submission when opt-in for MAPS telemetry is set. Possible options are: (0x0) Always prompt (0x1) Send safe samples automatically (0x2) Never send (0x3) Send all samples automatically<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\SpyNet\SubmitSamplesConsent<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Set client connection encryption level<br /><sub>(CCE-36627-8)</sub> |**Description**: This policy setting specifies whether the computer that is about to host the remote connection will enforce an encryption level for all data sent between it and the client computer for the remote session.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\MinEncryptionLevel<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 3<br /><sub>(Registry)</sub> |Critical |
-|Set the default behavior for AutoRun<br /><sub>(CCE-38217-6)</sub> |**Description**: This policy setting sets the default behavior for Autorun commands. Autorun commands are generally stored in autorun.inf files. They often launch the installation program or other routines. Prior to Windows Vista, when media containing an autorun command is inserted, the system will automatically execute the program without user intervention. This creates a major security concern as code may be executed without user's knowledge. The default behavior starting with Windows Vista is to prompt the user whether autorun command is to be run. The autorun command is represented as a handler in the Autoplay dialog. If you enable this policy setting, an Administrator can change the default Windows Vista or later behavior for autorun to: a) Completely disable autorun commands, or b) Revert back to pre-Windows Vista behavior of automatically executing the autorun command. If you disable or not configure this policy setting, Windows Vista or later will prompt the user whether autorun command is to be run.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer\NoAutorun<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Setup: Control Event Log behavior when the log file reaches its maximum size<br /><sub>(CCE-38276-2)</sub> |**Description**: This policy setting controls Event Log behavior when the log file reaches its maximum size. If you enable this policy setting and a log file reaches its maximum size, new events are not written to the log and are lost. If you disable or do not configure this policy setting and a log file reaches its maximum size, new events overwrite old events. Note: Old events may or may not be retained according to the "Backup log automatically when full"  policy setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Setup\Retention<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
-|Setup: Specify the maximum log file size (KB)<br /><sub>(CCE-37526-1)</sub> |**Description**: This policy setting specifies the maximum size of the log file in kilobytes. If you enable this policy setting, you can configure the maximum log file size to be between 1 megabyte (1024 kilobytes) and 2 terabytes (2,147,483,647 kilobytes) in kilobyte increments. If you disable or do not configure this policy setting, the maximum size of the log file will be set to the locally configured value. This value can be changed by the local administrator using the Log Properties dialog and it defaults to 20 megabytes.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Setup\MaxSize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 32768<br /><sub>(Registry)</sub> |Critical |
-|Sign-in last interactive user automatically after a system-initiated restart<br /><sub>(CCE-36977-7)</sub> |**Description**: This policy setting controls whether a device will automatically sign-in the last interactive user after Windows Update restarts the system. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\DisableAutomaticRestartSignOn<br />**OS**: WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Allow Basic authentication<br /><sub>(CCE-36254-1)</sub> |**Description**: This policy setting allows you to manage whether the Windows Remote Management (WinRM) service accepts Basic authentication from a remote client. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WinRM\Client\AllowBasic<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Allow Cortana<br /><sub>(AZ-WIN-00131)</sub> |**Description**: This policy setting specifies whether Cortana is allowed on the device.   If you enable or don't configure this setting, Cortana will be allowed on the device. If you disable this setting, Cortana will be turned off.   When Cortana is off, users will still be able to use search to find things on the device and on the Internet.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Windows Search\AllowCortana<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Allow Cortana above lock screen<br /><sub>(AZ-WIN-00130)</sub> |**Description**: This policy setting determines whether or not the user can interact with Cortana using speech while the system is locked. If you enable or don't configure this setting, the user can interact with Cortana using speech while the system is locked. If you disable this setting, the system will need to be unlocked for the user to interact with Cortana using speech.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Windows Search\AllowCortanaAboveLock<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Allow indexing of encrypted files<br /><sub>(CCE-38277-0)</sub> |**Description**: This policy setting controls whether encrypted items are allowed to be indexed. When this setting is changed, the index is rebuilt completely. Full volume encryption (such as BitLocker Drive Encryption or a non-Microsoft solution) must be used for the location of the index to maintain security for encrypted files. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Windows Search\AllowIndexingEncryptedStoresOrItems<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Allow Microsoft accounts to be optional<br /><sub>(CCE-38354-7)</sub> |**Description**: This policy setting lets you control whether Microsoft accounts are optional for Windows Store apps that require an account to sign in. This policy only affects Windows Store apps that support it. If you enable this policy setting, Windows Store apps that typically require a Microsoft account to sign in will allow users to sign in with an enterprise account instead. If you disable or do not configure this policy setting, users will need to sign in with a Microsoft account.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\MSAOptional<br />**OS**: WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Allow search and Cortana to use location<br /><sub>(AZ-WIN-00133)</sub> |**Description**: This policy setting specifies whether search and Cortana can provide location aware search and Cortana results.   If this is enabled, search and Cortana can access location information.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Windows Search\AllowSearchToUseLocation<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Allow Telemetry<br /><sub>(AZ-WIN-00169)</sub> |**Description**: This policy setting determines the amount of diagnostic and usage data reported to Microsoft. A value of 0 will send minimal data to Microsoft. This data includes Malicious Software Removal Tool (MSRT) & Windows Defender data, if enabled, and telemetry client settings. Setting a value of 0 applies to enterprise, EDU, IoT and server devices only. Setting a value of 0 for other devices is equivalent to choosing a value of 1. A value of 1 sends only a basic amount of diagnostic and usage data. Note that setting values of 0 or 1 will degrade certain experiences on the device. A value of 2 sends enhanced diagnostic and usage data. A value of 3 sends the same data as a value of 2, plus additional diagnostics data, including the files and content that may have caused the problem. Windows 10 telemetry settings apply to the Windows operating system and some first party apps. This setting does not apply to third party apps running on Windows 10. The recommended state for this setting is: `Enabled: 0 - Security [Enterprise Only]`. **Note:** If the "Allow Telemetry" setting is configured to "0 - Security [Enterprise Only]", then the options in Windows Update to defer upgrades and updates will have no effect.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\DataCollection\AllowTelemetry<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 0<br /><sub>(Registry)</sub> |Warning |
+|Allow unencrypted traffic<br /><sub>(CCE-38223-4)</sub> |**Description**: This policy setting allows you to manage whether the Windows Remote Management (WinRM) service sends and receives unencrypted messages over the network. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WinRM\Client\AllowUnencryptedTraffic<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Allow user control over installs<br /><sub>(CCE-36400-0)</sub> |**Description**: Permits users to change installation options that typically are available only to system administrators. The security features of Windows Installer prevent users from changing installation options typically reserved for system administrators, such as specifying the directory to which files are installed. If Windows Installer detects that an installation package has permitted the user to change a protected option, it stops the installation and displays a message. These security features operate only when the installation program is running in a privileged security context in which it has access to directories denied to the user. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Installer\EnableUserControl<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Always install with elevated privileges<br /><sub>(CCE-37490-0)</sub> |**Description**: This setting controls whether or not Windows Installer should use system permissions when it installs any program on the system. **Note:** This setting appears both in the Computer Configuration and User Configuration folders. To make this setting effective, you must enable the setting in both folders. **Caution:** If enabled, skilled users can take advantage of the permissions this setting grants to change their privileges and gain permanent access to restricted files and folders. Note that the User Configuration version of this setting is not guaranteed to be secure. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Installer\AlwaysInstallElevated<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Always prompt for password upon connection<br /><sub>(CCE-37929-7)</sub> |**Description**: This policy setting specifies whether Terminal Services always prompts the client computer for a password upon connection. You can use this policy setting to enforce a password prompt for users who log on to Terminal Services, even if they already provided the password in the Remote Desktop Connection client. By default, Terminal Services allows users to automatically log on if they enter a password in the Remote Desktop Connection client. Note: If you do not configure this policy setting, the local computer administrator can use the Terminal Services Configuration tool to either allow or prevent passwords from being automatically sent.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fPromptForPassword<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Application: Control Event Log behavior when the log file reaches its maximum size<br /><sub>(CCE-37775-4)</sub> |**Description**: This policy setting controls Event Log behavior when the log file reaches its maximum size. If you enable this policy setting and a log file reaches its maximum size, new events are not written to the log and are lost. If you disable or do not configure this policy setting and a log file reaches its maximum size, new events overwrite old events. Note: Old events may or may not be retained according to the "Backup log automatically when full"  policy setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Application\Retention<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Application: Specify the maximum log file size (KB)<br /><sub>(CCE-37948-7)</sub> |**Description**: This policy setting specifies the maximum size of the log file in kilobytes. If you enable this policy setting, you can configure the maximum log file size to be between 1 megabyte (1024 kilobytes) and 2 terabytes (2147483647 kilobytes) in kilobyte increments. If you disable or do not configure this policy setting, the maximum size of the log file will be set to the locally configured value. This value can be changed by the local administrator using the Log Properties dialog and it defaults to 20 megabytes.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Application\MaxSize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 32768<br /><sub>(Registry)</sub> |Critical |
+|Configure local setting override for reporting to Microsoft MAPS<br /><sub>(AZ-WIN-00173)</sub> |**Description**: This policy setting configures a local override for the configuration to join Microsoft MAPS. This setting can only be set by Group Policy. If you enable this setting the local preference setting will take priority over Group Policy. If you disable or do not configure this setting Group Policy will take priority over the local preference setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\SpyNet\LocalSettingOverrideSpynetReporting<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Configure Windows SmartScreen<br /><sub>(CCE-35859-8)</sub> |**Description**: This policy setting allows you to manage the behavior of Windows SmartScreen. Windows SmartScreen helps keep PCs safer by warning users before running unrecognized programs downloaded from the Internet. Some information is sent to Microsoft about files and programs run on PCs with this feature enabled. If you enable this policy setting, Windows SmartScreen behavior may be controlled by setting one of the following options: • Give user a warning before running downloaded unknown software • Turn off SmartScreen. If you disable or do not configure this policy setting, Windows SmartScreen behavior is managed by administrators on the PC by using Windows SmartScreen Settings in Security and Maintenance. Options: • Give user a warning before running downloaded unknown software • Turn off SmartScreen<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\EnableSmartScreen<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-2<br /><sub>(Registry)</sub> |Warning |
+|Detect change from default RDP port<br /><sub>(AZ-WIN-00156)</sub> |**Description**: This setting determines whether the network port that listens for Remote Desktop Connections has been changed from the default of 3389.<br />**Key Path**: System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\PortNumber<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 3389<br /><sub>(Registry)</sub> |Critical |
+|Disable Windows Search Service<br /><sub>(AZ-WIN-00176)</sub> |**Description**: This registry setting disables the Windows Search Service.<br />**Key Path**: System\CurrentControlSet\Services\Wsearch\Start<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 4<br /><sub>(Registry)</sub> |Critical |
+|Disallow Autoplay for non-volume devices<br /><sub>(CCE-37636-8)</sub> |**Description**: This policy setting disallows AutoPlay for MTP devices like cameras or phones. If you enable this policy setting, AutoPlay is not allowed for MTP devices like cameras or phones. If you disable or do not configure this policy setting, AutoPlay is enabled for non-volume devices.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Explorer\NoAutoplayfornonVolume<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Disallow Digest authentication<br /><sub>(CCE-38318-2)</sub> |**Description**: This policy setting allows you to manage whether the Windows Remote Management (WinRM) client uses Digest authentication. The recommended state for this setting is: `Enabled` (Digest authentication is disallowed).<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WinRM\Client\AllowDigest<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Disallow WinRM from storing RunAs credentials<br /><sub>(CCE-36000-8)</sub> |**Description**: This policy setting allows you to manage whether the Windows Remote Management (WinRM) service will not allow RunAs credentials to be stored for any plug-ins. If you enable this policy setting, the WinRM service will not allow the RunAsUser or RunAsPassword configuration values to be set for any plug-ins. If a plug-in has already set the RunAsUser and RunAsPassword configuration values, the RunAsPassword configuration value will be erased from the credential store on this computer. If you disable or do not configure this policy setting, the WinRM service will allow the RunAsUser and RunAsPassword configuration values to be set for plug-ins and the RunAsPassword value will be stored securely. If you enable and then disable this policy setting, any values that were previously configured for RunAsPassword will need to be reset.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WinRM\Service\DisableRunAs<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Do not allow passwords to be saved<br /><sub>(CCE-36223-6)</sub> |**Description**: This policy setting helps prevent Terminal Services clients from saving passwords on a computer. Note: If this policy setting was previously configured as Disabled or Not configured, any previously saved passwords will be deleted the first time a Terminal Services client disconnects from any server.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\DisablePasswordSaving<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Do not delete temp folders upon exit<br /><sub>(CCE-37946-1)</sub> |**Description**: This policy setting specifies whether Remote Desktop Services retains a user's per-session temporary folders at logoff. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\DeleteTempDirsOnExit<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
+|Do not display the password reveal button<br /><sub>(CCE-37534-5)</sub> |**Description**: This policy setting allows you to configure the display of the password reveal button in password entry user experiences. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\CredUI\DisablePasswordReveal<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Do not show feedback notifications<br /><sub>(AZ-WIN-00140)</sub> |**Description**: This policy setting allows an organization to prevent its devices from showing feedback questions from Microsoft. If you enable this policy setting, users will no longer see feedback notifications through the Windows Feedback app. If you disable or do not configure this policy setting, users may see notifications through the Windows Feedback app asking users for feedback. Note: If you disable or do not configure this policy setting, users can control how often they receive feedback questions.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\DataCollection\DoNotShowFeedbackNotifications<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Do not use temporary folders per session<br /><sub>(CCE-38180-6)</sub> |**Description**: By default, Remote Desktop Services creates a separate temporary folder on the RD Session Host server for each active session that a user maintains on the RD Session Host server. The temporary folder is created on the RD Session Host server in a Temp folder under the user's profile folder and is named with the "sessionid." This temporary folder is used to store individual temporary files. To reclaim disk space, the temporary folder is deleted when the user logs off from a session. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\PerSessionTempDir<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Enumerate administrator accounts on elevation<br /><sub>(CCE-36512-2)</sub> |**Description**: This policy setting controls whether administrator accounts are displayed when a user attempts to elevate a running application. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\CredUI\EnumerateAdministrators<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Prevent downloading of enclosures<br /><sub>(CCE-37126-0)</sub> |**Description**: This policy setting prevents the user from having enclosures (file attachments) downloaded from a feed to the user's computer. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Internet Explorer\Feeds\DisableEnclosureDownload<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Require secure RPC communication<br /><sub>(CCE-37567-5)</sub> |**Description**: Specifies whether a Remote Desktop Session Host server requires secure RPC communication with all clients or allows unsecured communication. You can use this setting to strengthen the security of RPC communication with clients by allowing only authenticated and encrypted requests. If the status is set to Enabled, Remote Desktop Services accepts requests from RPC clients that support secure requests, and does not allow unsecured communication with untrusted clients. If the status is set to Disabled, Remote Desktop Services always requests security for all RPC traffic. However, unsecured communication is allowed for RPC clients that do not respond to the request. If the status is set to Not Configured, unsecured communication is allowed. Note: The RPC interface is used for administering and configuring Remote Desktop Services.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fEncryptRPCTraffic<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Require user authentication for remote connections by using Network Level Authentication<br /><sub>(AZ-WIN-00149)</sub> |**Description**: This policy setting requires user authentication for remote connections by using Network Level Authentication.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\UserAuthentication<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Scan removable drives<br /><sub>(AZ-WIN-00177)</sub> |**Description**: This policy setting allows you to manage whether or not to scan for malicious software and unwanted software in the contents of removable drives such as USB flash drives when running a full scan. If you enable this setting removable drives will be scanned during any type of scan. If you disable or do not configure this setting removable drives will not be scanned during a full scan. Removable drives may still be scanned during quick scan and custom scan.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\Scan\DisableRemovableDriveScanning<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Security: Control Event Log behavior when the log file reaches its maximum size<br /><sub>(CCE-37145-0)</sub> |**Description**: This policy setting controls Event Log behavior when the log file reaches its maximum size. If you enable this policy setting and a log file reaches its maximum size, new events are not written to the log and are lost. If you disable or do not configure this policy setting and a log file reaches its maximum size, new events overwrite old events. Note: Old events may or may not be retained according to the "Backup log automatically when full"  policy setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Security\Retention<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Security: Specify the maximum log file size (KB)<br /><sub>(CCE-37695-4)</sub> |**Description**: This policy setting specifies the maximum size of the log file in kilobytes. If you enable this policy setting, you can configure the maximum log file size to be between 1 megabyte (1024 kilobytes) and 2 terabytes (2,147,483,647 kilobytes) in kilobyte increments. If you disable or do not configure this policy setting, the maximum size of the log file will be set to the locally configured value. This value can be changed by the local administrator using the Log Properties dialog and it defaults to 20 megabytes.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Security\MaxSize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 196608<br /><sub>(Registry)</sub> |Critical |
+|Send file samples when further analysis is required<br /><sub>(AZ-WIN-00126)</sub> |**Description**: This policy setting configures the behavior of sample submission when opt-in for MAPS telemetry is set. Possible options are: (0x0) Always prompt, (0x1) Send safe samples automatically, (0x2) Never send, (0x3) Send all samples automatically.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\SpyNet\SubmitSamplesConsent<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Set client connection encryption level<br /><sub>(CCE-36627-8)</sub> |**Description**: This policy setting specifies whether the computer that is about to host the remote connection will enforce an encryption level for all data sent between it and the client computer for the remote session.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\MinEncryptionLevel<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 3<br /><sub>(Registry)</sub> |Critical |
+|Set the default behavior for AutoRun<br /><sub>(CCE-38217-6)</sub> |**Description**: This policy setting sets the default behavior for Autorun commands. Autorun commands are generally stored in autorun.inf files. They often launch the installation program or other routines. Prior to Windows Vista, when media containing an autorun command was inserted, the system automatically executed the program without user intervention. This creates a major security concern, as code may be executed without the user's knowledge. The default behavior starting with Windows Vista is to prompt the user whether the autorun command is to be run. The autorun command is represented as a handler in the Autoplay dialog. If you enable this policy setting, an Administrator can change the default Windows Vista or later behavior for autorun to: a) Completely disable autorun commands, or b) Revert to the pre-Windows Vista behavior of automatically executing the autorun command. If you disable or do not configure this policy setting, Windows Vista or later will prompt the user whether the autorun command is to be run.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer\NoAutorun<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Setup: Control Event Log behavior when the log file reaches its maximum size<br /><sub>(CCE-38276-2)</sub> |**Description**: This policy setting controls Event Log behavior when the log file reaches its maximum size. If you enable this policy setting and a log file reaches its maximum size, new events are not written to the log and are lost. If you disable or do not configure this policy setting and a log file reaches its maximum size, new events overwrite old events. Note: Old events may or may not be retained according to the "Backup log automatically when full"  policy setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Setup\Retention<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Setup: Specify the maximum log file size (KB)<br /><sub>(CCE-37526-1)</sub> |**Description**: This policy setting specifies the maximum size of the log file in kilobytes. If you enable this policy setting, you can configure the maximum log file size to be between 1 megabyte (1024 kilobytes) and 2 terabytes (2,147,483,647 kilobytes) in kilobyte increments. If you disable or do not configure this policy setting, the maximum size of the log file will be set to the locally configured value. This value can be changed by the local administrator using the Log Properties dialog and it defaults to 20 megabytes.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Setup\MaxSize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 32768<br /><sub>(Registry)</sub> |Critical |
+|Sign-in last interactive user automatically after a system-initiated restart<br /><sub>(CCE-36977-7)</sub> |**Description**: This policy setting controls whether a device will automatically sign-in the last interactive user after Windows Update restarts the system. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\DisableAutomaticRestartSignOn<br />**OS**: WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
|Specify the interval to check for definition updates<br /><sub>(AZ-WIN-00152)</sub> |**Description**: This policy setting allows you to specify an interval at which to check for definition updates. The time value is represented as the number of hours between update checks. Valid values range from 1 (every hour) to 24 (once per day). If you enable this setting, checking for definition updates will occur at the interval specified. If you disable or do not configure this setting, checking for definition updates will occur at the default interval.<br />**Key Path**: SOFTWARE\Microsoft\Microsoft Antimalware\Signature Updates\SignatureUpdateInterval<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 8<br /><sub>(Registry)</sub> |Critical |
-|System: Control Event Log behavior when the log file reaches its maximum size<br /><sub>(CCE-36160-0)</sub> |**Description**: This policy setting controls Event Log behavior when the log file reaches its maximum size. If you enable this policy setting and a log file reaches its maximum size, new events are not written to the log and are lost. If you disable or do not configure this policy setting and a log file reaches its maximum size, new events overwrite old events. Note: Old events may or may not be retained according to the "Backup log automatically when full"  policy setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\System\Retention<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
-|System: Specify the maximum log file size (KB)<br /><sub>(CCE-36092-5)</sub> |**Description**: This policy setting specifies the maximum size of the log file in kilobytes. If you enable this policy setting, you can configure the maximum log file size to be between 1 megabyte (1024 kilobytes) and 2 terabytes (2,147,483,647 kilobytes) in kilobyte increments. If you disable or do not configure this policy setting, the maximum size of the log file will be set to the locally configured value. This value can be changed by the local administrator using the Log Properties dialog and it defaults to 20 megabytes.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\System\MaxSize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 32768<br /><sub>(Registry)</sub> |Critical |
-|Turn off Autoplay<br /><sub>(CCE-36875-3)</sub> |**Description**: Autoplay starts to read from a drive as soon as you insert media in the drive, which causes the setup file for programs or audio media to start immediately. An attacker could use this feature to launch a program to damage the computer or data on the computer. You can enable the Turn off Autoplay setting to disable the Autoplay feature. Autoplay is disabled by default on some removable drive types, such as floppy disk and network drives, but not on CD-ROM drives. Note You cannot use this policy setting to enable Autoplay on computer drives in which it is disabled by default, such as floppy disk and network drives.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer\NoDriveTypeAutoRun<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 255<br /><sub>(Registry)</sub> |Critical |
-|Turn off Data Execution Prevention for Explorer<br /><sub>(CCE-37809-1)</sub> |**Description**: Disabling data execution prevention can allow certain legacy plug-in applications to function without terminating Explorer. The recommended state for this setting is: `Disabled`. **Note:** Some legacy plug-in applications and other software may not function with Data Execution Prevention and will require an exception to be defined for that specific plug-in/software.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Explorer\NoDataExecutionPrevention<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
-|Turn off heap termination on corruption<br /><sub>(CCE-36660-9)</sub> |**Description**: Without heap termination on corruption, legacy plug-in applications may continue to function when a File Explorer session has become corrupt. Ensuring that heap termination on corruption is active will prevent this. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Explorer\NoHeapTerminationOnCorruption<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
-|Turn off Microsoft consumer experiences<br /><sub>(AZ-WIN-00144)</sub> |**Description**: This policy setting turns off experiences that help consumers make the most of their devices and Microsoft account. If you enable this policy setting, users will no longer see personalized recommendations from Microsoft and notifications about their Microsoft account. If you disable or do not configure this policy setting, users may see suggestions from Microsoft and notifications about their Microsoft account. Note: This setting only applies to Enterprise and Education SKUs.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\CloudContent\DisableWindowsConsumerFeatures<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
-|Turn off shell protocol protected mode<br /><sub>(CCE-36809-2)</sub> |**Description**: This policy setting allows you to configure the amount of functionality that the shell protocol can have. When using the full functionality of this protocol applications can open folders and launch files. The protected mode reduces the functionality of this protocol allowing applications to only open a limited set of folders. Applications are not able to open files with this protocol when it is in the protected mode. It is recommended to leave this protocol in the protected mode to increase the security of Windows. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer\PreXPSP2ShellProtocolBehavior<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
-|Turn on behavior monitoring<br /><sub>(AZ-WIN-00178)</sub> |**Description**: This policy setting allows you to configure behavior monitoring. If you enable or do not configure this setting behavior monitoring will be enabled. If you disable this setting behavior monitoring will be disabled.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\Real-Time Protection\DisableBehaviorMonitoring<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|System: Control Event Log behavior when the log file reaches its maximum size<br /><sub>(CCE-36160-0)</sub> |**Description**: This policy setting controls Event Log behavior when the log file reaches its maximum size. If you enable this policy setting and a log file reaches its maximum size, new events are not written to the log and are lost. If you disable or do not configure this policy setting and a log file reaches its maximum size, new events overwrite old events. Note: Old events may or may not be retained according to the "Backup log automatically when full"  policy setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\System\Retention<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|System: Specify the maximum log file size (KB)<br /><sub>(CCE-36092-5)</sub> |**Description**: This policy setting specifies the maximum size of the log file in kilobytes. If you enable this policy setting, you can configure the maximum log file size to be between 1 megabyte (1024 kilobytes) and 2 terabytes (2,147,483,647 kilobytes) in kilobyte increments. If you disable or do not configure this policy setting, the maximum size of the log file will be set to the locally configured value. This value can be changed by the local administrator using the Log Properties dialog and it defaults to 20 megabytes.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\System\MaxSize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 32768<br /><sub>(Registry)</sub> |Critical |
+|Turn off Autoplay<br /><sub>(CCE-36875-3)</sub> |**Description**: Autoplay starts to read from a drive as soon as you insert media in the drive, which causes the setup file for programs or audio media to start immediately. An attacker could use this feature to launch a program to damage the computer or data on the computer. You can enable the Turn off Autoplay setting to disable the Autoplay feature. Autoplay is disabled by default on some removable drive types, such as floppy disk and network drives, but not on CD-ROM drives. Note: You cannot use this policy setting to enable Autoplay on computer drives in which it is disabled by default, such as floppy disk and network drives.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer\NoDriveTypeAutoRun<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 255<br /><sub>(Registry)</sub> |Critical |
+|Turn off Data Execution Prevention for Explorer<br /><sub>(CCE-37809-1)</sub> |**Description**: Disabling data execution prevention can allow certain legacy plug-in applications to function without terminating Explorer. The recommended state for this setting is: `Disabled`. **Note:** Some legacy plug-in applications and other software may not function with Data Execution Prevention and will require an exception to be defined for that specific plug-in/software.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Explorer\NoDataExecutionPrevention<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Turn off heap termination on corruption<br /><sub>(CCE-36660-9)</sub> |**Description**: Without heap termination on corruption, legacy plug-in applications may continue to function when a File Explorer session has become corrupt. Ensuring that heap termination on corruption is active will prevent this. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Explorer\NoHeapTerminationOnCorruption<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Turn off Microsoft consumer experiences<br /><sub>(AZ-WIN-00144)</sub> |**Description**: This policy setting turns off experiences that help consumers make the most of their devices and Microsoft account. If you enable this policy setting, users will no longer see personalized recommendations from Microsoft and notifications about their Microsoft account. If you disable or do not configure this policy setting, users may see suggestions from Microsoft and notifications about their Microsoft account. Note: This setting only applies to Enterprise and Education SKUs.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\CloudContent\DisableWindowsConsumerFeatures<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
+|Turn off shell protocol protected mode<br /><sub>(CCE-36809-2)</sub> |**Description**: This policy setting allows you to configure the amount of functionality that the shell protocol can have. When using the full functionality of this protocol applications can open folders and launch files. The protected mode reduces the functionality of this protocol allowing applications to only open a limited set of folders. Applications are not able to open files with this protocol when it is in the protected mode. It is recommended to leave this protocol in the protected mode to increase the security of Windows. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer\PreXPSP2ShellProtocolBehavior<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Turn on behavior monitoring<br /><sub>(AZ-WIN-00178)</sub> |**Description**: This policy setting allows you to configure behavior monitoring. If you enable or do not configure this setting behavior monitoring will be enabled. If you disable this setting behavior monitoring will be disabled.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\Real-Time Protection\DisableBehaviorMonitoring<br />**OS**: WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
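
Each row above follows the same pattern: a registry value under `HKEY_LOCAL_MACHINE` at the listed **Key Path**, compared against an expected value. As a rough illustration only (this is not part of the Azure guest configuration tooling), the following Python sketch uses the standard `winreg` module to read one such value and compare it against its expected setting. The key path, value name, and expected value are taken from the "Set the default behavior for AutoRun" row; any other row could be checked the same way.

```python
# Minimal sketch: read one baseline registry value and compare it to the
# expected value. Assumes the "Set the default behavior for AutoRun" row
# (NoAutorun = 1); adjust KEY_PATH, VALUE_NAME, and EXPECTED for other rows.
import winreg

KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer"
VALUE_NAME = "NoAutorun"
EXPECTED = 1  # the baseline expects "= 1"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, _type = winreg.QueryValueEx(key, VALUE_NAME)
except FileNotFoundError:
    value = None  # the key or value does not exist on this machine

if value == EXPECTED:
    print(f"{VALUE_NAME} = {value}: matches the expected value")
else:
    print(f"{VALUE_NAME} = {value}: does not match (expected {EXPECTED})")
```

Rows whose expected value is "Doesn't exist or = 0" would additionally treat a missing value (`None` in this sketch) as compliant.
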
## Windows Firewall Properties

|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
|||||
-|Windows Firewall: Domain: Allow unicast response<br /><sub>(AZ-WIN-00088)</sub> |**Description**: <p><span>This option is useful if you need to control whether this computer receives unicast responses to its outgoing multicast or broadcast messages.  </span></p><p><span>We recommend this setting to ‘Yes’ for Private and Domain profiles, this will set the registry value to 0.</span></p><br />**Key Path**: Software\Policies\Microsoft\WindowsFirewall\DomainProfile\DisableUnicastResponsesToMulticastBroadcast<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |
-|Windows Firewall: Domain: Firewall state<br /><sub>(CCE-36062-8)</sub> |**Description**: Select On (recommended) to have Windows Firewall with Advanced Security use the settings for this profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\EnableFirewall<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Domain: Outbound connections<br /><sub>(CCE-36146-9)</sub> |**Description**: This setting determines the behavior for outbound connections that do not match an outbound firewall rule. In Windows Vista, the default behavior is to allow connections unless there are firewall rules that block the connection.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\DefaultOutboundAction<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Domain: Settings: Apply local connection security rules<br /><sub>(CCE-38040-2)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local connection rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’, this will set the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\AllowLocalIPsecPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Domain: Settings: Apply local firewall rules<br /><sub>(CCE-37860-4)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy.</span></p><p><span>The recommended state for this setting is Yes, this will set the registry value to 1. </span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\AllowLocalPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Domain: Settings: Display a notification<br /><sub>(CCE-38041-0)</sub> |**Description**: <p><span>By selecting this option, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, the popups are not useful as the users is not logged in, popups are not necessary and can add confusion for the administrator.  </span></p><p><span>Configure this policy setting to ‘No’, this will set the registry value to 1.  Windows Firewall will not display a notification when a program is blocked from receiving inbound connections.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Windows Firewall: Private: Allow unicast response<br /><sub>(AZ-WIN-00089)</sub> |**Description**: <p><span>This option is useful if you need to control whether this computer receives unicast responses to its outgoing multicast or broadcast messages.  </span></p><p><span>We recommend this setting to ‘Yes’ for Private and Domain profiles, this will set the registry value to 0.</span></p><br />**Key Path**: Software\Policies\Microsoft\WindowsFirewall\PrivateProfile\DisableUnicastResponsesToMulticastBroadcast<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
-|Windows Firewall: Private: Firewall state<br /><sub>(CCE-38239-0)</sub> |**Description**: Select On (recommended) to have Windows Firewall with Advanced Security use the settings for this profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\EnableFirewall<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Private: Outbound connections<br /><sub>(CCE-38332-3)</sub> |**Description**: This setting determines the behavior for outbound connections that do not match an outbound firewall rule. The default behavior is to allow connections unless there are firewall rules that block the connection. Important If you set Outbound connections to Block and then deploy the firewall policy by using a GPO, computers that receive the GPO settings cannot receive subsequent Group Policy updates unless you create and deploy an outbound rule that enables Group Policy to work. Predefined rules for Core Networking include outbound rules that enable Group Policy to work. Ensure that these outbound rules are active, and thoroughly test firewall profiles before deploying.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\DefaultOutboundAction<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Private: Settings: Apply local connection security rules<br /><sub>(CCE-36063-6)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local connection rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’, this will set the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\AllowLocalIPsecPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Private: Settings: Apply local firewall rules<br /><sub>(CCE-37438-9)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy.</span></p><p><span>The recommended state for this setting is Yes, this will set the registry value to 1. </span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\AllowLocalPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Private: Settings: Display a notification<br /><sub>(CCE-37621-0)</sub> |**Description**: <p><span>By selecting this option, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, the popups are not useful as the users is not logged in, popups are not necessary and can add confusion for the administrator.  </span></p><p><span> Configure this policy setting to ‘No’, this will set the registry value to 1.  Windows Firewall will not display a notification when a program is blocked from receiving inbound connections.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Windows Firewall: Public: Allow unicast response<br /><sub>(AZ-WIN-00090)</sub> |**Description**: <p><span>This option is useful if you need to control whether this computer receives unicast responses to its outgoing multicast or broadcast messages. This can be done by changing the state for this setting to ‘No’, this will set the registry value to 1.</span></p><br />**Key Path**: Software\Policies\Microsoft\WindowsFirewall\PublicProfile\DisableUnicastResponsesToMulticastBroadcast<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
-|Windows Firewall: Public: Firewall state<br /><sub>(CCE-37862-0)</sub> |**Description**: Select On (recommended) to have Windows Firewall with Advanced Security use the settings for this profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\EnableFirewall<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Public: Outbound connections<br /><sub>(CCE-37434-8)</sub> |**Description**: This setting determines the behavior for outbound connections that do not match an outbound firewall rule. The default behavior is to allow connections unless there are firewall rules that block the connection. Important If you set Outbound connections to Block and then deploy the firewall policy by using a GPO, computers that receive the GPO settings cannot receive subsequent Group Policy updates unless you create and deploy an outbound rule that enables Group Policy to work. Predefined rules for Core Networking include outbound rules that enable Group Policy to work. Ensure that these outbound rules are active, and thoroughly test firewall profiles before deploying.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\DefaultOutboundAction<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Public: Settings: Apply local connection security rules<br /><sub>(CCE-36268-1)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local connection rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’, this will set the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\AllowLocalIPsecPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Public: Settings: Apply local firewall rules<br /><sub>(CCE-37861-2)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy.</span></p><p><span>The recommended state for this setting is Yes, this will set the registry value to 1. </span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\AllowLocalPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
-|Windows Firewall: Public: Settings: Display a notification<br /><sub>(CCE-38043-6)</sub> |**Description**: <p><span>By selecting this option, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, the popups are not useful as the users is not logged in, popups are not necessary and can add confusion for the administrator.  </span></p><p><span>Configure this policy setting to ‘No’, this will set the registry value to 1.  Windows Firewall will not display a notification when a program is blocked from receiving inbound connections.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Domain: Allow unicast response<br /><sub>(AZ-WIN-00088)</sub> |**Description**: <p><span>This option is useful if you need to control whether this computer receives unicast responses to its outgoing multicast or broadcast messages.  </span></p><p><span>We recommend setting this to ‘Yes’ for the Private and Domain profiles, which sets the registry value to 0.</span></p><br />**Key Path**: Software\Policies\Microsoft\WindowsFirewall\DomainProfile\DisableUnicastResponsesToMulticastBroadcast<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Domain: Firewall state<br /><sub>(CCE-36062-8)</sub> |**Description**: Select On (recommended) to have Windows Firewall with Advanced Security use the settings for this profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\EnableFirewall<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Domain: Outbound connections<br /><sub>(CCE-36146-9)</sub> |**Description**: This setting determines the behavior for outbound connections that do not match an outbound firewall rule. In Windows Vista, the default behavior is to allow connections unless there are firewall rules that block the connection.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\DefaultOutboundAction<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Domain: Settings: Apply local connection security rules<br /><sub>(CCE-38040-2)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local connection rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’, which sets the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\AllowLocalIPsecPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Domain: Settings: Apply local firewall rules<br /><sub>(CCE-37860-4)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy.</span></p><p><span>The recommended state for this setting is Yes, which sets the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\AllowLocalPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Domain: Settings: Display a notification<br /><sub>(CCE-38041-0)</sub> |**Description**: <p><span>When this option is selected, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, these popups aren't useful: no user is logged in, so the popups are unnecessary and can confuse the administrator.</span></p><p><span>Configure this policy setting to ‘No’, which sets the registry value to 1. Windows Firewall won't display a notification when a program is blocked from receiving inbound connections.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Private: Allow unicast response<br /><sub>(AZ-WIN-00089)</sub> |**Description**: <p><span>This option is useful if you need to control whether this computer receives unicast responses to its outgoing multicast or broadcast messages.</span></p><p><span>We recommend setting this to ‘Yes’ for the Private and Domain profiles, which sets the registry value to 0.</span></p><br />**Key Path**: Software\Policies\Microsoft\WindowsFirewall\PrivateProfile\DisableUnicastResponsesToMulticastBroadcast<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Private: Firewall state<br /><sub>(CCE-38239-0)</sub> |**Description**: Select On (recommended) to have Windows Firewall with Advanced Security use the settings for this profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\EnableFirewall<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Private: Outbound connections<br /><sub>(CCE-38332-3)</sub> |**Description**: This setting determines the behavior for outbound connections that do not match an outbound firewall rule. The default behavior is to allow connections unless there are firewall rules that block the connection. Important If you set Outbound connections to Block and then deploy the firewall policy by using a GPO, computers that receive the GPO settings cannot receive subsequent Group Policy updates unless you create and deploy an outbound rule that enables Group Policy to work. Predefined rules for Core Networking include outbound rules that enable Group Policy to work. Ensure that these outbound rules are active, and thoroughly test firewall profiles before deploying.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\DefaultOutboundAction<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Private: Settings: Apply local connection security rules<br /><sub>(CCE-36063-6)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local connection rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’, which sets the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\AllowLocalIPsecPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Private: Settings: Apply local firewall rules<br /><sub>(CCE-37438-9)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy.</span></p><p><span>The recommended state for this setting is Yes, which sets the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\AllowLocalPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Private: Settings: Display a notification<br /><sub>(CCE-37621-0)</sub> |**Description**: <p><span>When this option is selected, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, these popups aren't useful: no user is logged in, so the popups are unnecessary and can confuse the administrator.</span></p><p><span>Configure this policy setting to ‘No’, which sets the registry value to 1. Windows Firewall won't display a notification when a program is blocked from receiving inbound connections.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Public: Allow unicast response<br /><sub>(AZ-WIN-00090)</sub> |**Description**: <p><span>This option is useful if you need to control whether this computer receives unicast responses to its outgoing multicast or broadcast messages. To do so, change this setting to ‘No’, which sets the registry value to 1.</span></p><br />**Key Path**: Software\Policies\Microsoft\WindowsFirewall\PublicProfile\DisableUnicastResponsesToMulticastBroadcast<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Public: Firewall state<br /><sub>(CCE-37862-0)</sub> |**Description**: Select On (recommended) to have Windows Firewall with Advanced Security use the settings for this profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\EnableFirewall<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Public: Outbound connections<br /><sub>(CCE-37434-8)</sub> |**Description**: This setting determines the behavior for outbound connections that do not match an outbound firewall rule. The default behavior is to allow connections unless there are firewall rules that block the connection. Important If you set Outbound connections to Block and then deploy the firewall policy by using a GPO, computers that receive the GPO settings cannot receive subsequent Group Policy updates unless you create and deploy an outbound rule that enables Group Policy to work. Predefined rules for Core Networking include outbound rules that enable Group Policy to work. Ensure that these outbound rules are active, and thoroughly test firewall profiles before deploying.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\DefaultOutboundAction<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Public: Settings: Apply local connection security rules<br /><sub>(CCE-36268-1)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local connection rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’, which sets the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\AllowLocalIPsecPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Public: Settings: Apply local firewall rules<br /><sub>(CCE-37861-2)</sub> |**Description**: <p><span>This setting controls whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy.</span></p><p><span>The recommended state for this setting is Yes, which sets the registry value to 1.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\AllowLocalPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Public: Settings: Display a notification<br /><sub>(CCE-38043-6)</sub> |**Description**: <p><span>When this option is selected, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, these popups aren't useful: no user is logged in, so the popups are unnecessary and can confuse the administrator.</span></p><p><span>Configure this policy setting to ‘No’, which sets the registry value to 1. Windows Firewall won't display a notification when a program is blocked from receiving inbound connections.</span></p><br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016, WS2019, WS2022<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
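To spot-check how a server currently stands against one of these registry-based rules, you can query the relevant value directly. The following sketch is illustrative only and isn't part of the baseline; it assumes an elevated PowerShell window on the target server and uses two of the Domain profile values listed above (these policy values live under the HKLM hive when set by Group Policy).

```shell
# Check the Domain profile firewall state evaluated by CCE-36062-8 (expected data: 0x1).
reg query "HKLM\SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile" /v EnableFirewall

# Check whether local firewall rule merging is allowed, as evaluated by CCE-37860-4 (expected: value missing or 0x1).
reg query "HKLM\SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile" /v AllowLocalPolicyMerge
```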
> [!NOTE]
> Availability of specific Azure Policy guest configuration settings may vary in Azure Government
governance Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Policy description: Sample Azure Resource Graph queries for Azure Policy showing use of resource types and tables to access Azure Policy related resources and properties. Previously updated : 01/20/2022 Last updated : 02/16/2022
governance Work With Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/concepts/work-with-data.md
consumer if there are more records not returned in the response. This condition is
identified when the **count** property is less than the **totalRecords** property. **totalRecords** defines how many records match the query.
-**resultTruncated** is **true** when either paging is disabled or not possible because no `id`
-column or when there are less resources available than a query is requesting. When
-**resultTruncated** is **true**, the **$skipToken** property isn't set.
+**resultTruncated** is **true** when there are fewer resources available than the query is requesting, when paging is disabled, or when paging isn't possible because:
+
+- The query contains a `limit` or `sample`/`take` operator.
+- **All** output columns are either `dynamic` or `null` type.
+
+When **resultTruncated** is **true**, the **$skipToken** property isn't set.
The following examples show how to **skip** the first 3,000 records and return the **first** 1,000 records after the skipped records with Azure CLI and Azure PowerShell:
Search-AzGraph -Query "Resources | project id, name | order by id asc" -First 10
``` > [!IMPORTANT]
-> The query must **project** the **id** field in order for pagination to work. If it's missing from
-> the query, the response won't include the **$skipToken**.
+> The response won't include the **$skipToken** if:
+> - The query contains a `limit` or `sample`/`take` operator.
+> - **All** output columns are either `dynamic` or `null` type.
For an example, see [Next page query](/rest/api/azureresourcegraph/resourcegraph(2021-03-01)/resources/resources#next-page-query)
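To make the paging behavior above concrete, the following hedged Azure CLI sketch runs a pageable query and then requests the next page with the returned skip token. It assumes the `resource-graph` CLI extension is installed, and the `<skipToken-from-previous-response>` placeholder is illustrative rather than a real token.

```shell
# First page: the query avoids limit/sample/take and projects non-dynamic columns, so the response can include $skipToken.
az graph query -q "Resources | project id, name | order by id asc" --first 1000

# Next page: pass the $skipToken value returned by the previous response.
az graph query -q "Resources | project id, name | order by id asc" --first 1000 --skip-token "<skipToken-from-previous-response>"
```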
governance Supported Tables Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/reference/supported-tables-resources.md
Title: Supported Azure Resource Manager resource types description: Provide a list of the Azure Resource Manager resource types supported by Azure Resource Graph and Change History. Previously updated : 01/20/2022 Last updated : 02/16/2022
For sample queries for this table, see [Resource Graph sample queries for resour
For sample queries for this table, see [Resource Graph sample queries for resources](../samples/samples-by-table.md#resources).
+- /datascanners/{scannername}
- 84codes.cloudamqp/servers - Citrix.Services/XenAppEssentials (Citrix Virtual Apps Essentials) - Citrix.Services/XenDesktopEssentials (Citrix Virtual Desktops Essentials)-- conexlink.mycloudit/accounts - crypteron.datasecurity/apps - Dynatrace.Observability/monitors (Dynatrace) - GitHub.Enterprise/accounts (GitHub AE)
For sample queries for this table, see [Resource Graph sample queries for resour
- gridpro.evops/accounts/eventrules - gridpro.evops/accounts/requesttemplates - gridpro.evops/accounts/views-- hive.streaming/services - incapsula.waf/accounts - livearena.broadcast/services - mailjet.email/services
For sample queries for this table, see [Resource Graph sample queries for resour
- Microsoft.AnalysisServices/servers (Analysis Services) - Microsoft.AnyBuild/clusters (AnyBuild clusters) - Microsoft.ApiManagement/service (API Management services)
+- microsoft.app/containerapps
- microsoft.app/managedenvironments - microsoft.appassessment/migrateprojects - Microsoft.AppConfiguration/configurationStores (App Configuration)
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.azuresphere/catalogs/products/devicegroups - microsoft.azurestack/edgesubscriptions - microsoft.azurestack/linkedsubscriptions-- Microsoft.Azurestack/registrations (Azure Stack Hubs)
+- microsoft.azurestack/registrations
- Microsoft.AzureStackHCI/clusters (Azure Stack HCI) - microsoft.azurestackhci/galleryimages - microsoft.azurestackhci/networkinterfaces
For sample queries for this table, see [Resource Graph sample queries for resour
- Microsoft.Cache/Redis (Azure Cache for Redis) - Microsoft.Cache/RedisEnterprise (Redis Enterprise) - microsoft.cascade/sites-- Microsoft.Cdn/CdnWebApplicationFirewallPolicies (Web application firewall policies (WAF))
+- Microsoft.Cdn/CdnWebApplicationFirewallPolicies (Content Delivery Network WAF policies)
- microsoft.cdn/profiles (Front Doors Standard/Premium (Preview)) - Microsoft.Cdn/Profiles/AfdEndpoints (Endpoints) - microsoft.cdn/profiles/endpoints (Endpoints)
For sample queries for this table, see [Resource Graph sample queries for resour
- Microsoft.Experimentation/experimentWorkspaces (Experiment Workspaces) - Microsoft.ExtendedLocation/CustomLocations (Custom locations) - Sample query: [List Azure Arc-enabled custom locations with VMware or SCVMM enabled](../samples/samples-by-category.md#list-azure-arc-enabled-custom-locations-with-vmware-or-scvmm-enabled)
+- microsoft.extendedlocation/customlocations/resourcesyncrules
- microsoft.falcon/namespaces - Microsoft.Fidalgo/devcenters (Fidalgo DevCenters) - microsoft.fidalgo/machinedefinitions
For sample queries for this table, see [Resource Graph sample queries for resour
- Microsoft.HealthcareApis/workspaces/fhirservices (FHIR services) - Microsoft.HealthcareApis/workspaces/iotconnectors (IoT connectors) - Microsoft.HpcWorkbench/instances (HPC Workbenches (preview))
+- Microsoft.HpcWorkbench/instances/chambers (Chambers (preview))
+- Microsoft.HpcWorkbench/instances/chambers/accessProfiles (Chamber Profiles (preview))
+- Microsoft.HpcWorkbench/instances/chambers/workloads (Chamber VMs (preview))
+- Microsoft.HpcWorkbench/instances/consortiums (Consortiums (preview))
- Microsoft.HybridCompute/machines (Servers - Azure Arc) - Sample query: [Get count and percentage of Arc-enabled servers by domain](../samples/samples-by-category.md#get-count-and-percentage-of-arc-enabled-servers-by-domain) - Sample query: [List all extensions installed on an Azure Arc-enabled server](../samples/samples-by-category.md#list-all-extensions-installed-on-an-azure-arc-enabled-server)
For sample queries for this table, see [Resource Graph sample queries for resour
- Sample query: [List all extensions installed on an Azure Arc-enabled server](../samples/samples-by-category.md#list-all-extensions-installed-on-an-azure-arc-enabled-server) - Microsoft.HybridCompute/privateLinkScopes (Azure Arc Private Link Scopes) - microsoft.hybridcontainerservice/provisionedclusters
+- microsoft.hybridcontainerservice/provisionedclusters/agentpools
- Microsoft.HybridData/dataManagers (StorSimple Data Managers) - Microsoft.HybridNetwork/devices (Azure Network Function Manager ΓÇô Devices) - Microsoft.HybridNetwork/networkFunctions (Azure Network Function Manager ΓÇô Network Functions)
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.insights/webtests (Availability tests) - microsoft.insights/workbooks (Azure Workbooks) - microsoft.insights/workbooktemplates (Azure Workbook Templates)-- Microsoft.IntelligentITDigitalTwin/digitalTwins (Minervas)-- Microsoft.IntelligentITDigitalTwin/digitalTwins/assets (Assets)-- Microsoft.IntelligentITDigitalTwin/digitalTwins/executionPlans (Deployments)-- Microsoft.IntelligentITDigitalTwin/digitalTwins/testPlans (Suites)-- Microsoft.IntelligentITDigitalTwin/digitalTwins/tests (Scripts)
+- microsoft.intelligentitdigitaltwin/digitaltwins
+- microsoft.intelligentitdigitaltwin/digitaltwins/assets
+- microsoft.intelligentitdigitaltwin/digitaltwins/executionplans
+- microsoft.intelligentitdigitaltwin/digitaltwins/testplans
+- microsoft.intelligentitdigitaltwin/digitaltwins/tests
- Microsoft.IoTCentral/IoTApps (IoT Central Applications) - microsoft.iotspaces/graph - microsoft.keyvault/hsmpools
For sample queries for this table, see [Resource Graph sample queries for resour
- Microsoft.MixedReality/spatialAnchorsAccounts (Spatial Anchors Accounts) - microsoft.mixedreality/surfacereconstructionaccounts - Microsoft.MobileNetwork/mobileNetworks (Mobile Networks)-- microsoft.mobilenetwork/mobilenetworks/datanetworks
+- Microsoft.MobileNetwork/mobileNetworks/dataNetworks (Data Networks)
- Microsoft.MobileNetwork/mobileNetworks/services (Services)-- microsoft.mobilenetwork/mobilenetworks/simpolicies
+- Microsoft.MobileNetwork/mobileNetworks/simPolicies (Sim Policies)
- Microsoft.MobileNetwork/mobileNetworks/sites (Mobile Network Sites)-- microsoft.mobilenetwork/mobilenetworks/slices
+- Microsoft.MobileNetwork/mobileNetworks/slices (Slices)
- microsoft.mobilenetwork/networks - microsoft.mobilenetwork/networks/sites-- Microsoft.MobileNetwork/packetCoreControlPlanes (Arc for network functions ΓÇô Packet Cores)-- microsoft.mobilenetwork/packetcorecontrolplanes/packetcoredataplanes-- microsoft.mobilenetwork/packetcorecontrolplanes/packetcoredataplanes/attacheddatanetworks
+- Microsoft.MobileNetwork/packetCoreControlPlanes (Packet Core Control Planes)
+- Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes (Packet Core Data Planes)
+- Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/attachedDataNetworks (Attached Data Networks)
- Microsoft.MobileNetwork/sims (Sims) - microsoft.mobilenetwork/sims/simprofiles - microsoft.monitor/accounts
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.netapp/netappaccounts/capacitypools/volumes/subvolumes - Microsoft.NetApp/netAppAccounts/snapshotPolicies (Snapshot policies) - Microsoft.Network/applicationGateways (Application gateways)-- Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies (Web application firewall policies (WAF))
+- Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies (Application Gateway WAF policies)
- Microsoft.Network/applicationSecurityGroups (Application security groups) - Microsoft.Network/azureFirewalls (Firewalls) - Microsoft.Network/bastionHosts (Bastions)
For sample queries for this table, see [Resource Graph sample queries for resour
- Microsoft.Network/ddosProtectionPlans (DDoS protection plans) - Microsoft.Network/dnsForwardingRulesets (Dns Forwarding Rulesets) - Microsoft.Network/dnsResolvers (DNS Private Resolvers)
+- microsoft.network/dnsresolvers/inboundendpoints
+- microsoft.network/dnsresolvers/outboundendpoints
- Microsoft.Network/dnsZones (DNS zones) - microsoft.network/dscpconfigurations - Microsoft.Network/expressRouteCircuits (ExpressRoute circuits)
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.powerplatform/enterprisepolicies - microsoft.projectbabylon/accounts - microsoft.providerhubdevtest/regionalstresstests-- Microsoft.Purview/Accounts (Purview accounts)
+- Microsoft.Purview/Accounts (Azure Purview accounts)
- Microsoft.Quantum/Workspaces (Quantum Workspaces) - Microsoft.RecommendationsService/accounts (Intelligent Recommendations Accounts) - Microsoft.RecommendationsService/accounts/modeling (Modeling) - Microsoft.RecommendationsService/accounts/serviceEndpoints (Service Endpoints) - Microsoft.RecoveryServices/vaults (Recovery Services vaults)
+- microsoft.recoveryservices/vaults/backupstorageconfig
+- microsoft.recoveryservices/vaults/replicationfabrics
+- microsoft.recoveryservices/vaults/replicationfabrics/replicationprotectioncontainers
- microsoft.recoveryservices/vaults/replicationfabrics/replicationprotectioncontainers/replicationprotecteditems
+- microsoft.recoveryservices/vaults/replicationfabrics/replicationprotectioncontainers/replicationprotectioncontainermappings
- microsoft.recoveryservices/vaults/replicationfabrics/replicationrecoveryservicesproviders - Microsoft.RedHatOpenShift/OpenShiftClusters (Azure Red Hat OpenShift) - Microsoft.Relay/namespaces (Relays)
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.resources/templatespecs/versions - Microsoft.SaaS/applications (Software as a Service (classic)) - Microsoft.SaaS/resources (SaaS)-- Microsoft.Scheduler/jobCollections (Scheduler Job Collections)
+- microsoft.scheduler/jobcollections
- Microsoft.Scom/managedInstances (Aquila Instances) - microsoft.scvmm/availabilitysets - microsoft.scvmm/clouds
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.security/automations - microsoft.security/customassessmentautomations - microsoft.security/customentitystoreassignments
+- microsoft.security/datascanners
- microsoft.security/iotsecuritysolutions - microsoft.security/securityconnectors - microsoft.security/standards - Microsoft.SecurityDetonation/chambers (Security Detonation Chambers)
+- microsoft.securitydevops/githubconnectors
- Microsoft.ServiceBus/namespaces (Service Bus Namespaces) - Microsoft.ServiceFabric/clusters (Service Fabric clusters) - microsoft.servicefabric/containergroupsets
For sample queries for this table, see [Resource Graph sample queries for resour
- pokitdok.platform/services - private.arsenv1/resourcetype1 - private.autonomousdevelopmentplatform/accounts
+- private.connectedvehicle/platformaccounts
- private.contoso/employees - private.flows/flows-- Providers.Test/statefulIbizaEngines (My Resources)
+- Providers.Test/statefulIbizaEngines (VLCentral Help)
- ravenhq.db/databases - raygun.crashreporting/apps - sendgrid.email/accounts
For sample queries for this table, see [Resource Graph sample queries for resour
- test.shoebox/testresources - test.shoebox/testresources2 - trendmicro.deepsecurity/accounts-- u2uconsult.theidentityhub/services - Wandisco.Fusion/fusionGroups (LiveData Planes) - Wandisco.Fusion/fusionGroups/azureZones (Azure Zones) - Wandisco.Fusion/fusionGroups/azureZones/plugins (Plugins)
For sample queries for this table, see [Resource Graph sample queries for resour
For sample queries for this table, see [Resource Graph sample queries for securityresources](../samples/samples-by-table.md#securityresources).
+- microsoft.authorization/locks/providers/assessments/governanceassignments
+- microsoft.authorization/roleassignments/providers/assessments/governanceassignments
- microsoft.security/assessments - Sample query: [Count healthy, unhealthy, and not applicable resources per recommendation](../samples/samples-by-category.md#count-healthy-unhealthy-and-not-applicable-resources-per-recommendation) - Sample query: [List Azure Security Center recommendations](../samples/samples-by-category.md#list-azure-security-center-recommendations) - Sample query: [List Container Registry vulnerability assessment results](../samples/samples-by-category.md#list-container-registry-vulnerability-assessment-results) - Sample query: [List Qualys vulnerability assessment results](../samples/samples-by-category.md#list-qualys-vulnerability-assessment-results)
+- microsoft.security/assessments/governanceassignments
- microsoft.security/assessments/subassessments - Sample query: [List Container Registry vulnerability assessment results](../samples/samples-by-category.md#list-container-registry-vulnerability-assessment-results) - Sample query: [List Qualys vulnerability assessment results](../samples/samples-by-category.md#list-qualys-vulnerability-assessment-results)
+- microsoft.security/governancerules
- microsoft.security/insights/classification (Data Sensitivity Security Insights (Preview)) - Sample query: [Get sensitivity insight of a specific resource](../samples/samples-by-category.md#get-sensitivity-insight-of-a-specific-resource) - microsoft.security/iotalerts
governance Samples By Category https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/samples-by-category.md
Title: List of sample Azure Resource Graph queries by category description: List sample queries for Azure Resource-Graph. Categories include Tags, Azure Advisor, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 01/20/2022 Last updated : 02/16/2022
governance Samples By Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/samples-by-table.md
Title: List of sample Azure Resource Graph queries by table description: List sample queries for Azure Resource-Graph. Tables include Resources, ResourceContainers, PolicyResources, and more. Previously updated : 01/20/2022 Last updated : 02/16/2022
iot-central Howto Create Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-analytics.md
# This article applies to operators, builders, and administrators.
-# How to use analytics to analyze device data
+# How to use data explorer to analyze device data
-Azure IoT Central provides rich analytics capabilities to analyze historical trends and correlate telemetry from your devices. To get started, select **Analytics** on the left pane.
+Azure IoT Central provides rich analytics capabilities to analyze historical trends and correlate telemetry from your devices. To get started, select **Data explorer** on the left pane.
## Understand the data explorer UI
The analytics user interface has three main components:
- **Chart control:** The chart control visualizes the data as a line chart. You can toggle the visibility of specific lines by interacting with the chart legend.
- :::image type="content" source="media/howto-create-analytics/analytics-ui.png" alt-text="Screenshot that shows the three areas of the analytics UI.":::
+ :::image type="content" source="media/howto-create-analytics/analytics-ui.png" alt-text="Screenshot that shows the three areas of the data explorer UI.":::
## Query your data
Select **Save** to save an analytics query. Later, you can retrieve any queries
- **Interval-size slider**: Use the slider to zoom in and out of intervals over the same time span. This control gives more precise control of movement between large slices of time. You can use it to see granular, high-resolution views of your data, even down to milliseconds. The default start point of the slider gives you an optimal view of the data from your selection. This view balances resolution, query speed, and granularity.
- - **Date range picker**: Use this control, to select the date and time ranges you want. You can also use the control to switch between different time zones. After you make the changes to apply to your current workspace, select **Save**.
+ - **Timeframe**: Use this control to select the date and time ranges you want. You can also use the control to switch between different time zones. After you make changes, select **Save** to apply them to your current workspace.
> [!TIP]
> Interval size is determined dynamically based on the selected time span. Smaller time spans let you aggregate the data into very granular intervals of up to a few seconds.
Select **Save** to save an analytics query. Later, you can retrieve any queries
:::image type="content" source="media/howto-create-analytics/y-axis-control.png" alt-text="A screenshot that highlights the y-axis control."::: -- **Zoom control:** The zoom control lets you drill further into your data. If you find a time period you'd like to focus on within your result set, use your mouse pointer to highlight the area. Then right-click on the selected area and select **Zoom**.
+- **Zoom control:** The zoom control lets you drill further into your data. If you find a time period you'd like to focus on within your result set, use your mouse pointer to highlight the area. Then select **Zoom in**.
:::image type="content" source="media/howto-create-analytics/zoom.png" alt-text="A Screenshot that shows the use of the zoom control."::: Select the ellipsis, for more chart controls: -- **Display Grid:** Display your results in a table format that lets you view the value for each data point.
+- **View Data as Table:** Display your results in a table format that lets you view the value for each data point.
- **Download as CSV:** Export your results as a comma-separated values (CSV) file. The CSV file contains data for each device. Results are exported by using the interval and timeframe specified.
iot-central Howto Map Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-map-data.md
By default, data exports from IoT Central include mapped data. To exclude mapped
## Next steps
-Now that you've learned how to map data for your device, a suggested next step is to learn [How to use analytics to analyze device data](howto-create-analytics.md).
+Now that you've learned how to map data for your device, a suggested next step is to learn [How to use data explorer to analyze device data](howto-create-analytics.md).
iot-central Howto Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-transform-data.md
In this article, you learned about the different options for transforming device
- Use an IoT Edge module to transform data from downstream devices before the data is sent to your IoT Central application. - Use Azure Functions to transform data outside of IoT Central. In this scenario, IoT Central uses a data export to send incoming data to an Azure function to be transformed. The function sends the transformed data back to your IoT Central application.
-Now that you've learned how to transform device data outside of your Azure IoT Central application, you can learn [How to use analytics to analyze device data in IoT Central](howto-create-analytics.md).
+Now that you've learned how to transform device data outside of your Azure IoT Central application, you can learn [How to use data explorer to analyze device data in IoT Central](howto-create-analytics.md).
iot-central Overview Iot Central Operator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-operator.md
IoT Central lets you complete device management tasks such as:
To monitor devices, use the custom device views defined by a solution builder. These views can show device telemetry and property values. An example is the **Overview** view shown in the previous screenshot.
-For more detailed information, use device groups and the built-in analytics features. To learn more, see [How to use analytics to analyze device data](howto-create-analytics.md).
+For more detailed information, use device groups and the built-in analytics features. To learn more, see [How to use data explorer to analyze device data](howto-create-analytics.md).
To manage individual devices, use device views to set device and cloud properties, and call device commands. Examples include the **Manage device** and **Commands** views in the previous screenshot.
iot-central Overview Iot Central Solution Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-solution-builder.md
Built-in features of IoT Central you can use to extract business value include:
- [Tutorial: Create a rule and set up notifications in your Azure IoT Central application](tutorial-create-telemetry-rules.md) - [Configure rules](howto-configure-rules.md)
- IoT Central has built-in analytics capabilities that an operator can use to analyze the data flowing from the connected devices. To learn more, see [How to use analytics to analyze device data](howto-create-analytics.md).
+ IoT Central has built-in analytics capabilities that an operator can use to analyze the data flowing from the connected devices. To learn more, see [How to use data explorer to analyze device data](howto-create-analytics.md).
Scenarios that process IoT data outside of IoT Central to extract business value include:
iot-central Overview Iot Central Tour https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-tour.md
The device templates page is where you can view and create device templates in t
### Data Explorer

Data explorer exposes rich capabilities to analyze historical trends and correlate various telemetries from your devices. To learn more, see the [Create analytics for your Azure IoT Central application](howto-create-analytics.md) article.
iot-central Tutorial Use Device Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-use-device-groups.md
To analyze the telemetry for a device group:
:::image type="content" source="media/tutorial-use-device-groups/export-data.png" alt-text="Screenshot that shows how to export data for the Contoso devices":::
-To learn more about analytics, see [How to use analytics to analyze device data](howto-create-analytics.md).
+To learn more about analytics, see [How to use data explorer to analyze device data](howto-create-analytics.md).
## Clean up resources
iot-develop Quickstart Devkit Espressif Esp32 Freertos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-espressif-esp32-freertos.md
+
+ Title: Connect an ESPRESSIF ESP-32 to Azure IoT Central quickstart
+description: Use Azure IoT middleware for FreeRTOS to connect an ESPRESSIF ESP32-Azure IoT Kit device to Azure IoT and send telemetry.
+++
+ms.devlang: c
+ Last updated : 12/02/2021+
+#Customer intent: As a device builder, I want to see a working IoT device sample connecting to Azure IoT, sending properties and telemetry, and responding to commands. As a solution builder, I want to use a tool to view the properties, commands, and telemetry an IoT Plug and Play device reports to the IoT hub it connects to.
++
+# Quickstart: Connect an ESPRESSIF ESP32-Azure IoT Kit to IoT Central
+
+**Applies to**: [Embedded device development](about-iot-develop.md#embedded-device-development)<br>
+**Total completion time**: 30 minutes
+
+In this quickstart, you use the Azure IoT middleware for FreeRTOS to connect the ESPRESSIF ESP32-Azure IoT Kit (from now on, the ESP32 DevKit) to Azure IoT.
+
+You'll complete the following tasks:
+
+* Install a set of embedded development tools for programming an ESP32 DevKit
+* Build an image and flash it onto the ESP32 DevKit
+* Use Azure IoT Central to create cloud components, view properties, view device telemetry, and call direct commands
+
+## Prerequisites
+
+Operating system: Windows 10 or Windows 11
+
+Hardware:
+- ESPRESSIF [ESP32-Azure IoT Kit](https://www.espressif.com/products/devkits/esp32-azure-kit/overview)
+- USB 2.0 A male to Micro USB male cable
+- Wi-Fi 2.4 GHz
+
+## Prepare the development environment
+
+To set up your development environment, first you install the ESPRESSIF ESP-IDF build environment. The installer includes all the tools required to clone, build, flash, and monitor your device.
+
+To install the ESP-IDF tools:
+1. Download and launch the [ESP-IDF Online installer](https://dl.espressif.com/dl/esp-idf).
+1. When the installer prompts for a version, select version ESP-IDF v4.3.
+1. When the installer prompts for the components to install, select all components.
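Optionally, after you open the ESP-IDF PowerShell described in the next section, you can confirm which toolchain version is active. This is a quick sanity check rather than a step from the quickstart, and it assumes `idf.py` is on the path once the ESP-IDF environment is loaded:

```shell
# Prints the active ESP-IDF version; it should report v4.3.
idf.py --version
```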
++
+## Prepare the device
+To connect the ESP32 DevKit to Azure, you'll modify configuration settings, build the image, and flash the image to the device. You can run all the commands in this section within the ESP-IDF command line.
+
+### Set up the environment
+To start the ESP-IDF PowerShell and clone the repo:
+1. Select Windows **Start**, and launch **ESP-IDF PowerShell**.
+1. Navigate to a working folder where you want to clone the repo.
+1. Clone the repo. This repo contains the Azure FreeRTOS middleware and sample code that you'll use to build an image for the ESP32 DevKit.
+
+ ```shell
+ git clone --recursive https://github.com/Azure-Samples/iot-middleware-freertos-samples
+ ```
+
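If the repo was cloned without the `--recursive` flag, the sample's submodules can still be fetched afterward with a standard Git command. This isn't a step from the quickstart, just a common recovery:

```shell
# Run from inside the cloned iot-middleware-freertos-samples folder.
git submodule update --init --recursive
```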
+To launch the ESP-IDF configuration settings:
+1. In **ESP-IDF PowerShell**, navigate to the *iot-middleware-freertos-samples* directory that you cloned previously.
+1. Navigate to the ESP32-Azure IoT Kit project directory *demos\projects\ESPRESSIF\aziotkit*.
+1. Run the following command to launch the configuration menu:
+
+ ```shell
+ idf.py menuconfig
+ ```
+
+### Add configuration
+
+To add configuration to connect to Azure IoT Central:
+1. In **ESP-IDF PowerShell**, select **Azure IoT middleware for FreeRTOS Main Task Configuration >**, and press Enter.
+1. Select **Enable Device Provisioning Sample**, and press Enter to enable it.
+1. Set the following Azure IoT configuration settings to the values that you saved after you created Azure resources.
+
+ |Setting|Value|
+ |-|--|
+ |**Azure IoT Device Symmetric Key** |{*Your primary key value*}|
+ |**Azure Device Provisioning Service Registration ID** |{*Your Device ID value*}|
+ |**Azure Device Provisioning Service ID Scope** |{*Your ID scope value*}|
+
+1. Press Esc to return to the previous menu.
+
+To add wireless network configuration:
+1. Select **Azure IoT middleware for FreeRTOS Sample Configuration >**, and press Enter.
+1. Set the following configuration settings using your local wireless network credentials.
+
+ |Setting|Value|
+ |-|--|
+ |**WiFi SSID** |{*Your Wi-Fi SSID*}|
+ |**WiFi Password** |{*Your Wi-Fi password*}|
+
+1. Press Esc to return to the previous menu.
+
+To save the configuration:
+1. Press **S** to open the save options, then press Enter to save the configuration.
+1. Press Enter to dismiss the acknowledgment message.
+1. Press **Q** to quit the configuration menu.
++
+### Build and flash the image
+In this section, you use the ESP-IDF tools to build, flash, and monitor the ESP32 DevKit as it connects to Azure IoT.
+
+> [!NOTE]
+> In the following commands in this section, use a short build output path near your root directory. Specify the build path after the `-B` parameter in each command that requires it. The short path helps to avoid a current issue in the ESPRESSIF ESP-IDF tools that can cause errors with long build path names. The following commands use a local path *C:\espbuild* as an example.
+
+To build the image:
+1. In **ESP-IDF PowerShell**, from the *iot-middleware-freertos-samples\demos\projects\ESPRESSIF\aziotkit* directory, run the following command to build the image.
+
+ ```shell
+ idf.py --no-ccache -B "C:\espbuild" build
+ ```
+
+1. After the build completes, confirm that the binary image file was created in the build path that you specified previously.
+
+ *C:\espbuild\azure_iot_freertos_esp32.bin*
+
+To flash the image:
+1. On the ESP32 DevKit, locate the Micro USB port, which is highlighted in the following image:
+
+ :::image type="content" source="media/quickstart-devkit-espressif-esp32/esp-azure-iot-kit.png" alt-text="Photo of the ESP32-Azure IoT Kit board.":::
+
+1. Connect the Micro USB cable to the Micro USB port on the ESP32 DevKit, and then connect it to your computer.
+1. Open Windows **Device Manager**, and view **Ports** to find out which COM port the ESP32 DevKit is connected to.
+
+ :::image type="content" source="media/quickstart-devkit-espressif-esp32/esp-device-manager.png" alt-text="Screenshot of Windows Device Manager displaying COM port for a connected device.":::
+
+1. In **ESP-IDF PowerShell**, run the following command, replacing the *\<Your-COM-port\>* placeholder and brackets with the correct COM port from the previous step. For example, replace the placeholder with `COM3`.
+
+ ```shell
+ idf.py --no-ccache -B "C:\espbuild" -p <Your-COM-port> flash
+ ```
+
+1. Confirm that the output completes with the following text for a successful flash:
+
+ ```output
+ Hash of data verified
+
+ Leaving...
+ Hard resetting via RTS pin...
+ Done
+ ```
+
+To confirm that the device connects to Azure IoT Central:
+1. In **ESP-IDF PowerShell**, run the following command to start the monitoring tool. As you did in a previous command, replace the *\<Your-COM-port\>* placeholder and brackets with the COM port that the device is connected to.
+
+ ```shell
+ idf.py -B "C:\espbuild" -p <Your-COM-port> monitor
+ ```
+
+1. Check for repeating blocks of output similar to the following example. This output confirms that the device connects to Azure IoT and sends telemetry.
+
+ ```output
+ I (50807) AZ IOT: Successfully sent telemetry message
+ I (50807) AZ IOT: Attempt to receive publish message from IoT Hub.
+
+ I (51057) MQTT: Packet received. ReceivedBytes=2.
+ I (51057) MQTT: Ack packet deserialized with result: MQTTSuccess.
+ I (51057) MQTT: State record updated. New state=MQTTPublishDone.
+ I (51067) AZ IOT: Puback received for packet id: 0x00000008
+ I (53067) AZ IOT: Keeping Connection Idle...
+ ```
+
+## Verify the device status
+
+To view the device status in the IoT Central portal:
+1. From the application dashboard, select **Devices** on the side navigation menu.
+1. Confirm that the **Device status** of the device is updated to **Provisioned**.
+1. Confirm that the **Device template** of the device has updated to **Espressif ESP32 Azure IoT Kit**.
+
+ :::image type="content" source="media/quickstart-devkit-espressif-esp32/esp-device-status.png" alt-text="Screenshot of ESP32 DevKit device status in IoT Central.":::
+
+## View telemetry
+
+In IoT Central, you can view the flow of telemetry from your device to the cloud.
+
+To view telemetry in IoT Central:
+1. From the application dashboard, select **Devices** on the side navigation menu.
+1. Select the device from the device list.
+1. Select the **Overview** tab on the device page, and view the telemetry as the device sends messages to the cloud.
+
+ :::image type="content" source="media/quickstart-devkit-espressif-esp32/esp-telemetry.png" alt-text="Screenshot of the ESP32 DevKit device sending telemetry to IoT Central.":::
+
+## Send a command to the device
+
+You can also use IoT Central to send a command to your device. In this section, you run commands to send a message to the screen and toggle LED lights.
+
+To write to the screen:
+1. In IoT Central, select the **Commands** tab on the device page.
+1. Locate the **Espressif ESP32 Azure IoT Kit / Display Text** command.
+1. In the **Content** textbox, enter the text you want to send to the device screen.
+1. Select **Run**.
+1. Confirm that the device screen updates with the text.
+
+To toggle an LED:
+1. Select the **Commands** tab on the device page.
+1. Locate the **Toggle LED 1** or **Toggle LED 2** commands.
+1. Select **Run**.
+1. Confirm that an LED light on the device toggles on or off.
+
+    :::image type="content" source="media/quickstart-devkit-espressif-esp32/esp-direct-commands.png" alt-text="Screenshot of entering direct commands for the device in IoT Central.":::
+
+## View device information
+
+You can view the device information from IoT Central.
+
+Select the **About** tab on the device page.
++
+## Clean up resources
+
+If you no longer need the Azure resources created in this tutorial, you can delete them from the IoT Central portal. Optionally, if you continue to another article in this Getting Started content, you can keep the resources you've already created and reuse them.
+
+To keep the Azure IoT Central sample application but remove only specific devices:
+
+1. Select the **Devices** tab for your application.
+1. Select the device from the device list.
+1. Select **Delete**.
+
+To remove the entire Azure IoT Central sample application and all its devices and resources:
+
+1. Select **Administration** > **Your application**.
+1. Select **Delete**.
+
+## Next Steps
+
+In this quickstart, you built a custom image that contains the Azure IoT middleware for FreeRTOS sample code, and then you flashed the image to the ESP32 DevKit device. You also used the IoT Central portal to create Azure resources, connect the ESP32 DevKit securely to Azure, view telemetry, and send messages.
+
+As a next step, explore the following articles to learn more about working with embedded devices and connecting them to Azure IoT.
+
+> [!div class="nextstepaction"]
+> [Azure IoT middleware for FreeRTOS samples](https://github.com/Azure-Samples/iot-middleware-freertos-samples)
+> [!div class="nextstepaction"]
+> [Azure RTOS embedded development quickstarts](quickstart-devkit-mxchip-az3166.md)
+> [!div class="nextstepaction"]
+> [Azure IoT device development documentation](./index.yml)
iot-develop Quickstart Devkit Stm B L475e Freertos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l475e-freertos.md
Title: Connect an STMicroelectronics B-L475E to Azure IoT Central quickstart
-description: Use Azure FreeRTOS device middleware to connect an STMicroelectronics B-L475E-IOT01A Discovery kit to Azure IoT and send telemetry.
+description: Use Azure IoT middleware for FreeRTOS to connect an STMicroelectronics B-L475E-IOT01A Discovery kit to Azure IoT and send telemetry.
**Applies to**: [Embedded device development](about-iot-develop.md#embedded-device-development)<br> **Total completion time**: 30 minutes
-In this quickstart, you use the Azure FreeRTOS middleware to connect the STMicroelectronics B-L475E-IOT01A Discovery kit (from now on, the STM DevKit) to Azure IoT Central.
+In this quickstart, you use the Azure IoT middleware for FreeRTOS to connect the STMicroelectronics B-L475E-IOT01A Discovery kit (from now on, the STM DevKit) to Azure IoT Central.
You complete the following tasks:
To remove the entire Azure IoT Central sample application and all its devices an
## Next Steps
-In this quickstart, you built a custom image that contains the Azure FreeRTOS middleware sample code. Then you flashed the image to the STM DevKit device. You also used the IoT Central portal to create Azure resources, connect the STM DevKit securely to Azure, view telemetry, and send messages.
+In this quickstart, you built a custom image that contains the Azure IoT middleware for FreeRTOS sample code. Then you flashed the image to the STM DevKit device. You also used the IoT Central portal to create Azure resources, connect the STM DevKit securely to Azure, view telemetry, and send messages.
As a next step, explore the following articles to learn how to work with embedded devices and connect them to Azure IoT. > [!div class="nextstepaction"]
-> [Azure FreeRTOS middleware samples](https://github.com/Azure-Samples/iot-middleware-freertos-samples)
+> [Azure IoT middleware for FreeRTOS samples](https://github.com/Azure-Samples/iot-middleware-freertos-samples)
> [!div class="nextstepaction"] > [Azure RTOS embedded development quickstarts](quickstart-devkit-mxchip-az3166.md) > [!div class="nextstepaction"]
iot-dps Iot Dps Understand Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/iot-dps-understand-ip-address.md
Previously updated : 03/12/2020 Last updated : 02/22/2022 # IoT Hub DPS IP addresses
-The IP address prefixes for the public endpoints of an IoT Hub Device Provisioning Service (DPS) are published periodically under the _AzureIoTHub_ [service tag](../virtual-network/service-tags-overview.md). You may use these IP address prefixes to control connectivity between an IoT DPS instance and devices or network assets in order to implement a variety of network isolation goals:
+The IP address prefixes for the public endpoints of an IoT Hub Device Provisioning Service (DPS) are published periodically under the _AzureIoTHub_ [service tag](../virtual-network/service-tags-overview.md). You may use these IP address prefixes to control connectivity between an IoT DPS instance and devices or network assets to implement a variety of network isolation goals:
| Goal | Approach |
|--|--|
-| Ensure your devices and services communicate with IoT Hub DPS endpoints only | Use the _AzureIoTHub_ service tag to discover IoT Hub DPS instances. Configure ALLOW rules on your devices' and services' firewall setting for those IP address prefixes accordingly. Configure rules to drop traffic to other destination IP addresses that you do not want devices or services to communicate with. |
-| Ensure your IoT Hub DPS endpoint receives connections only from your devices and network assets | Use IoT DPS [IP filter feature](iot-dps-ip-filtering.md) to create filter rules for the device and DPS service APIs. These filter rules can be used to allow connections only from your devices and network asset IP addresses (see [limitations](#limitations-and-workarounds) section). |
---
+| Ensure your devices and services communicate with IoT Hub DPS endpoints only | Use the _AzureIoTHub_ service tag to discover IoT Hub DPS instances. Configure ALLOW rules on your devices' and services' firewall setting for those IP address prefixes accordingly. Configure rules to drop traffic to other destination IP addresses that you don't want devices or services to communicate with. |
+| Ensure your IoT Hub DPS endpoint receives connections only from your devices and network assets | Use IoT DPS [IP filter feature](iot-dps-ip-filtering.md) to create filter rules for the device and DPS service APIs. These filter rules can be used to allow connections only from your devices and network asset IP addresses (see [limitations](#limitations-and-workarounds) section). |
## Best practices
-* When adding ALLOW rules in your devices' firewall configuration, it is best to provide specific [ports used by applicable protocols](../iot-hub/iot-hub-devguide-protocols.md#port-numbers).
+* When adding ALLOW rules in your devices' firewall configuration, it's best to provide specific [ports used by applicable protocols](../iot-hub/iot-hub-devguide-protocols.md#port-numbers).
-* The IP address prefixes of IoT DPS instances are subject to change. These changes are published periodically via service tags before taking effect. It is therefore important that you develop processes to regularly retrieve and use the latest service tags. This process can be automated via the [service tags discovery API](../virtual-network/service-tags-overview.md#service-tags-on-premises). The Service tags discovery API is still in preview and in some cases may not produce the full list of tags and IP addresses. Until discovery API is generally available, consider using the [service tags in downloadable JSON format](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files).
+* The IP address prefixes of IoT DPS instances are subject to change. These changes are published periodically via service tags before taking effect. It's therefore important that you develop processes to regularly retrieve and use the latest service tags. This process can be automated via the [service tags discovery API](../virtual-network/service-tags-overview.md#service-tags-on-premises). The Service tags discovery API is still in preview and in some cases may not produce the full list of tags and IP addresses. Until discovery API is generally available, consider using the [service tags in downloadable JSON format](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files).
* Use the *AzureIoTHub.[region name]* tag to identify IP prefixes used by DPS endpoints in a specific region. To account for datacenter disaster recovery or [regional failover](iot-dps-ha-dr.md), ensure that connectivity to the IP prefixes of your DPS instance's geo-pair region is also enabled.
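As a minimal sketch of automating that service-tag lookup with the Azure CLI (assuming you're signed in; the `westus2` region and the JMESPath filter are illustrative choices, not part of the original guidance):

```shell
# List AzureIoTHub service tag entries, including the regional AzureIoTHub.<region> tags, with their current IP address prefixes.
az network list-service-tags --location westus2 --query "values[?starts_with(name, 'AzureIoTHub')].{tag:name, prefixes:properties.addressPrefixes}" --output json
```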
The IP address prefixes for the public endpoints of an IoT Hub Device Provisioni
## Limitations and workarounds
-* The DPS IP filter feature has a limit of 100 rules. This limit and can be raised via requests through Azure Customer Support.
+* The DPS IP filter feature has a limit of 100 rules.
* Your configured [IP filtering rules](iot-dps-ip-filtering.md) are only applied on your DPS endpoints and not on the linked IoT Hub endpoints. IP filtering for linked IoT Hubs must be configured separately. For more information, see, [IoT Hub IP filtering rules](../iot-hub/iot-hub-ip-filtering.md).
-## Support for IPv6
+## Support for IPv6
IPv6 is currently not supported on IoT Hub or DPS.
iot-hub-device-update Deploy Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/deploy-update.md
Title: Deploy an update using Device Update for Azure IoT Hub | Microsoft Docs
-description: Deploy an update using Device Update for Azure IoT Hub.
+ Title: Deploy an update by using Device Update for Azure IoT Hub | Microsoft Docs
+description: Deploy an update by using Device Update for Azure IoT Hub.
Last updated 2/11/2021
-# Deploy an Update using Device Update for IoT Hub
+# Deploy an update by using Device Update for Azure IoT Hub
-Learn how to deploy an update to an IoT device using Device Update for IoT Hub.
+Learn how to deploy an update to an IoT device by using Device Update for Azure IoT Hub.
## Prerequisites
-* [Access to an IoT Hub with Device Update for IoT Hub enabled](create-device-update-account.md). It is recommended that you use a S1 (Standard) tier or above for your IoT Hub.
-* [At least one update has been successfully imported for the provisioned device.](import-update.md)
+* [Access to an IoT hub with Device Update for IoT Hub enabled](create-device-update-account.md). We recommend that you use an S1 (Standard) tier or above for your IoT Hub instance.
+* [At least one update has been successfully imported for the provisioned device](import-update.md).
* An IoT device (or simulator) provisioned for Device Update within IoT Hub.
-* [The device is part of at least one default group or user-created update group.](create-update-group.md)
+* [The device is part of at least one default group or user-created update group](create-update-group.md).
* Supported browsers: * [Microsoft Edge](https://www.microsoft.com/edge) * Google Chrome
-## Deploy an update
+## Deploy the update
-1. Go to [Azure portal](https://portal.azure.com)
+1. Go to the [Azure portal](https://portal.azure.com).
-2. Navigate to the Device Update blade of your IoT Hub.
+1. Go to the **Device Update** pane of your IoT Hub instance.
- :::image type="content" source="media/deploy-update/device-update-iot-hub.png" alt-text="IoT Hub" lightbox="media/deploy-update/device-update-iot-hub.png":::
+ :::image type="content" source="media/deploy-update/device-update-iot-hub.png" alt-text="Screenshot that shows the Get started with the Device Update for IoT Hub page." lightbox="media/deploy-update/device-update-iot-hub.png":::
-3. Select the Groups and Deployments tab at the top of the page. [Learn More](device-update-groups.md) about device groups.
+1. Select the **Groups and Deployments** tab at the top of the page. [Learn more](device-update-groups.md) about device groups.
- :::image type="content" source="media/deploy-update/updated-view.png" alt-text="Groups and Deployments tab" lightbox="media/deploy-update/updated-view.png":::
+ :::image type="content" source="media/deploy-update/updated-view.png" alt-text="Screenshot that shows the Groups and Deployments tab." lightbox="media/deploy-update/updated-view.png":::
-4. View the update compliance chart and groups list. You should see a new update available for your device group listed under Best Update (you may need to Refresh once). [Learn More about update compliance.](device-update-compliance.md)
+1. View the update compliance chart and groups list. You should see a new update available for your device group listed under **Best update**. You might need to refresh once. [Learn more about update compliance](device-update-compliance.md).
-5. Select the target group by clicking on the group name. You will be directed to the group details under Group basics.
+1. Select the target group by selecting the group name. You're directed to the group details under **Group basics**.
- :::image type="content" source="media/deploy-update/group-basics.png" alt-text="Group details" lightbox="media/deploy-update/group-basics.png":::
+ :::image type="content" source="media/deploy-update/group-basics.png" alt-text="Screenshot that shows the Group details." lightbox="media/deploy-update/group-basics.png":::
-6. To initiate the deployment, go to the Current deployment tab. Click the deploy link next to the desired update from the Available updates section. The best, available update for a given group will be denoted with a "Best" highlight.
+1. To start the deployment, go to the **Current deployment** tab. Select the deploy link next to the desired update from the **Available updates** section. The best available update for a given group is denoted with a **Best** highlight.
- :::image type="content" source="media/deploy-update/select-update.png" alt-text="Select update" lightbox="media/deploy-update/select-update.png":::
+ :::image type="content" source="media/deploy-update/select-update.png" alt-text="Screenshot that shows Best highlighted." lightbox="media/deploy-update/select-update.png":::
+
+1. Schedule your deployment to start immediately or in the future. Then select **Create**.
-7. Schedule your deployment to start immediately or in the future, then select Create.
> [!TIP]
- > By default the Start date/time is 24 hrs from your current time. Be sure to select a different date/time if you want the deployment to begin earlier.
- :::image type="content" source="media/deploy-update/create-deployment.png" alt-text="Create deployment" lightbox="media/deploy-update/create-deployment.png":::
+ > By default, the **Start** date and time is 24 hours from your current time. Be sure to select a different date and time if you want the deployment to begin earlier.
-8. The Status under Deployment details should turn to Active, and the deployed update should be marked with "(deploying)".
+ :::image type="content" source="media/deploy-update/create-deployment.png" alt-text="Screenshot that shows the Create deployment screen" lightbox="media/deploy-update/create-deployment.png":::
- :::image type="content" source="media/deploy-update/deployment-active.png" alt-text="Deployment active" lightbox="media/deploy-update/deployment-active.png":::
+1. Under **Deployment details**, **Status** turns to **Active**. The deployed update is marked with **(deploying)**.
-9. View the compliance chart. You should see the update is now in progress.
+ :::image type="content" source="media/deploy-update/deployment-active.png" alt-text="Screenshot that shows deployment as Active." lightbox="media/deploy-update/deployment-active.png":::
- :::image type="content" source="media/deploy-update/update-in-progress.png" alt-text="Update in progress" lightbox="media/deploy-update/update-in-progress.png":::
+1. View the compliance chart to see that the update is now in progress.
-10. After your device is successfully updated, you should see your compliance chart and deployment details update to reflect the same.
+ :::image type="content" source="media/deploy-update/update-in-progress.png" alt-text="Screenshot that shows Updates in progress." lightbox="media/deploy-update/update-in-progress.png":::
- :::image type="content" source="media/deploy-update/update-succeeded.png" alt-text="Update succeeded" lightbox="media/deploy-update/update-succeeded.png":::
+1. After your device is successfully updated, you see that your compliance chart and deployment details updated to reflect the same.
-## Monitor an update deployment
+ :::image type="content" source="media/deploy-update/update-succeeded.png" alt-text="Screenshot that shows the update succeeded." lightbox="media/deploy-update/update-succeeded.png":::
-1. Select the Deployment history tab at the top of the page.
+## Monitor the update deployment
- :::image type="content" source="media/deploy-update/deployments-history.png" alt-text="Deployment History" lightbox="media/deploy-update/deployments-history.png":::
+1. Select the **Deployment history** tab at the top of the page.
-2. Select the details link next to the deployment you created.
+ :::image type="content" source="media/deploy-update/deployments-history.png" alt-text="Screenshot that shows the Deployment history tab." lightbox="media/deploy-update/deployments-history.png":::
- :::image type="content" source="media/deploy-update/deployment-details.png" alt-text="Deployment details" lightbox="media/deploy-update/deployment-details.png":::
+1. Select **Details** next to the deployment you created.
-3. Select Refresh to view the latest status details.
+ :::image type="content" source="media/deploy-update/deployment-details.png" alt-text="Screenshot that shows deployment details." lightbox="media/deploy-update/deployment-details.png":::
+1. Select **Refresh** to view the latest status details.
## Retry an update deployment
-If your deployment fails for some reason, you can retry the deployment for failed devices.
+If your deployment fails for some reason, you can retry the deployment for failed devices.
-1. Go to the Current deployment tab from the group details.
+1. Go to the **Current deployment** tab on the **Group details** screen.
- :::image type="content" source="media/deploy-update/deployment-active.png" alt-text="Deployment active" lightbox="media/deploy-update/deployment-active.png":::
+ :::image type="content" source="media/deploy-update/deployment-active.png" alt-text="Screenshot that shows the deployment as Active." lightbox="media/deploy-update/deployment-active.png":::
-2. Click on "Retry failed devices" and acknowledge the confirmation notification.
+1. Select **Retry failed devices** and acknowledge the confirmation notification.
## Next steps
iot-hub-device-update Device Update Agent Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-agent-provisioning.md
Follow these instructions to provision the Device Update agent on [IoT Edge enab
1. Install the Device Update image update agent.
- We provide sample images in the [Assets here](https://github.com/Azure/iot-hub-device-update/releases) repository. The swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board. The .gz file is the update you would import through Device Update for IoT Hub. For an example, see [How to flash the image to your IoT Hub device](./device-update-raspberry-pi.md#flash-sd-card-with-image).
+ We provide sample images in the [Assets here](https://github.com/Azure/iot-hub-device-update/releases) repository. The swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board. The .gz file is the update you would import through Device Update for IoT Hub. For an example, see [How to flash the image to your IoT Hub device](./device-update-raspberry-pi.md#flash-an-sd-card-with-the-image).
1. Install the Device Update package update agent.
Follow these instructions to provision the Device Update agent on your IoT Linux
2. Configure the IoT Identity Service by following the instructions in [Configuring the Azure IoT Identity Service](https://azure.github.io/iot-identity-service/configuration.html).
-3. Finally install the Device Update agent. We provide sample images in [Assets here](https://github.com/Azure/iot-hub-device-update/releases), the swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board, and the .gz file is the update you would import through Device Update for IoT Hub. See example of [how to flash the image to your IoT Hub device](./device-update-raspberry-pi.md#flash-sd-card-with-image).
+3. Finally install the Device Update agent. We provide sample images in [Assets here](https://github.com/Azure/iot-hub-device-update/releases), the swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board, and the .gz file is the update you would import through Device Update for IoT Hub. See example of [how to flash the image to your IoT Hub device](./device-update-raspberry-pi.md#flash-an-sd-card-with-the-image).
4. After you've installed the device update agent, you will need to edit the configuration file for Device Update by running the command below.
Follow these instructions to provision the Device Update agent on your IoT Linux
The Device Update agent can also be configured without the IoT Identity service for testing or on constrained devices. Follow the below steps to provision the Device Update agent using a connection string (from the Module or Device).
-1. We provide sample images in the [Assets here](https://github.com/Azure/iot-hub-device-update/releases) repository. The swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board. The .gz file is the update you would import through Device Update for IoT Hub. For an example, see [How to flash the image to your IoT Hub device](./device-update-raspberry-pi.md#flash-sd-card-with-image).
+1. We provide sample images in the [Assets here](https://github.com/Azure/iot-hub-device-update/releases) repository. The swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board. The .gz file is the update you would import through Device Update for IoT Hub. For an example, see [How to flash the image to your IoT Hub device](./device-update-raspberry-pi.md#flash-an-sd-card-with-the-image).
1. Log onto the machine or IoT Edge device/IoT device.
iot-hub-device-update Device Update Azure Real Time Operating System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-azure-real-time-operating-system.md
Title: Device Update for Azure Real-time-operating-system | Microsoft Docs
-description: Get started with Device Update for Azure Real-time-operating-system
+ Title: Device Update for Azure RTOS | Microsoft Docs
+description: Get started with Device Update for Azure RTOS.
Last updated 3/18/2021
-# Device Update for Azure IoT Hub tutorial using Azure Real Time Operating System (RTOS)
+# Tutorial: Device Update for Azure IoT Hub using Azure RTOS
-This tutorial walks through how to create the Device Update for IoT Hub Agent in Azure RTOS NetX Duo. It also provides simple APIs for developers to integrate the Device Update capability in their application. Explore [samples](https://github.com/azure-rtos/samples/tree/PublicPreview/ADU) of key semiconductors evaluation boards that include the get started guides to learn configure, build, and deploy the over-the-air (OTA) updates to the devices.
+This tutorial shows you how to create the Device Update for Azure IoT Hub agent in Azure RTOS NetX Duo. It also provides simple APIs for developers to integrate the Device Update capability in their application. Explore [samples](https://github.com/azure-rtos/samples/tree/PublicPreview/ADU) of key semiconductors evaluation boards that include the get-started guides to learn how to configure, build, and deploy over-the-air updates to the devices.
-In this tutorial you learn how to:
+In this tutorial, you'll learn how to:
> [!div class="checklist"]
-> * Get started
-> * Tag your device
-> * Create a device group
-> * Deploy an image update
-> * Monitor the update deployment
+> * Get started.
+> * Tag your device.
+> * Create a device group.
+> * Deploy an image update.
+> * Monitor the update deployment.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites
-* Access to an IoT Hub. It is recommended that you use a S1 (Standard) tier or higher.
-* A Device Update instance and account linked to your IoT Hub. Follow the guide to [create and link](./create-device-update-account.md) a device update account if you have not done so previously.
+
+* Access to an IoT Hub instance. We recommend that you use an S1 (Standard) tier or higher.
+* A Device Update instance and account linked to your IoT hub. Follow the guide to [create and link](./create-device-update-account.md) a device update account if you haven't done so previously.
## Get started
-Each board-specific sample Azure RTOS project contains code and documentation on how to use Device Update for IoT Hub on it.
+Each board-specific sample Azure real-time operating system (RTOS) project contains code and documentation on how to use Device Update for IoT Hub on it. You will:
+ 1. Download the board-specific sample files from [Azure RTOS and Device Update samples](https://github.com/azure-rtos/samples/tree/PublicPreview/ADU).
-2. Find the docs folder from the downloaded sample.
-3. From the docs, follow the steps for how to prepare Azure Resources, Account, and register IoT devices to it.
-5. Next follow the docs to build a new firmware image and import manifest for your board.
-6. Next publish firmware image and manifest to Device Update for IoT Hub.
-7. Finally download and run the project on your device.
+1. Find the docs folder from the downloaded sample.
+1. From the docs, follow the steps for how to prepare Azure resources and an account and register IoT devices to it.
+1. Follow the docs to build a new firmware image and import manifest for your board.
+1. Publish the firmware image and manifest to Device Update for IoT Hub.
+1. Download and run the project on your device.
-Learn more about [Azure RTOS](/azure/rtos/).
+Learn more about [Azure RTOS](/azure/rtos/).
## Tag your device 1. Keep the device application running from the previous step.
-2. Log into [Azure portal](https://portal.azure.com) and navigate to the IoT Hub.
-3. From 'IoT Devices' on the left navigation pane, find your IoT device and navigate to the Device Twin.
-4. In the Device Twin, delete any existing Device Update tag value by setting them to null.
-5. Add a new Device Update tag value to the root JSON object as shown below.
+1. Sign in to the [Azure portal](https://portal.azure.com) and go to the IoT hub.
+1. On the left pane, under **IoT Devices**, find your IoT device and go to the device twin.
+1. In the device twin, delete any existing Device Update tag values by setting them to null.
+1. Add a new Device Update tag value to the root JSON object, as shown:
+
+ ```JSON
+ "tags": {
+ "ADUGroup": "<CustomTagValue>"
+ }
+ ```
-```JSON
- "tags": {
- "ADUGroup": "<CustomTagValue>"
- }
-```
+## Create an update group
+1. Go to the **Groups and Deployments** tab at the top of the page.
-## Create update group
+ :::image type="content" source="media/create-update-group/ungrouped-devices.png" alt-text="Screenshot that shows ungrouped devices." lightbox="media/create-update-group/ungrouped-devices.png":::
-1. Go to the Groups and Deployments tab at the top of the page.
- :::image type="content" source="media/create-update-group/ungrouped-devices.png" alt-text="Screenshot of ungrouped devices." lightbox="media/create-update-group/ungrouped-devices.png":::
+1. Select **Add group** to create a new group.
-2. Select the "Add group" button to create a new group.
- :::image type="content" source="media/create-update-group/add-group.png" alt-text="Screenshot of device group addition." lightbox="media/create-update-group/add-group.png":::
+ :::image type="content" source="media/create-update-group/add-group.png" alt-text="Screenshot that shows a device group addition." lightbox="media/create-update-group/add-group.png":::
-3. Select an IoT Hub tag and Device Class from the list and then select Create group.
- :::image type="content" source="media/create-update-group/select-tag.png" alt-text="Screenshot of tag selection." lightbox="media/create-update-group/select-tag.png":::
+1. Select an **IoT Hub** tag and **Device class** from the list. Then select **Create group**.
-4. Once the group is created, you will see that the update compliance chart and groups list are updated. Update compliance chart shows the count of devices in various states of compliance: On latest update, New updates available, and Updates in Progress. [Learn about update compliance.](device-update-compliance.md)
- :::image type="content" source="media/create-update-group/updated-view.png" alt-text="Screenshot of update compliance view." lightbox="media/create-update-group/updated-view.png":::
+ :::image type="content" source="media/create-update-group/select-tag.png" alt-text="Screenshot that shows tag selection." lightbox="media/create-update-group/select-tag.png":::
-5. You should see your newly created group and any available updates for the devices in the new group. If there are devices that don't meet the device class requirements of the group, they will show up in a corresponding invalid group. You can deploy the best available update to the new user-defined group from this view by clicking on the "Deploy" button next to the group.
+1. After the group is created, you see that the update compliance chart and groups list are updated. The update compliance chart shows the count of devices in various states of compliance: **On latest update**, **New updates available**, and **Updates in progress**. [Learn about update compliance](device-update-compliance.md).
-[Learn more](create-update-group.md) about adding tags and creating update groups
+ :::image type="content" source="media/create-update-group/updated-view.png" alt-text="Screenshot that shows the update compliance view." lightbox="media/create-update-group/updated-view.png":::
+1. You should see your newly created group and any available updates for the devices in the new group. If there are devices that don't meet the device class requirements of the group, they show up in a corresponding invalid group. To deploy the best available update to the new user-defined group from this view, select **Deploy** next to the group.
+
+[Learn more](create-update-group.md) about how to add tags and create update groups.
## Deploy new firmware
-1. Once the group is created, you should see a new update available for your device group, with a link to the update under Best Update (you may need to Refresh once). [Learn More about update compliance.](device-update-compliance.md)
+1. After the group is created, you should see a new update available for your device group with a link to the update under **Best update**. You might need to refresh once. [Learn more about update compliance](device-update-compliance.md).
+
+1. Select the target group by selecting the group name. You're directed to the group details under **Group basics**.
-2. Select the target group by clicking on the group name. You will be directed to the group details under Group basics.
+ :::image type="content" source="media/deploy-update/group-basics.png" alt-text="Screenshot that shows Group details." lightbox="media/deploy-update/group-basics.png":::
- :::image type="content" source="media/deploy-update/group-basics.png" alt-text="Group details" lightbox="media/deploy-update/group-basics.png":::
+1. To start the deployment, go to the **Current deployment** tab. Select the deploy link next to the desired update from the **Available updates** section. The best available update for a given group is denoted with a **Best** highlight.
-3. To initiate the deployment, go to the Current deployment tab. Click the deploy link next to the desired update from the Available updates section. The best, available update for a given group will be denoted with a "Best" highlight.
+ :::image type="content" source="media/deploy-update/select-update.png" alt-text="Screenshot that shows selecting an update." lightbox="media/deploy-update/select-update.png":::
- :::image type="content" source="media/deploy-update/select-update.png" alt-text="Select update" lightbox="media/deploy-update/select-update.png":::
+1. Schedule your deployment to start immediately or in the future. Then select **Create**.
-4. Schedule your deployment to start immediately or in the future, then select Create.
> [!TIP]
- > By default the Start date/time is 24 hrs from your current time. Be sure to select a different date/time if you want the deployment to begin earlier.
- :::image type="content" source="media/deploy-update/create-deployment.png" alt-text="Create deployment" lightbox="media/deploy-update/create-deployment.png":::
+ > By default, the **Start** date and time is 24 hours from your current time. Be sure to select a different date and time if you want the deployment to begin earlier.
-5. The Status under Deployment details should turn to Active, and the deployed update should be marked with "(deploying)".
+ :::image type="content" source="media/deploy-update/create-deployment.png" alt-text="Screenshot that shows the Create deployment screen." lightbox="media/deploy-update/create-deployment.png":::
- :::image type="content" source="media/deploy-update/deployment-active.png" alt-text="Deployment active" lightbox="media/deploy-update/deployment-active.png":::
+1. Under **Deployment details**, **Status** turns to **Active**. The deployed update is marked with **(deploying)**.
-6. View the compliance chart. You should see the update is now in progress.
+ :::image type="content" source="media/deploy-update/deployment-active.png" alt-text="Screenshot that shows deployment as Active." lightbox="media/deploy-update/deployment-active.png":::
-7. After your device is successfully updated, you should see your compliance chart and deployment details update to reflect the same.
+1. View the compliance chart to see that the update is now in progress.
- :::image type="content" source="media/deploy-update/update-succeeded.png" alt-text="Update succeeded" lightbox="media/deploy-update/update-succeeded.png":::
+1. After your device is successfully updated, you see that your compliance chart and deployment details updated to reflect the same.
-## Monitor an update deployment
+ :::image type="content" source="media/deploy-update/update-succeeded.png" alt-text="Screenshot that shows the update succeeded." lightbox="media/deploy-update/update-succeeded.png":::
-1. Select the Deployment history tab at the top of the page.
+## Monitor the update deployment
- :::image type="content" source="media/deploy-update/deployments-history.png" alt-text="Deployment History" lightbox="media/deploy-update/deployments-history.png":::
+1. Select the **Deployment history** tab at the top of the page.
-2. Select the details link next to the deployment you created.
+ :::image type="content" source="media/deploy-update/deployments-history.png" alt-text="Screenshot that shows the deployment history." lightbox="media/deploy-update/deployments-history.png":::
- :::image type="content" source="media/deploy-update/deployment-details.png" alt-text="Deployment details" lightbox="media/deploy-update/deployment-details.png":::
+1. Select **Details** next to the deployment you created.
-3. Select Refresh to view the latest status details.
+ :::image type="content" source="media/deploy-update/deployment-details.png" alt-text="Screenshot that shows deployment details." lightbox="media/deploy-update/deployment-details.png":::
+1. Select **Refresh** to view the latest status details.
-You have now completed a successful end-to-end image update using Device Update for IoT Hub on an Azure RTOS embedded device.
+You've now completed a successful end-to-end image update by using Device Update for IoT Hub on an Azure RTOS embedded device.
-## Cleanup resources
+## Clean up resources
-When no longer neededn clean up your device update account, instance, IoT Hub, and IoT device.
+When no longer needed, clean up your device update account, instance, IoT hub, and IoT device.
## Next steps
-To learn more about Azure RTOS and how it works with Azure IoT, view https://azure.com/rtos.
+To learn more about Azure RTOS and how it works with IoT Hub, see the [Azure RTOS webpage](https://azure.com/rtos).
iot-hub-device-update Device Update Raspberry Pi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-raspberry-pi.md
Title: Device Update for Azure IoT Hub tutorial using the Raspberry Pi 3 B+ Reference Yocto Image | Microsoft Docs
-description: Get started with Device Update for Azure IoT Hub using the Raspberry Pi 3 B+ Reference Yocto Image.
+ Title: Device Update for IoT Hub tutorial using the Raspberry Pi 3 B+ reference Yocto image | Microsoft Docs
+description: Get started with Device Update for Azure IoT Hub by using the Raspberry Pi 3 B+ reference Yocto image.
Last updated 1/26/2022
-# Device Update for Azure IoT Hub tutorial using the Raspberry Pi 3 B+ Reference Image
+# Tutorial: Device Update for Azure IoT Hub using the Raspberry Pi 3 B+ reference image
-Device Update for IoT Hub supports image-based, package-based, and script-based updates.
+Device Update for Azure IoT Hub supports image-based, package-based, and script-based updates.
-Image updates provide a higher level of confidence in the end-state of the device. It is typically easier to replicate the results of an image-update between a pre-production environment and a production environment, since it doesnΓÇÖt pose the same challenges as packages and their dependencies. Due to their atomic nature, one can also adopt an A/B failover model easily.
+Image updates provide a higher level of confidence in the end state of the device. It's typically easier to replicate the results of an image update between a preproduction environment and a production environment because it doesn't pose the same challenges as packages and their dependencies. Because of their atomic nature, you can also adopt an A/B failover model easily.
-This tutorial walks you through the steps to complete an end-to-end image-based update using Device Update for IoT Hub on a Raspberry Pi 3 B+ board.
+This tutorial walks you through the steps to complete an end-to-end image-based update by using Device Update for IoT Hub on a Raspberry Pi 3 B+ board.
-In this tutorial you will learn how to:
+In this tutorial, you'll learn how to:
> [!div class="checklist"]
-> * Download image
-> * Add a tag to your IoT device
-> * Import an update
-> * Create a device group
-> * Deploy an image update
-> * Monitor the update deployment
-Note: Image updates in this tutorial have been validated on the Raspberry Pi B3 board.
+> * Download an image.
+> * Add a tag to your IoT device.
+> * Import an update.
+> * Create a device group.
+> * Deploy an image update.
+> * Monitor the update deployment.
+
+> [!NOTE]
+> Image updates in this tutorial were validated on the Raspberry Pi B3 board.
## Prerequisites
-* If you haven't already done so, create a [Device Update account and instance](create-device-update-account.md), including configuring an IoT Hub.
-## Download image
+If you haven't already done so, create a [Device Update account and instance](create-device-update-account.md) and configure an IoT hub.
+
+## Download the image
-We provide sample images in "Assets" on the [Device Update GitHub releases page](https://github.com/Azure/iot-hub-device-update/releases). The .gz file is the base image that you can flash onto a Raspberry Pi B3+ board, and the swUpdate file is the update you would import through Device Update for IoT Hub.
+We provide sample images in **Assets** on the [Device Update GitHub releases page](https://github.com/Azure/iot-hub-device-update/releases). The .gz file is the base image that you can flash on to a Raspberry Pi 3 B+ board. The swUpdate file is the update you would import through Device Update for IoT Hub.
-## Flash SD card with image
+## Flash an SD card with the image
-Using your favorite OS flashing tool, install the Device Update base image
-(adu-base-image) on the SD card that will be used in the Raspberry Pi 3 B+
-device.
+Use your favorite OS flashing tool to install the Device Update base image (adu-base-image) on the SD card that will be used in the Raspberry Pi 3 B+ device.
-### Using bmaptool to flash SD card
+### Use bmaptool to flash the SD card
-1. If you have not already, install the `bmaptool` utility.
+1. Install the `bmaptool` utility, if you haven't done so already.
```shell sudo apt-get install bmap-tools ```
-2. Locate the path for the SD card in `/dev`. The path should look something
- like `/dev/sd*` or `/dev/mmcblk*`. You can use the `dmesg` utility to help
- locate the correct path.
-
-3. You will need to unmount all mounted partitions before flashing.
+1. Locate the path for the SD card in `/dev`. The path should look something like `/dev/sd*` or `/dev/mmcblk*`. You can use the `dmesg` utility to help locate the correct path.
+1. Unmount all mounted partitions before flashing.
```shell sudo umount /dev/<device> ```
-4. Make sure you have write permissions to the device.
+1. Make sure you have write permissions to the device.
```shell sudo chmod a+rw /dev/<device> ```
-5. Optional. For faster flashing, download the bimap file along with the image
- file and place them in the same directory.
-
-6. Flash the SD card.
+1. Optional: For faster flashing, download the bimap file and the image file and put them in the same directory.
+1. Flash the SD card.
```shell sudo bmaptool copy <path to image> /dev/<device> ```
-
+ Device Update for Azure IoT Hub software is subject to the following license terms:+ * [Device update for IoT Hub license](https://github.com/Azure/iot-hub-device-update/blob/main/LICENSE) * [Delivery optimization client license](https://github.com/microsoft/do-client/blob/main/LICENSE)
-
-Read the license terms prior to using the agent. Your installation and use constitutes your acceptance of these terms. If you do not agree with the license terms, do not use the Device Update for IoT Hub agent.
-
-## Create device or module in IoT Hub and get connection string
-Now, the device needs to be added to the Azure IoT Hub. From within Azure
-IoT Hub, a connection string will be generated for the device.
+Read the license terms prior to using the agent. Your installation and use constitutes your acceptance of these terms. If you don't agree with the license terms, don't use the Device Update for IoT Hub agent.
-1. From the Azure portal, launch the Azure IoT Hub.
+## Create a device or module in IoT Hub and get a connection string
-3. Create a new device.
+Now, add the device to IoT Hub. From within IoT Hub, a connection string is generated for the device.
-5. On the left-hand side of the page, navigate to 'IoT Devices' > Select "New".
+1. From the Azure portal, start IoT Hub.
+1. Create a new device.
+1. On the left pane, select **IoT Devices**. Then select **New**.
+1. Under **Device ID**, enter a name for the device. Ensure that the **Autogenerate keys** checkbox is selected.
+1. Select **Save**. On the **Devices** page, the device you created should be in the list.
+1. Get the device connection string by using one of two options:
-7. Provide a name for the device under 'Device ID'--Ensure that "Autogenerate keys" is checkbox is selected.
+ - Option 1: Use the Device Update agent with a module identity: On the same **Devices** page, select **Add Module Identity** at the top. Create a new Device Update module with the name **IoTHubDeviceUpdate**. Choose other options as they apply to your use case and then select **Save**. Select the newly created module. In the module view, select the **Copy** icon next to **Primary Connection String**.
+ - Option 2: Use the Device Update agent with the device identity: In the device view, select the **Copy** icon next to **Primary Connection String**.
-9. Select 'Save'. Now you will be returned to the 'Devices' page and the device you created should be in the list.
-
-13. Get the device connection string:
- - Option 1 Using Device Update agent with a module identity: From the same 'Devices' page click on '+ Add Module Identity' on the top. Create a new Device Update module with the name 'IoTHubDeviceUpdate', choose other options as it applies to your use case and then click 'Save'. Click on the newly created 'Module' and in the module view, select the 'Copy' icon next to 'Primary Connection String'.
+1. Paste the copied characters somewhere for later use in the following steps:
- - Option 2 Using Device Update agent with the device identity: In the device view, select the 'Copy' icon next to 'Primary Connection
- String'.
-
-8. Paste the copied characters somewhere for later use in the steps below.
**This copied string is your device connection string**.
-## Prepare On-Device Configurations for Device Update for IotHub
+## Prepare on-device configurations for Device Update for IoT Hub
-There are two configuration files that are required to be on the device for Device Update for IotHub to properly be configured. The first is the `du-config.json` file which must exist at `/adu/du-config.json`. The second is the `du-diagnostics-config.json` which must exist at `/adu/du-diagnostics-config.json`.
+Two configuration files must be on the device so that Device Update for IoT Hub configures properly. The first file is the `du-config.json` file, which must exist at `/adu/du-config.json`. The second file is the `du-diagnostics-config.json` file, which must exist at `/adu/du-diagnostics-config.json`.
Here are two examples for the `du-config.json` and the `du-diagnostics-config.json` files:
-### Example du-config.json
+### Example du-config.json
+ ```JSON { "schemaVersion": "1.0",
Here are two examples for the `du-config.json` and the `du-diagnostics-config.js
} ```
-### Example du-diagnostics-config.json
+### Example du-diagnostics-config.json
+ ```JSON { "logComponents":[
Here are two examples for the `du-config.json` and the `du-diagnostics-config.js
} ```
-## Instructions for Configuring the Device Update Agent on the RaspberryPi
-1. Make sure that the Raspberry Pi3 is connected to the network.
+## Configure the Device Update agent on Raspberry Pi
-2. Follow the instruction below to add the configuration details:
+1. Make sure that Raspberry Pi 3 is connected to the network.
+1. Follow these instructions to add the configuration details:
- 1. First ssh into the machine using the following command in the PowerShell window
+ 1. First, SSH in to the machine by using the following command in the PowerShell window:
```shell ssh raspberrypi3 -l root ```
- 1. Once logged into the device you can create/open the du-config.json file for editing using
+
+ 1. Create or open the `du-config.jso` file for editing by using:
```bash nano /adu/du-config.json ```
- 2. After running the command you should see an open editor with the file. If you have never created the file it will be empty. Now copy the above example du-config.json contents and substitute the configurations required for your device. You will also need to replace the example connection string with the one for the device you created in the steps above.
+
+ 1. After you run the command, you should see an open editor with the file. If you've never created the file, it will be empty. Now copy the preceding example du-config.json contents, and substitute the configurations required for your device. Then replace the example connection string with the one for the device you created in the preceding steps.
- 4. Once you have completed your changes press `Ctrl+X` to exit the editor and then enter `y` to confirm you want to save the changes.
+ 1. After you finish your changes, select **Ctrl+X** to exit the editor. Then enter **y** to save the changes.
- 6. Now we need to create the du-diagnostics-config.json file using similar commands. Start by creating/openning the du-diagnostics-config.json file for editing using:
+ 1. Now you need to create the `du-diagnostics-config.json` file by using similar commands. Start by creating or opening the `du-diagnostics-config.json` file for editing by using:
+
```bash nano /adu/du-diagnostics-config.json ```
- 5. Copy the above example du-diagnostics-config.json contents and substitute any configurations which differ from the default build. Please note the example du-diagnostics-config.json file represents the default log locations for Device Update for IotHub. You will only need to change these if your implementation differs.
-
- 7. Once you have completed your changes press `Ctrl+X` to exit the editor and then enter `y` to confirm you want to save the changes.
-
- 9. Now use the following command to show the files located in the `/adu/` directory. You should see both of your configuration files.du-diagnostics-config.json file for editing using:
+
+ 1. Copy the preceding example du-diagnostics-config.json contents, and substitute any configurations that differ from the default build. The example du-diagnostics-config.json file represents the default log locations for Device Update for IoT Hub. You only need to change these if your implementation differs.
+ 1. After you finish your changes, select **Ctrl+X** to exit the editor. Then enter **y** to save the changes.
+ 1. Use the following command to show the files located in the `/adu/` directory. You should see both of your configuration files.du-diagnostics-config.json files for editing by using:
```bash ls -la /adu/ ```
-
-3. You will need to restart the Device Update system daemon to make sure that the configurations have been applied. You can do so using the following command within the terminal logged into the raspberrypi.
-```markdown
- systemctl start adu-agent
-```
+1. Restart the Device Update system daemon to make sure that the configurations were applied. Use the following command within the terminal logged in to the `raspberrypi`:
+
+ ```markdown
+ systemctl start adu-agent
+ ```
-4. You now need to check that the agent is live using the following command:
+1. Check that the agent is live by using the following command:
-```markdown
- systemctl status adu-agent
-```
- You should see the status come back as alive and green.
+ ```markdown
+ systemctl status adu-agent
+ ```
-## Connect the device in Device Update IoT Hub
+ You should see the status come back as alive and green.
-1. On the left-hand side of the page, select 'IoT Devices'.
-2. Select the link with your device name.
-3. At the top of the page, select 'Device Twin' if directly connecting to Device Update using the IoT device identity. Otherwise select the module you created above and click on its ΓÇÿModule TwinΓÇÖ.
-4. Under the 'reported' section of the Device Twin properties, look for the Linux kernel version.
+## Connect the device in Device Update for IoT Hub
+
+1. On the left pane, select **IoT Devices**.
+1. Select the link with your device name.
+1. At the top of the page, select **Device Twin** if you're connecting directly to Device Update by using the IoT device identity. Otherwise, select the module you created and select its module twin.
+1. Under the **reported** section of the **Device Twin** properties, look for the Linux kernel version.
For a new device, which hasn't received an update from Device Update, the
-[DeviceManagement:DeviceInformation:1.swVersion](device-update-plug-and-play.md) value will represent
-the firmware version running on the device. Once an update has been applied to a device, Device Update will
-use [AzureDeviceUpdateCore:ClientMetadata:4.installedUpdateId](device-update-plug-and-play.md) property
+[DeviceManagement:DeviceInformation:1.swVersion](device-update-plug-and-play.md) value represents
+the firmware version running on the device. After an update has been applied to a device, Device Update
+uses the [AzureDeviceUpdateCore:ClientMetadata:4.installedUpdateId](device-update-plug-and-play.md) property
value to represent the firmware version running on the device.
-5. The base and update image files have a version number in the filename.
+1. The base and update image files have a version number in the file name.
```markdown adu-<image type>-image-<machine>-<version number>.<extension> ```
-Use that version number in the Import Update step below.
+Use that version number in the later "Import the update" section.
## Add a tag to your device
-1. Log into [Azure portal](https://portal.azure.com) and navigate to the IoT Hub.
-
-2. From 'IoT Devices' or 'IoT Edge' on the left navigation pane find your IoT device and navigate to the Device Twin or Module Twin.
-
-3. In the Module Twin of the Device Update agent module, delete any existing Device Update tag value by setting them to null. If you are using Device identity with Device Update agent make these changes on the Device Twin.
-
-4. Add a new Device Update tag value as shown below.
-
-```JSON
- "tags": {
- "ADUGroup": "<CustomTagValue>"
- }
-```
-
-## Import update
-
-1. Download the Download the sample tutorial manifest (Tutorial Import Manifest_Pi.json) and sample update (adu-update-image-raspberrypi3-0.6.5073.1.swu) from [Release Assets](https://github.com/Azure/iot-hub-device-update/releases) for the latest agent.
-
-2. Log in to the [Azure portal](https://portal.azure.com/) and navigate to your IoT Hub with Device Update. Then, select the Updates option under Automatic Device Management from the left-hand navigation bar.
+1. Sign in to the [Azure portal](https://portal.azure.com) and go to the IoT hub.
+1. On the left pane, under **IoT Devices** or **IoT Edge**, find your IoT device and go to the device twin or module twin.
+1. In the module twin of the Device Update agent module, delete any existing Device Update tag values by setting them to null. If you're using the device identity with the Device Update agent, make these changes on the device twin.
+1. Add a new Device Update tag value, as shown:
+
+ ```JSON
+ "tags": {
+ "ADUGroup": "<CustomTagValue>"
+ }
+ ```
+
+## Import the update
+
+1. Download the sample tutorial manifest (Tutorial Import Manifest_Pi.json) and sample update (adu-update-image-raspberrypi3-0.6.5073.1.swu) from [Release Assets](https://github.com/Azure/iot-hub-device-update/releases) for the latest agent.
+1. Sign in to the [Azure portal](https://portal.azure.com/) and go to your IoT hub with Device Update. On the left pane, under **Automatic Device Management**, select **Updates**.
+1. Select the **Updates** tab.
+1. Select **+ Import New Update**.
+1. Select **+ Select from storage container**. Select an existing account or create a new account by using **+ Storage account**. Then select an existing container or create a new container by using **+ Container**. This container will be used to stage your update files for importing.
-3. Select the Updates tab.
-
-4. Select "+ Import New Update".
-
-5. Select "+ Select from storage container". Select an existing account or create a new account using "+ Storage account". Then select an existing container or create a new container using "+ Container". This container will be used to stage your update files for importing.
> [!NOTE]
- > We recommend using a new container each time you import an update to avoid accidentally importing files from previous updates. If you don't use a new container, be sure to delete any files from the existing container before completing this step.
+ > We recommend that you use a new container each time you import an update to avoid accidentally importing files from previous updates. If you don't use a new container, be sure to delete any files from the existing container before you finish this step.
- :::image type="content" source="media/import-update/storage-account-ppr.png" alt-text="Storage Account" lightbox="media/import-update/storage-account-ppr.png":::
+ :::image type="content" source="media/import-update/storage-account-ppr.png" alt-text="Screenshot that shows Storage accounts and Containers." lightbox="media/import-update/storage-account-ppr.png":::
+
+1. In your container, select **Upload** and go to the files you downloaded in step 1. After you've selected all your update files, select **Upload**. Then select the **Select** button to return to the **Import update** page.
+
+ :::image type="content" source="media/import-update/import-select-ppr.png" alt-text="Screenshot that shows selecting uploaded files." lightbox="media/import-update/import-select-ppr.png":::
-6. In your container, select "Upload" and navigate to files downloaded in **Step 1**. When you've selected all your update files, select "Upload" Then click the "Select" button to return to the "Import update" page.
+ *This screenshot shows the import step. File names might not match the ones used in the example.*
- :::image type="content" source="media/import-update/import-select-ppr.png" alt-text="Select Uploaded Files" lightbox="media/import-update/import-select-ppr.png":::
- _This screenshot shows the import step and file names may not match the ones used in the example_
+1. On the **Import update** page, review the files to be imported. Then select **Import update** to start the import process.
-8. On the Import update page, review the files to be imported. Then select "Import update" to start the import process.
+ :::image type="content" source="media/import-update/import-start-2-ppr.png" alt-text="Screenshot that shows Import update." lightbox="media/import-update/import-start-2-ppr.png":::
- :::image type="content" source="media/import-update/import-start-2-ppr.png" alt-text="Import Start" lightbox="media/import-update/import-start-2-ppr.png":::
+1. The import process begins, and the screen switches to the **Import history** section. When the **Status** column indicates the import has succeeded, select the **Available updates** header. You should see your imported update in the list now.
-9. The import process begins, and the screen switches to the "Import History" section. When the `Status` column indicates the import has succeeded, select the "Available Updates" header. You should see your imported update in the list now.
+ :::image type="content" source="media/import-update/update-ready-ppr.png" alt-text="Screenshot that shows job status." lightbox="media/import-update/update-ready-ppr.png":::
- :::image type="content" source="media/import-update/update-ready-ppr.png" alt-text="Job Status" lightbox="media/import-update/update-ready-ppr.png":::
-
-[Learn more](import-update.md) about importing updates.
+[Learn more](import-update.md) about how to import updates.
-## Create update group
+## Create an update group
-1. Go to the Groups and Deployments tab at the top of the page.
- :::image type="content" source="media/create-update-group/ungrouped-devices.png" alt-text="Screenshot of ungrouped devices." lightbox="media/create-update-group/ungrouped-devices.png":::
+1. Go to the **Groups and Deployments** tab at the top of the page.
-2. Select the "Add group" button to create a new group.
- :::image type="content" source="media/create-update-group/add-group.png" alt-text="Screenshot of device group addition." lightbox="media/create-update-group/add-group.png":::
+ :::image type="content" source="media/create-update-group/ungrouped-devices.png" alt-text="Screenshot that shows ungrouped devices." lightbox="media/create-update-group/ungrouped-devices.png":::
-3. Select an IoT Hub tag and Device Class from the list and then select Create group.
- :::image type="content" source="media/create-update-group/select-tag.png" alt-text="Screenshot of tag selection." lightbox="media/create-update-group/select-tag.png":::
+1. Select **Add group** to create a new group.
-4. Once the group is created, you will see that the update compliance chart and groups list are updated. Update compliance chart shows the count of devices in various states of compliance: On latest update, New updates available, and Updates in Progress. [Learn about update compliance.](device-update-compliance.md)
- :::image type="content" source="media/create-update-group/updated-view.png" alt-text="Screenshot of update compliance view." lightbox="media/create-update-group/updated-view.png":::
+ :::image type="content" source="media/create-update-group/add-group.png" alt-text="Screenshot that shows device group addition." lightbox="media/create-update-group/add-group.png":::
-5. You should see your newly created group and any available updates for the devices in the new group. If there are devices that don't meet the device class requirements of the group, they will show up in a corresponding invalid group. You can deploy the best available update to the new user-defined group from this view by clicking on the "Deploy" button next to the group.
+1. Select an **IoT Hub** tag and **Device class** from the list. Then select **Create group**.
-[Learn more](create-update-group.md) about adding tags and creating update groups
+ :::image type="content" source="media/create-update-group/select-tag.png" alt-text="Screenshot that shows tag selection." lightbox="media/create-update-group/select-tag.png":::
+1. After the group is created, the update compliance chart and groups list are updated. The update compliance chart shows the count of devices in various states of compliance: **On latest update**, **New updates available**, and **Updates in progress**. [Learn about update compliance](device-update-compliance.md).
-## Deploy update
+ :::image type="content" source="media/create-update-group/updated-view.png" alt-text="Screenshot that shows the update compliance view." lightbox="media/create-update-group/updated-view.png":::
-1. Once the group is created, you should see a new update available for your device group, with a link to the update under Best Update (you may need to Refresh once). [Learn More about update compliance.](device-update-compliance.md)
+1. You should see your newly created group and any available updates for the devices in the new group. If there are devices that don't meet the device class requirements of the group, they show up in a corresponding invalid group. To deploy the best available update to the new user-defined group from this view, select **Deploy** next to the group.
-2. Select the target group by clicking on the group name. You will be directed to the group details under Group basics.
+[Learn more](create-update-group.md) about how to add tags and create update groups.
- :::image type="content" source="media/deploy-update/group-basics.png" alt-text="Group details" lightbox="media/deploy-update/group-basics.png":::
+## Deploy the update
-3. To initiate the deployment, go to the Current deployment tab. Click the deploy link next to the desired update from the Available updates section. The best, available update for a given group will be denoted with a "Best" highlight.
+1. After the group is created, you should see a new update available for your device group. A link to the update should be under **Best update**. You might need to refresh once. [Learn more about update compliance](device-update-compliance.md).
+1. Select the target group by selecting the group name. You're directed to the group details under **Group basics**.
- :::image type="content" source="media/deploy-update/select-update.png" alt-text="Select update" lightbox="media/deploy-update/select-update.png":::
+ :::image type="content" source="media/deploy-update/group-basics.png" alt-text="Screenshot that shows Group details." lightbox="media/deploy-update/group-basics.png":::
-4. Schedule your deployment to start immediately or in the future, then select Create.
+1. To start the deployment, go to the **Current deployment** tab. Select the **deploy** link next to the desired update from the **Available updates** section. The best available update for a given group is denoted with a **Best** highlight.
- :::image type="content" source="media/deploy-update/create-deployment.png" alt-text="Create deployment" lightbox="media/deploy-update/create-deployment.png":::
+ :::image type="content" source="media/deploy-update/select-update.png" alt-text="Screenshot that shows selecting an update." lightbox="media/deploy-update/select-update.png":::
-5. The Status under Deployment details should turn to Active, and the deployed update should be marked with "(deploying)".
+1. Schedule your deployment to start immediately or in the future. Then select **Create**.
- :::image type="content" source="media/deploy-update/deployment-active.png" alt-text="Deployment active" lightbox="media/deploy-update/deployment-active.png":::
+ :::image type="content" source="media/deploy-update/create-deployment.png" alt-text="Screenshot that shows the Create button." lightbox="media/deploy-update/create-deployment.png":::
-6. View the compliance chart. You should see the update is now in progress.
+1. Under **Deployment details**, **Status** turns to **Active**. The deployed update is marked with **(deploying)**.
-7. After your device is successfully updated, you should see your compliance chart and deployment details update to reflect the same.
+ :::image type="content" source="media/deploy-update/deployment-active.png" alt-text="Screenshot that shows Deployment active." lightbox="media/deploy-update/deployment-active.png":::
- :::image type="content" source="media/deploy-update/update-succeeded.png" alt-text="Update succeeded" lightbox="media/deploy-update/update-succeeded.png":::
+1. View the compliance chart to see that the update is now in progress.
+1. After your device is successfully updated, you see that your compliance chart and deployment details updated to reflect the same.
-## Monitor an update deployment
+ :::image type="content" source="media/deploy-update/update-succeeded.png" alt-text="Screenshot that shows Update succeeded." lightbox="media/deploy-update/update-succeeded.png":::
-1. Select the Deployment history tab at the top of the page.
+## Monitor the update deployment
- :::image type="content" source="media/deploy-update/deployments-history.png" alt-text="Deployment History" lightbox="media/deploy-update/deployments-history.png":::
+1. Select the **Deployment history** tab at the top of the page.
-2. Select the details link next to the deployment you created.
+ :::image type="content" source="media/deploy-update/deployments-history.png" alt-text="Screenshot that shows Deployment history." lightbox="media/deploy-update/deployments-history.png":::
- :::image type="content" source="media/deploy-update/deployment-details.png" alt-text="Deployment details" lightbox="media/deploy-update/deployment-details.png":::
+1. Select **Details** next to the deployment you created.
-3. Select Refresh to view the latest status details.
+ :::image type="content" source="media/deploy-update/deployment-details.png" alt-text="Screenshot that shows Deployment details." lightbox="media/deploy-update/deployment-details.png":::
+1. Select **Refresh** to view the latest status details.
-You have now completed a successful end-to-end image update using Device Update for IoT Hub on a Raspberry Pi 3 B+ device.
+You've now completed a successful end-to-end image update by using Device Update for IoT Hub on a Raspberry Pi 3 B+ device.
## Clean up resources
-When no longer needed, clean up your Device Update account, instance, IoT Hub and IoT device.
+When no longer needed, clean up your Device Update account, instance, IoT hub, and IoT device.
## Next steps

> [!div class="nextstepaction"]
-> [Simulator Reference Agent](device-update-simulator.md)
+> [Simulator reference agent](device-update-simulator.md)
iot-hub-device-update Device Update Simulator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-simulator.md
Title: Device Update for Azure IoT Hub tutorial using the Ubuntu (18.04 x64) Simulator Reference Agent | Microsoft Docs
-description: Get started with Device Update for Azure IoT Hub using the Ubuntu (18.04 x64) Simulator Reference Agent.
+ Title: Device Update for Azure IoT Hub tutorial using the Ubuntu (18.04 x64) simulator reference agent | Microsoft Docs
+description: Get started with Device Update for Azure IoT Hub using the Ubuntu (18.04 x64) simulator reference agent.
Last updated 1/26/2022
-# Device Update for Azure IoT Hub tutorial using the Ubuntu (18.04 x64) Simulator Reference Agent
+# Tutorial: Device Update for Azure IoT Hub using the Ubuntu (18.04 x64) simulator reference agent
-Device Update for IoT Hub supports image-based, package-based and script-based updates.
+Device Update for Azure IoT Hub supports image-based, package-based, and script-based updates.
-Image updates provide a higher level of confidence in the end-state of the device. It is typically easier to replicate the results of an image-update between a pre-production environment and a production environment, since it doesnΓÇÖt pose the same challenges as packages and their dependencies. Due to their atomic nature, one can also adopt an A/B failover model easily.
+Image updates provide a higher level of confidence in the end state of the device. It's typically easier to replicate the results of an image update between a preproduction environment and a production environment because it doesn't pose the same challenges as packages and their dependencies. Because of their atomic nature, you can also adopt an A/B failover model easily.
-This tutorial walks you through the steps to complete an end-to-end image-based update using Device Update for IoT Hub.
+This tutorial walks you through the steps to complete an end-to-end image-based update by using Device Update for IoT Hub.
-In this tutorial you will learn how to:
+In this tutorial, you'll learn how to:
> [!div class="checklist"]
-> * Download and install image
-> * Add a tag to your IoT device
-> * Import an update
-> * Create a device group
-> * Deploy an image update
-> * Monitor the update deployment
+> * Download and install an image.
+> * Add a tag to your IoT device.
+> * Import an update.
+> * Create a device group.
+> * Deploy an image update.
+> * Monitor the update deployment.
+ ## Prerequisites
-* If you haven't already done so, create a [Device Update account and instance](create-device-update-account.md), including configuring an IoT Hub.
-## Add device to Azure IoT Hub
+If you haven't already done so, create a [Device Update account and instance](create-device-update-account.md) and configure an IoT hub.
+
+## Add a device to Azure IoT Hub
+
+After the Device Update agent is running on an IoT device, you must add the device to IoT Hub. From within IoT Hub, a connection string is generated for a particular device.
-Once the Device Update agent is running on an IoT device, the device needs to be added to the Azure IoT Hub. From within Azure IoT Hub, a connection string will be generated for a particular device.
+1. From the Azure portal, start the Device Update for IoT Hub.
+1. Create a new device.
+1. On the left pane, go to **IoT Devices**. Then select **New**.
+1. Under **Device ID**, enter a name for the device. Ensure that the **Autogenerate keys** checkbox is selected.
+1. Select **Save**.
+1. Now, you're returned to the **Devices** page and the device you created should be in the list. Select that device.
+1. In the device view, select the **Copy** icon next to **Primary Connection String**.
+1. Paste the copied characters somewhere for later use in the following steps:
-1. From the Azure portal, launch the Device Update IoT Hub.
-2. Create a new device.
-3. On the left-hand side of the page, navigate to 'IoT Devices' > Select "New".
-4. Provide a name for the device under 'Device ID'--Ensure that "Autogenerate keys" is checkbox is selected.
-5. Select 'Save'.
-6. Now, you'll be returned to the 'Devices' page and the device you created should be in the list. Select that device.
-7. In the device view, select the 'Copy' icon next to 'Primary Connection String'.
-8. Paste the copied characters somewhere for later use in the steps below. **This copied string is your device connection string**.
+ **This copied string is your device connection string**.
-## Install Device Update agent to test it as a simulator
+## Install a Device Update agent to test it as a simulator
-1. Follow the instructions to [Install the Azure IoT Edge runtime](../iot-edge/how-to-provision-single-device-linux-symmetric.md?view=iotedge-2020-11&preserve-view=true).
+1. Follow the instructions to [install the Azure IoT Edge runtime](../iot-edge/how-to-provision-single-device-linux-symmetric.md?view=iotedge-2020-11&preserve-view=true).
> [!NOTE]
- > The Device Update agent doesn't depend on IoT Edge. But, it does rely on the IoT Identity Service daemon that is installed with IoT Edge (1.2.0 and higher) to obtain an identity and connect to IoT Hub.
+ > The Device Update agent doesn't depend on IoT Edge. But it does rely on the IoT Identity Service daemon that's installed with IoT Edge (1.2.0 and higher) to obtain an identity and connect to IoT Hub.
>
- > Although not covered in this tutorial, the [IoT Identity Service daemon can be installed standalone on Linux-based IoT devices](https://azure.github.io/iot-identity-service/installation.html). The sequence of installation matters. The Device Update package agent must be installed _after_ the IoT Identity Service. Otherwise, the package agent will not be registered as an authorized component to establish a connection to IoT Hub.
+ > Although not covered in this tutorial, the [IoT Identity Service daemon can be installed standalone on Linux-based IoT devices](https://azure.github.io/iot-identity-service/installation.html). The sequence of installation matters. The Device Update package agent must be installed _after_ the IoT Identity Service. Otherwise, the package agent won't be registered as an authorized component to establish a connection to IoT Hub.
1. Then, install the Device Update agent .deb packages:

   ```bash
   sudo apt-get install deviceupdate-agent deliveryoptimization-plugin-apt
   ```
-
-2. Enter your IoT device's module (or device, depending on how you [provisioned the device with Device Update](device-update-agent-provisioning.md)) primary connection string in the configuration file by running the command below.
+
+1. Enter your IoT device's module (or device, depending on how you [provisioned the device with Device Update](device-update-agent-provisioning.md)) primary connection string in the configuration file by running the following command:
   ```bash
   sudo nano /etc/adu/du-config.json
   ```
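   The exact contents of `du-config.json` vary by agent version, so treat the following as a rough sketch only. The field names and placeholder values here are assumptions about a typical agent configuration, not taken from this tutorial. The key point is that the connection string you copied earlier goes into the agent's connection settings, similar to the `connectionData` value shown here:

   ```json
   {
     "schemaVersion": "1.1",
     "aduShellTrustedUsers": [ "adu", "do" ],
     "manufacturer": "contoso",
     "model": "virtual-device",
     "agents": [
       {
         "name": "main",
         "runas": "adu",
         "connectionSource": {
           "connectionType": "string",
           "connectionData": "HostName=<your-hub>.azure-devices.net;DeviceId=<your-device-id>;SharedAccessKey=<key>"
         },
         "manufacturer": "contoso",
         "model": "virtual-device"
       }
     ]
   }
   ```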
-
-3. Set up the agent to run as a simulator. Run following command on the IoT device so that the Device Update agent will invoke the simulator handler to process an package update with APT ('microsoft/apt:1').
+
+1. Set up the agent to run as a simulator. Run the following command on the IoT device so that the Device Update agent invokes the simulator handler to process a package update with APT ('microsoft/apt:1'):
   ```sh
   sudo /usr/bin/AducIotAgent --register-content-handler /var/lib/adu/extensions/sources/libmicrosoft_simulator_1.so --update-type 'microsoft/apt:1'
   ```
Once the Device Update agent is running on an IoT device, the device needs to be
`sudo /usr/bin/AducIotAgent --register-content-handler <full path to the handler file> --update-type <update type name>`
-4. Download the sample-du-simulator-data.json from [Release Assets](https://github.com/Azure/iot-hub-device-update/releases). Run the command below to create and edit the du-simulator-data.json in the tmp folder.
+1. Download the `sample-du-simulator-data.json` from [Release Assets](https://github.com/Azure/iot-hub-device-update/releases). Run the following command to create and edit the `du-simulator-data.json` file in the tmp folder:
   ```sh
   sudo nano /tmp/du-simulator-data.json
   sudo chown adu:adu /tmp/du-simulator-data.json
   sudo chmod 664 /tmp/du-simulator-data.json
   ```
- Copy the contents from the downloaded file into the du-simulator-data.json. Press Ctrl + X to save the changes.
+
+ Copy the contents from the downloaded file into the `du-simulator-data.json` file. Select **Ctrl+X** to save the changes.
- If /tmp doesn't exist then
+ If the `/tmp` directory doesn't exist, create it:
   ```sh
   sudo mkdir /tmp
   sudo chown root:root /tmp
   sudo chmod 1777 /tmp
   ```
-
-5. Restart the Device Update agent by running the command below.
+
+1. Restart the Device Update agent by running the following command:
   ```bash
   sudo systemctl restart adu-agent
   ```
-
+ Device Update for Azure IoT Hub software is subject to the following license terms:
+
+ * [Device Update for IoT Hub license](https://github.com/Azure/iot-hub-device-update/blob/main/LICENSE)
+ * [Delivery optimization client license](https://github.com/microsoft/do-client/blob/main/LICENSE)
-
-Read the license terms prior to using the agent. Your installation and use constitutes your acceptance of these terms. If you do not agree with the license terms, do not use the Device Update for IoT Hub agent.
-> [!NOTE]
-> After your testing with the simulator run the below command to invoke the APT handler and [deploy over-the-air Package Updates](device-update-ubuntu-agent.md)
+Read the license terms prior to using the agent. Your installation and use constitutes your acceptance of these terms. If you don't agree with the license terms, don't use the Device Update for IoT Hub agent.
+
+> [!NOTE]
+> After your testing with the simulator, run the following command to invoke the APT handler and [deploy over-the-air package updates](device-update-ubuntu-agent.md):
   ```sh
   # sudo /usr/bin/AducIotAgent --register-content-handler /var/lib/adu/extensions/sources/libmicrosoft_apt_1.so --update-type 'microsoft/apt:1'
   ```

## Add a tag to your device
-1. Log into [Azure portal](https://portal.azure.com) and navigate to the IoT Hub.
-
-2. From 'IoT Devices' or 'IoT Edge' on the left navigation pane find your IoT device and navigate to the Device Twin or Module Twin.
-
-3. In the Module Twin of the Device Update agent module, delete any existing Device Update tag value by setting them to null. If you are using Device identity with Device Update agent make these changes on the Device Twin.
-
-4. Add a new Device Update tag value as shown below.
+1. Sign in to the [Azure portal](https://portal.azure.com) and go to the IoT hub.
+1. From **IoT Devices** or **IoT Edge** on the left pane, find your IoT device and go to the device twin or module twin.
+1. In the module twin of the Device Update agent module, delete any existing Device Update tag values by setting them to null. If you're using the device identity with a Device Update agent, make these changes on the device twin.
+1. Add a new Device Update tag value, as shown:
   ```JSON
   "tags": {
       "ADUGroup": "<CustomTagValue>"
   },
   ```

Read the license terms prior to using the agent. Your installation and use const
-## Import update
+## Import the update
-1. Download the sample tutorial manifest (Tutorial Import Manifest_Sim.json) and sample update (adu-update-image-raspberrypi3-0.6.5073.1.swu) from [Release Assets](https://github.com/Azure/iot-hub-device-update/releases) for the latest agent. _Note_: the update file is re-used update files from the Raspberry Pi tutorial, because the update in this tutorial will be simulated and therefore the specific file content doesn't matter.
+1. Download the sample tutorial manifest (Tutorial Import Manifest_Sim.json) and sample update (adu-update-image-raspberrypi3-0.6.5073.1.swu) from [Release Assets](https://github.com/Azure/iot-hub-device-update/releases) for the latest agent. The update file is reused from the Raspberry Pi tutorial. Because the update in this tutorial is simulated, the specific file content doesn't matter.
+1. Sign in to the [Azure portal](https://portal.azure.com/) and go to your IoT hub with Device Update. On the left pane, under **Automatic Device Management**, select **Updates**.
+1. Select the **Updates** tab.
+1. Select **+ Import New Update**.
+1. Select **+ Select from storage container**. Select an existing account or create a new account by using **+ Storage account**. Then select an existing container or create a new container by using **+ Container**. This container will be used to stage your update files for importing.
-2. Log in to the [Azure portal](https://portal.azure.com/) and navigate to your IoT Hub with Device Update. Then, select the Updates option under Automatic Device Management from the left-hand navigation bar.
+ > [!NOTE]
+ > We recommend that you use a new container each time you import an update to avoid accidentally importing files from previous updates. If you don't use a new container, be sure to delete any files from the existing container before you finish this step.
-3. Select the Updates tab.
+ :::image type="content" source="media/import-update/storage-account-ppr.png" alt-text="Screenshot that shows Storage accounts and Containers." lightbox="media/import-update/storage-account-ppr.png":::
-4. Select "+ Import New Update".
+1. In your container, select **Upload** and go to the files you downloaded in step 1. After you've selected all your update files, select **Upload**. Then select the **Select** button to return to the **Import update** page.
-5. Select "+ Select from storage container". Select an existing account or create a new account using "+ Storage account". Then select an existing container or create a new container using "+ Container". This container will be used to stage your update files for importing.
- > [!NOTE]
- > We recommend using a new container each time you import an update to avoid accidentally importing files from previous updates. If you don't use a new container, be sure to delete any files from the existing container before completing this step.
+ :::image type="content" source="media/import-update/import-select-ppr.png" alt-text="Screenshot that shows selecting uploaded files." lightbox="media/import-update/import-select-ppr.png":::
- :::image type="content" source="media/import-update/storage-account-ppr.png" alt-text="Storage Account" lightbox="media/import-update/storage-account-ppr.png":::
+ _This screenshot shows the import step. File names might not match the ones used in the example._
-6. In your container, select "Upload" and navigate to files downloaded in **Step 1**. When you've selected all your update files, select "Upload" Then click the "Select" button to return to the "Import update" page.
+1. On the **Import update** page, review the files to be imported. Then select **Import update** to start the import process.
- :::image type="content" source="media/import-update/import-select-ppr.png" alt-text="Select Uploaded Files" lightbox="media/import-update/import-select-ppr.png":::
- _This screenshot shows the import step and file names may not match the ones used in the example_
+ :::image type="content" source="media/import-update/import-start-2-ppr.png" alt-text="Screenshot that shows Import update." lightbox="media/import-update/import-start-2-ppr.png":::
-8. On the Import update page, review the files to be imported. Then select "Import update" to start the import process.
+1. The import process begins, and the screen switches to the **Import History** section. When the **Status** column indicates the import has succeeded, select the **Available updates** header. You should see your imported update in the list now.
- :::image type="content" source="media/import-update/import-start-2-ppr.png" alt-text="Import Start" lightbox="media/import-update/import-start-2-ppr.png":::
+ :::image type="content" source="media/import-update/update-ready-ppr.png" alt-text="Screenshot that shows the job status." lightbox="media/import-update/update-ready-ppr.png":::
-9. The import process begins, and the screen switches to the "Import History" section. When the `Status` column indicates the import has succeeded, select the "Available Updates" header. You should see your imported update in the list now.
+[Learn more](import-update.md) about how to import updates.
- :::image type="content" source="media/import-update/update-ready-ppr.png" alt-text="Job Status" lightbox="media/import-update/update-ready-ppr.png":::
-
-[Learn more](import-update.md) about importing updates.
+## Create an update group
-## Create update group
+1. Go to the **Groups and Deployments** tab at the top of the page.
-1. Go to the Groups and Deployments tab at the top of the page.
- :::image type="content" source="media/create-update-group/ungrouped-devices.png" alt-text="Screenshot of ungrouped devices." lightbox="media/create-update-group/ungrouped-devices.png":::
+ :::image type="content" source="media/create-update-group/ungrouped-devices.png" alt-text="Screenshot that shows ungrouped devices." lightbox="media/create-update-group/ungrouped-devices.png":::
-2. Select the "Add group" button to create a new group.
- :::image type="content" source="media/create-update-group/add-group.png" alt-text="Screenshot of device group addition." lightbox="media/create-update-group/add-group.png":::
+1. Select **Add group** to create a new group.
-3. Select an IoT Hub tag and Device Class from the list and then select Create group.
- :::image type="content" source="media/create-update-group/select-tag.png" alt-text="Screenshot of tag selection." lightbox="media/create-update-group/select-tag.png":::
+ :::image type="content" source="media/create-update-group/add-group.png" alt-text="Screenshot that shows device group addition." lightbox="media/create-update-group/add-group.png":::
-4. Once the group is created, you will see that the update compliance chart and groups list are updated. Update compliance chart shows the count of devices in various states of compliance: On latest update, New updates available, and Updates in Progress. [Learn about update compliance.](device-update-compliance.md)
- :::image type="content" source="media/create-update-group/updated-view.png" alt-text="Screenshot of update compliance view." lightbox="media/create-update-group/updated-view.png":::
+1. Select an **IoT Hub** tag and **Device Class** from the list. Then select **Create group**.
-5. You should see your newly created group and any available updates for the devices in the new group. If there are devices that don't meet the device class requirements of the group, they will show up in a corresponding invalid group. You can deploy the best available update to the new user-defined group from this view by clicking on the "Deploy" button next to the group.
+ :::image type="content" source="media/create-update-group/select-tag.png" alt-text="Screenshot that shows tag selection." lightbox="media/create-update-group/select-tag.png":::
-[Learn more](create-update-group.md) about adding tags and creating update groups
+1. After the group is created, the update compliance chart and groups list are updated. The update compliance chart shows the count of devices in various states of compliance: **On latest update**, **New updates available**, and **Updates in progress**. [Learn about update compliance](device-update-compliance.md).
+ :::image type="content" source="media/create-update-group/updated-view.png" alt-text="Screenshot that shows the update compliance view." lightbox="media/create-update-group/updated-view.png":::
-## Deploy update
+1. You should see your newly created group and any available updates for the devices in the new group. If there are devices that don't meet the device class requirements of the group, they'll show up in a corresponding invalid group. To deploy the best available update to the new user-defined group from this view, select **Deploy** next to the group.
-1. Once the group is created, you should see a new update available for your device group, with a link to the update under Best Update (you may need to Refresh once). [Learn More about update compliance.](device-update-compliance.md)
+[Learn more](create-update-group.md) about how to add tags and create update groups.
-2. Select the target group by clicking on the group name. You will be directed to the group details under Group basics.
+## Deploy the update
- :::image type="content" source="media/deploy-update/group-basics.png" alt-text="Group details" lightbox="media/deploy-update/group-basics.png":::
+1. After the group is created, you should see a new update available for your device group. A link to the update should be under **Best update**. You might need to refresh once. [Learn more about update compliance](device-update-compliance.md).
+1. Select the target group by selecting the group name. You're directed to **Group details** under **Group basics**.
-3. To initiate the deployment, go to the Current deployment tab. Click the deploy link next to the desired update from the Available updates section. The best, available update for a given group will be denoted with a "Best" highlight.
+ :::image type="content" source="media/deploy-update/group-basics.png" alt-text="Screenshot that shows Group details." lightbox="media/deploy-update/group-basics.png":::
- :::image type="content" source="media/deploy-update/select-update.png" alt-text="Select update" lightbox="media/deploy-update/select-update.png":::
+1. To start the deployment, go to the **Current deployment** tab. Select the **deploy** link next to the desired update from the **Available updates** section. The best available update for a given group is denoted with a **Best** highlight.
-4. Schedule your deployment to start immediately or in the future, then select Create.
+ :::image type="content" source="media/deploy-update/select-update.png" alt-text="Screenshot that shows selecting an update." lightbox="media/deploy-update/select-update.png":::
- :::image type="content" source="media/deploy-update/create-deployment.png" alt-text="Create deployment" lightbox="media/deploy-update/create-deployment.png":::
+1. Schedule your deployment to start immediately or in the future. Then select **Create**.
-5. The Status under Deployment details should turn to Active, and the deployed update should be marked with "(deploying)".
+ :::image type="content" source="media/deploy-update/create-deployment.png" alt-text="Screenshot that shows Create deployment." lightbox="media/deploy-update/create-deployment.png":::
- :::image type="content" source="media/deploy-update/deployment-active.png" alt-text="Deployment active" lightbox="media/deploy-update/deployment-active.png":::
+1. Under **Deployment details**, **Status** turns to **Active**. The deployed update is marked with **(deploying)**.
-6. View the compliance chart. You should see the update is now in progress.
+ :::image type="content" source="media/deploy-update/deployment-active.png" alt-text="Screenshot that shows the deployment is active." lightbox="media/deploy-update/deployment-active.png":::
-7. After your device is successfully updated, you should see your compliance chart and deployment details update to reflect the same.
+1. View the compliance chart to see that the update is now in progress.
+1. After your device is successfully updated, you see that your compliance chart and deployment details are updated to reflect the success.
- :::image type="content" source="media/deploy-update/update-succeeded.png" alt-text="Update succeeded" lightbox="media/deploy-update/update-succeeded.png":::
+ :::image type="content" source="media/deploy-update/update-succeeded.png" alt-text="Screenshot that shows Update succeeded." lightbox="media/deploy-update/update-succeeded.png":::
-## Monitor an update deployment
+## Monitor the update deployment
-1. Select the Deployment history tab at the top of the page.
+1. Select the **Deployment history** tab at the top of the page.
- :::image type="content" source="media/deploy-update/deployments-history.png" alt-text="Deployment History" lightbox="media/deploy-update/deployments-history.png":::
+ :::image type="content" source="media/deploy-update/deployments-history.png" alt-text="Screenshot that shows Deployment history." lightbox="media/deploy-update/deployments-history.png":::
-2. Select the details link next to the deployment you created.
+1. Select **Details** next to the deployment you created.
- :::image type="content" source="media/deploy-update/deployment-details.png" alt-text="Deployment details" lightbox="media/deploy-update/deployment-details.png":::
+ :::image type="content" source="media/deploy-update/deployment-details.png" alt-text="Screenshot that shows Deployment details." lightbox="media/deploy-update/deployment-details.png":::
-3. Select Refresh to view the latest status details.
+1. Select **Refresh** to view the latest status details.
-You have now completed a successful end-to-end image update using Device Update for IoT Hub using the Ubuntu (18.04 x64) Simulator Reference Agent.
+You've now completed a successful end-to-end image update by using Device Update for IoT Hub with the Ubuntu (18.04 x64) simulator reference agent.
## Clean up resources
-When no longer needed, clean up your Device Update account, instance, IoT Hub and IoT device.
+When no longer needed, clean up your Device Update account, instance, IoT hub, and IoT device.
## Next steps
iot-hub-device-update Device Update Ubuntu Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-ubuntu-agent.md
Title: Device Update for Azure IoT Hub tutorial using the Ubuntu Server 18.04 x64 Package agent | Microsoft Docs
-description: Get started with Device Update for Azure IoT Hub using the Ubuntu Server 18.04 x64 Package agent.
+ Title: Device Update for Azure IoT Hub tutorial using the Ubuntu Server 18.04 x64 package agent | Microsoft Docs
+description: Get started with Device Update for Azure IoT Hub by using the Ubuntu Server 18.04 x64 package agent.
Last updated 1/26/2022
-# Device Update for Azure IoT Hub tutorial using the package agent on Ubuntu Server 18.04 x64
+# Tutorial: Device Update for Azure IoT Hub using the package agent on Ubuntu Server 18.04 x64
-Device Update for IoT Hub supports image-based, package-based and script-based updates.
+Device Update for Azure IoT Hub supports image-based, package-based, and script-based updates.
-Package-based updates are targeted updates that alter only a specific component or application on the device. They lead to lower consumption of bandwidth and helps reduce the time to download and install the update. Package-based updates also typically allow for less downtime of devices when applying an update and avoid the overhead of creating images. They use an [APT manifest](device-update-apt-manifest.md) which provides the Device Update Agent with the information it needs to download and install the packages specified in the APT Manifest file (as well as their dependencies) from a designated repository.
+Package-based updates are targeted updates that alter only a specific component or application on the device. They lead to lower consumption of bandwidth and help reduce the time to download and install the update. Package-based updates also typically allow for less downtime of devices when you apply an update and avoid the overhead of creating images. They use an [APT manifest](device-update-apt-manifest.md), which provides the Device Update agent with the information it needs to download and install the packages specified in the APT manifest file (and their dependencies) from a designated repository.
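
To make that concrete, here's a rough sketch of the shape of an APT manifest. It follows the common pattern of a name, a version, and a list of packages, but the exact schema depends on the manifest version you use, and the package entry below is an illustrative assumption rather than the manifest used later in this tutorial:

```json
{
  "name": "contoso-iot-edge-update",
  "version": "1.0.1",
  "packages": [
    {
      "name": "aziot-edge"
    }
  ]
}
```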
-This end-to-end tutorial walks you through updating Azure IoT Edge on Ubuntu Server 18.04 x64 by using the Device Update package agent. Although the tutorial demonstrates updating IoT Edge, using similar steps you could update other packages such as the container engine it uses.
+This tutorial walks you through updating Azure IoT Edge on Ubuntu Server 18.04 x64 by using the Device Update package agent. Although the tutorial demonstrates updating IoT Edge, by using similar steps you could update other packages, such as the container engine it uses.
-The tools and concepts in this tutorial still apply even if you plan to use a different OS platform configuration. Complete this introduction to an end-to-end update process, then choose your preferred form of updating and OS platform to dive into the details.
+The tools and concepts in this tutorial still apply even if you plan to use a different OS platform configuration. Finish this introduction to an end-to-end update process. Then choose your preferred update type and OS platform to dive into the details.
-In this tutorial you will learn how to:
+In this tutorial, you'll learn how to:
> [!div class="checklist"]
-> * Download and install the Device Update agent and its dependencies
-> * Add a tag to your device
-> * Import an update
-> * Create a device group
-> * Deploy a package update
-> * Monitor the update deployment
+> * Download and install the Device Update agent and its dependencies.
+> * Add a tag to your device.
+> * Import an update.
+> * Create a device group.
+> * Deploy a package update.
+> * Monitor the update deployment.
+ ## Prerequisites
-* If you haven't already done so, create a [Device Update account and instance](create-device-update-account.md), including configuring an IoT Hub.
-* The [connection string for an IoT Edge device](../iot-edge/how-to-provision-single-device-linux-symmetric.md?view=iotedge-2020-11&preserve-view=true#view-registered-devices-and-retrieve-provisioning-information).
-* If you used the [Simulator agent tutorial](device-update-simulator.md) for testing prior to this, run the below command to invoke the APT handler and can deploy over-the-air Package Updates in this tutorial.
+* If you haven't already done so, create a [Device Update account and instance](create-device-update-account.md). Configure an IoT hub.
+* You need the [connection string for an IoT Edge device](../iot-edge/how-to-provision-single-device-linux-symmetric.md?view=iotedge-2020-11&preserve-view=true#view-registered-devices-and-retrieve-provisioning-information).
+* If you used the [Simulator agent tutorial](device-update-simulator.md) for prior testing, run the following command to invoke the APT handler and deploy over-the-air package updates in this tutorial:
-```sh
-# sudo /usr/bin/AducIotAgent --register-content-handler /var/lib/adu/extensions/sources/libmicrosoft_apt_1.so --update-type 'microsoft/a pt:1'
-```
+ ```sh
+ # sudo /usr/bin/AducIotAgent --register-content-handler /var/lib/adu/extensions/sources/libmicrosoft_apt_1.so --update-type 'microsoft/apt:1'
+ ```
## Prepare a device
-### Using the Automated Deploy to Azure Button
-
-For convenience, this tutorial uses a [cloud-init](../virtual-machines/linux/using-cloud-init.md)-based [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) to help you quickly set up an Ubuntu 18.04 LTS virtual machine. It installs both the Azure IoT Edge runtime and the Device Update package agent and then automatically configures the device with provisioning information using the device connection string for an IoT Edge device (prerequisite) that you supply. The Azure Resource Manager template also avoids the need to start an SSH session to complete setup.
-1. To begin, click the button below:
+Prepare a device automatically or manually.
- [![Deploy to Azure Button for iotedge-vm-deploy](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fiotedge-vm-deploy%2Fdevice-update-tutorial%2FedgeDeploy.json)
+### Use the automated Deploy to Azure button
-1. On the newly launched window, fill in the available form fields:
-
- > [!div class="mx-imgBorder"]
- > [![Screenshot showing the iotedge-vm-deploy template](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-deploy.png)](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-deploy.png)
- **Subscription**: The active Azure subscription to deploy the virtual machine into.
+For convenience, this tutorial uses a [cloud-init](../virtual-machines/linux/using-cloud-init.md)-based [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) to help you quickly set up an Ubuntu 18.04 LTS virtual machine. It installs both the Azure IoT Edge runtime and the Device Update package agent. Then it automatically configures the device with provisioning information by using the device connection string for an IoT Edge device (prerequisite) that you supply. The Resource Manager template also avoids the need to start an SSH session to complete setup.
- **Resource group**: An existing or newly created Resource Group to contain the virtual machine and it's associated resources.
+1. To begin, select the button:
- **DNS Label Prefix**: A required value of your choosing that is used to prefix the hostname of the virtual machine.
+ [![Screenshot showing the Deploy to Azure button for iotedge-vm-deploy](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fiotedge-vm-deploy%2Fdevice-update-tutorial%2FedgeDeploy.json).
- **Admin Username**: A username, which will be provided root privileges on deployment.
+1. Fill in the available text boxes:
- **Device Connection String**: A [device connection string](../iot-edge/how-to-provision-single-device-linux-symmetric.md#view-registered-devices-and-retrieve-provisioning-information) for a device that was created within your intended [IoT Hub](../iot-hub/about-iot-hub.md).
-
- **VM Size**: The [size](../cloud-services/cloud-services-sizes-specs.md) of the virtual machine to be deployed
-
- **Ubuntu OS Version**: The version of the Ubuntu OS to be installed on the base virtual machine. Leave the default value unchanged as it will be set to Ubuntu 18.04-LTS already.
+ > [!div class="mx-imgBorder"]
+ > [![Screenshot showing the iotedge-vm-deploy template.](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-deploy.png)](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-deploy.png)
+
+ - **Subscription**: The active Azure subscription to deploy the virtual machine into.
+ - **Resource group**: An existing or newly created resource group to contain the virtual machine and its associated resources.
+ - **Region**: The [geographic region](https://azure.microsoft.com/global-infrastructure/locations/) to deploy the virtual machine into. This value defaults to the location of the selected resource group.
+ - **DNS Label Prefix**: A required value of your choosing that's used to prefix the hostname of the virtual machine.
+ - **Admin Username**: A username that's granted root privileges on deployment.
+ - **Device Connection String**: A [device connection string](../iot-edge/how-to-provision-single-device-linux-symmetric.md#view-registered-devices-and-retrieve-provisioning-information) for a device that was created within your intended [IoT hub](../iot-hub/about-iot-hub.md).
+ - **VM Size**: The [size](../cloud-services/cloud-services-sizes-specs.md) of the virtual machine to be deployed.
+ - **Ubuntu OS Version**: The version of the Ubuntu OS to be installed on the base virtual machine. Leave the default value unchanged because it will be set to Ubuntu 18.04-LTS already.
+ - **Authentication Type**: Choose **sshPublicKey** or **password** based on your preference.
+ - **Admin Password or Key**: The value of the SSH Public Key or the value of the password based on the choice of authentication type.
- **Location**: The [geographic region](https://azure.microsoft.com/global-infrastructure/locations/) to deploy the virtual machine into, this value defaults to the location of the selected Resource Group.
+ After all the boxes are filled in, select the checkbox at the bottom of the page to accept the terms. Select **Purchase** to begin the deployment.
- **Authentication Type**: Choose **sshPublicKey** or **password** depending on your preference.
+1. Verify that the deployment has completed successfully. Allow a few minutes after deployment completes for the post-installation and configuration to finish installing IoT Edge and the device package update agent.
- **Admin Password or Key**: The value of the SSH Public Key or the value of the password depending on the choice of Authentication Type.
+ A virtual machine resource should have been deployed into the selected resource group. Note the machine name, which is in the format `vm-0000000000000`. Also note the associated **DNS name**, which is in the format `<dnsLabelPrefix>`.`<location>`.cloudapp.azure.com.
- When all fields have been filled in, select the checkbox at the bottom of the page to accept the terms and select **Purchase** to begin the deployment.
+ You can obtain the **DNS name** from the **Overview** section of the newly deployed virtual machine in the Azure portal.
-1. Verify that the deployment has completed successfully. Allow a few minutes after deployment completes for the post-installation and configuration to finish installing IoT Edge and the Device Package update agent.
+ > [!div class="mx-imgBorder"]
+ > [![Screenshot showing the DNS name of the iotedge vm.](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-dns-name.png)](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-dns-name.png)
+
+ > [!TIP]
+ > If you want to SSH into this VM after setup, use the associated **DNS name** with the following command:
+ `ssh <adminUsername>@<DNS_Name>`.
- A virtual machine resource should have been deployed into the selected resource group. Take note of the machine name that should be in the format `vm-0000000000000`. Also, take note of the associated **DNS Name**, which should be in the format `<dnsLabelPrefix>`.`<location>`.cloudapp.azure.com.
+### Manually prepare a device
- The **DNS Name** can be obtained from the **Overview** section of the newly deployed virtual machine within the Azure portal.
+Similar to the steps automated by the [cloud-init script](https://github.com/Azure/iotedge-vm-deploy/blob/1.2.0-rc4/cloud-init.txt), the following manual steps are used to install and configure a device. Use these steps to prepare a physical device.
- > [!div class="mx-imgBorder"]
- > [![Screenshot showing the dns name of the iotedge vm](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-dns-name.png)](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-dns-name.png)
- > [!TIP]
- > If you want to SSH into this VM after setup, use the associated **DNS Name** with the command:
- `ssh <adminUsername>@<DNS_Name>`
-### Manually prepare a device
-Similar to the steps automated by the [cloud-init script](https://github.com/Azure/iotedge-vm-deploy/blob/1.2.0-rc4/cloud-init.txt), following are manual steps to install and configure the device. These steps can be used to prepare a physical device.
+1. Follow the instructions to [install the Azure IoT Edge runtime](../iot-edge/how-to-provision-single-device-linux-symmetric.md?view=iotedge-2020-11&preserve-view=true).
-1. Follow the instructions to [Install the Azure IoT Edge runtime](../iot-edge/how-to-provision-single-device-linux-symmetric.md?view=iotedge-2020-11&preserve-view=true).
> [!NOTE]
- > The Device Update agent doesn't depend on IoT Edge. But, it does rely on the IoT Identity Service daemon that is installed with IoT Edge (1.2.0 and higher) to obtain an identity and connect to IoT Hub.
+ > The Device Update agent doesn't depend on IoT Edge. But it does rely on the IoT Identity Service daemon that's installed with IoT Edge (1.2.0 and higher) to obtain an identity and connect to IoT Hub.
>
- > Although not covered in this tutorial, the [IoT Identity Service daemon can be installed standalone on Linux-based IoT devices](https://azure.github.io/iot-identity-service/installation.html). The sequence of installation matters. The Device Update package agent must be installed _after_ the IoT Identity Service. Otherwise, the package agent will not be registered as an authorized component to establish a connection to IoT Hub.
-1. Then, install the Device Update agent .deb packages.
+ > Although not covered in this tutorial, the [IoT Identity Service daemon can be installed standalone on Linux-based IoT devices](https://azure.github.io/iot-identity-service/installation.html). The sequence of installation matters. The Device Update package agent must be installed _after_ the IoT Identity Service. Otherwise, the package agent won't be registered as an authorized component to establish a connection to IoT Hub.
+
+1. Install the Device Update agent .deb packages:
   ```bash
   sudo apt-get install deviceupdate-agent deliveryoptimization-plugin-apt
   ```
-
-1. Enter your IoT device's module (or device, depending on how you [provisioned the device with Device Update](device-update-agent-provisioning.md)) primary connection string in the configuration file by running the command below.
+
+1. Enter your IoT device's module (or device, depending on how you [provisioned the device with Device Update](device-update-agent-provisioning.md)) primary connection string in the configuration file by running the following command:
   ```bash
   sudo nano /etc/adu/du-config.json
   ```
-
-1. Finally restart the Device Update agent by running the command below.
+
+1. Restart the Device Update agent by running the following command:
   ```bash
   sudo systemctl restart adu-agent
   ```

Device Update for Azure IoT Hub software packages are subject to the following license terms:

* [Device update for IoT Hub license](https://github.com/Azure/iot-hub-device-update/blob/main/LICENSE)
* [Delivery optimization client license](https://github.com/microsoft/do-client/blob/main/LICENSE)
-Read the license terms prior to using a package. Your installation and use of a package constitutes your acceptance of these terms. If you do not agree with the license terms, do not use that package.
+Read the license terms before you use a package. Your installation and use of a package constitutes your acceptance of these terms. If you don't agree with the license terms, don't use that package.
## Add a tag to your device
-1. Log into [Azure portal](https://portal.azure.com) and navigate to the IoT Hub.
-
-2. From 'IoT Edge' on the left navigation pane, find your IoT Edge device and navigate to the Device Twin or Module Twin.
+1. Sign in to the [Azure portal](https://portal.azure.com) and go to the IoT hub.
+1. On the left pane, under **IoT Edge**, find your IoT Edge device and go to the device twin or module twin.
+1. In the module twin of the Device Update agent module, delete any existing Device Update tag values by setting them to null. If you're using the device identity with a Device Update agent, make these changes on the device twin.
+1. Add a new Device Update tag value, as shown:
+
+ ```JSON
+ "tags": {
+ "ADUGroup": "<CustomTagValue>"
+ },
+ ```
+
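For orientation, the `ADUGroup` tag sits at the top level of the twin document, alongside `properties`. A trimmed sketch of how the module twin might look after this edit (the identifiers are placeholders, and most twin metadata is omitted):

```json
{
  "deviceId": "<your-device-id>",
  "moduleId": "<device-update-module>",
  "tags": {
    "ADUGroup": "<CustomTagValue>"
  },
  "properties": {
    "desired": {},
    "reported": {}
  }
}
```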
+## Import the update
+
+1. Go to [Device Update releases](https://github.com/Azure/iot-hub-device-update/releases) in GitHub and select the **Assets** dropdown list. Download `Edge.package.update.samples.zip` by selecting it. Extract the contents of the folder to discover a sample APT manifest (sample-1.0.1-aziot-edge-apt-manifest.json) and its corresponding import manifest (sample-1.0.1-aziot-edge-importManifest.json).
+1. Sign in to the [Azure portal](https://portal.azure.com/) and go to your IoT hub with Device Update. On the left pane, under **Automatic Device Management**, select **Updates**.
+1. Select the **Updates** tab.
+1. Select **+ Import New Update**.
+1. Select **+ Select from storage container**. Select an existing account or create a new account by using **+ Storage account**. Then select an existing container or create a new container by using **+ Container**. This container is used to stage your update files for importing.
-3. In the Module Twin of the Device Update agent module, delete any existing Device Update tag value by setting them to null. If you are using Device identity with Device Update agent make these changes on the Device Twin.
-
-4. Add a new Device Update tag value as shown below.
+ > [!NOTE]
+ > We recommend that you use a new container each time you import an update to avoid accidentally importing files from previous updates. If you don't use a new container, be sure to delete any files from the existing container before you finish this step.
-```JSON
- "tags": {
- "ADUGroup": "<CustomTagValue>"
- },
-```
+ :::image type="content" source="media/import-update/storage-account-ppr.png" alt-text="Screenshot that shows Storage account." lightbox="media/import-update/storage-account-ppr.png":::
-## Import update
+1. In your container, select **Upload** and go to the files you downloaded in step 1. After you select all your update files, select **Upload**. Then select the **Select** button to return to the **Import update** page.
-1. Go to [Device Update releases](https://github.com/Azure/iot-hub-device-update/releases) in GitHub and click the "Assets" drop-down. Download the `Edge.package.update.samples.zip` by clicking on it. Extract the contents of the folder to discover a sample APT manifest(sample-1.0.1-aziot-edge-apt-manifest.json) and its corresponding import manifest(sample-1.0.1-aziot-edge-importManifest.json).
+ :::image type="content" source="media/import-update/import-select-ppr.png" alt-text="Screenshot that shows selecting uploaded files." lightbox="media/import-update/import-select-ppr.png":::
+
+ _This screenshot shows the import step. File names might not match the ones used in the example._
-2. Log in to the [Azure portal](https://portal.azure.com/) and navigate to your IoT Hub with Device Update. Then, select the Updates option under Automatic Device Management from the left-hand navigation bar.
+1. On the **Import update** page, review the files to be imported. Then select **Import update** to start the import process.
-3. Select the Updates tab.
+ :::image type="content" source="media/import-update/import-start-2-ppr.png" alt-text="Screenshot that shows starting the Import process." lightbox="media/import-update/import-start-2-ppr.png":::
-4. Select "+ Import New Update".
+1. The import process begins, and the screen switches to the **Import History** section. When the **Status** column indicates that the import succeeded, select the **Available updates** header. You should see your imported update in the list now.
-5. Select "+ Select from storage container". Select an existing account or create a new account using "+ Storage account". Then select an existing container or create a new container using "+ Container". This container will be used to stage your update files for importing.
- > [!NOTE]
- > We recommend using a new container each time you import an update to avoid accidentally importing files from previous updates. If you don't use a new container, be sure to delete any files from the existing container before completing this step.
-
- :::image type="content" source="media/import-update/storage-account-ppr.png" alt-text="Storage Account" lightbox="media/import-update/storage-account-ppr.png":::
+ :::image type="content" source="media/import-update/update-ready-ppr.png" alt-text="Screenshot that shows the job status." lightbox="media/import-update/update-ready-ppr.png":::
-6. In your container, select "Upload" and navigate to files downloaded in **Step 1**. When you've selected all your update files, select "Upload" Then click the "Select" button to return to the "Import update" page.
+[Learn more](import-update.md) about how to import updates.
- :::image type="content" source="media/import-update/import-select-ppr.png" alt-text="Select Uploaded Files" lightbox="media/import-update/import-select-ppr.png":::
- _This screenshot shows the import step and file names may not match the ones used in the example_
+## Create an update group
-8. On the Import update page, review the files to be imported. Then select "Import update" to start the import process.
+1. Go to the **Groups and Deployments** tab at the top of the page.
- :::image type="content" source="media/import-update/import-start-2-ppr.png" alt-text="Import Start" lightbox="media/import-update/import-start-2-ppr.png":::
+ :::image type="content" source="media/create-update-group/ungrouped-devices.png" alt-text="Screenshot that shows ungrouped devices." lightbox="media/create-update-group/ungrouped-devices.png":::
-9. The import process begins, and the screen switches to the "Import History" section. When the `Status` column indicates the import has succeeded, select the "Available Updates" header. You should see your imported update in the list now.
+1. Select the **Add group** button to create a new group.
- :::image type="content" source="media/import-update/update-ready-ppr.png" alt-text="Job Status" lightbox="media/import-update/update-ready-ppr.png":::
-
-[Learn more](import-update.md) about importing updates.
+ :::image type="content" source="media/create-update-group/add-group.png" alt-text="Screenshot that shows device group addition." lightbox="media/create-update-group/add-group.png":::
-## Create update group
+1. Select an **IoT Hub** tag and **Device Class** from the list. Then select **Create group**.
-1. Go to the Groups and Deployments tab at the top of the page.
- :::image type="content" source="media/create-update-group/ungrouped-devices.png" alt-text="Screenshot of ungrouped devices." lightbox="media/create-update-group/ungrouped-devices.png":::
+ :::image type="content" source="media/create-update-group/select-tag.png" alt-text="Screenshot that shows tag selection." lightbox="media/create-update-group/select-tag.png":::
-2. Select the "Add group" button to create a new group.
- :::image type="content" source="media/create-update-group/add-group.png" alt-text="Screenshot of device group addition." lightbox="media/create-update-group/add-group.png":::
+1. After the group is created, you see that the update compliance chart and groups list are updated. The update compliance chart shows the count of devices in various states of compliance: **On latest update**, **New updates available**, and **Updates in progress**. [Learn about update compliance](device-update-compliance.md).
-3. Select an IoT Hub tag and Device Class from the list and then select Create group.
- :::image type="content" source="media/create-update-group/select-tag.png" alt-text="Screenshot of tag selection." lightbox="media/create-update-group/select-tag.png":::
+ :::image type="content" source="media/create-update-group/updated-view.png" alt-text="Screenshot that shows the update compliance view." lightbox="media/create-update-group/updated-view.png":::
-4. Once the group is created, you will see that the update compliance chart and groups list are updated. Update compliance chart shows the count of devices in various states of compliance: On latest update, New updates available, and Updates in Progress. [Learn about update compliance.](device-update-compliance.md)
- :::image type="content" source="media/create-update-group/updated-view.png" alt-text="Screenshot of update compliance view." lightbox="media/create-update-group/updated-view.png":::
+1. You should see your newly created group and any available updates for the devices in the new group. If there are devices that don't meet the device class requirements of the group, they show up in a corresponding invalid group. To deploy the best available update to the new user-defined group from this view, select **Deploy** next to the group.
-5. You should see your newly created group and any available updates for the devices in the new group. If there are devices that don't meet the device class requirements of the group, they will show up in a corresponding invalid group. You can deploy the best available update to the new user-defined group from this view by clicking on the "Deploy" button next to the group.
+[Learn more](create-update-group.md) about how to add tags and create update groups.
-[Learn more](create-update-group.md) about adding tags and creating update groups
+## Deploy the update
-## Deploy update
+1. After the group is created, you should see a new update available for your device group with a link to the update under **Best update**. You might need to refresh once. [Learn more about update compliance](device-update-compliance.md).
-1. Once the group is created, you should see a new update available for your device group, with a link to the update under Best Update (you may need to Refresh once). [Learn More about update compliance.](device-update-compliance.md)
+1. Select the target group by selecting the group name. You're directed to the group details under **Group basics**.
-2. Select the target group by clicking on the group name. You will be directed to the group details under Group basics.
+ :::image type="content" source="media/deploy-update/group-basics.png" alt-text="Screenshot that shows Group details." lightbox="media/deploy-update/group-basics.png":::
- :::image type="content" source="media/deploy-update/group-basics.png" alt-text="Group details" lightbox="media/deploy-update/group-basics.png":::
+1. To initiate the deployment, go to the **Current deployment** tab. Select the **deploy** link next to the desired update from the **Available updates** section. The best available update for a given group is denoted with a **Best** highlight.
-3. To initiate the deployment, go to the Current deployment tab. Click the deploy link next to the desired update from the Available updates section. The best, available update for a given group will be denoted with a "Best" highlight.
+ :::image type="content" source="media/deploy-update/select-update.png" alt-text="Screenshot that shows selecting an update." lightbox="media/deploy-update/select-update.png":::
- :::image type="content" source="media/deploy-update/select-update.png" alt-text="Select update" lightbox="media/deploy-update/select-update.png":::
+1. Schedule your deployment to start immediately or in the future. Then select **Create**.
-4. Schedule your deployment to start immediately or in the future, then select Create.
> [!TIP]
- > By default the Start date/time is 24 hrs from your current time. Be sure to select a different date/time if you want the deployment to begin earlier.
- :::image type="content" source="media/deploy-update/create-deployment.png" alt-text="Create deployment" lightbox="media/deploy-update/create-deployment.png":::
+ > By default, the **Start** date and time is 24 hours from your current time. Be sure to select a different date and time if you want the deployment to begin earlier.
-5. The Status under Deployment details should turn to Active, and the deployed update should be marked with "(deploying)".
+ :::image type="content" source="media/deploy-update/create-deployment.png" alt-text="Screenshot that shows creating a deployment." lightbox="media/deploy-update/create-deployment.png":::
- :::image type="content" source="media/deploy-update/deployment-active.png" alt-text="Deployment active" lightbox="media/deploy-update/deployment-active.png":::
+1. Under **Deployment details**, **Status** turns to **Active**. The deployed update is marked with **(deploying)**.
-6. View the compliance chart. You should see the update is now in progress.
+ :::image type="content" source="media/deploy-update/deployment-active.png" alt-text="Screenshot that shows the deployment as Active." lightbox="media/deploy-update/deployment-active.png":::
-7. After your device is successfully updated, you should see your compliance chart and deployment details update to reflect the same.
+1. View the compliance chart to see that the update is now in progress.
- :::image type="content" source="media/deploy-update/update-succeeded.png" alt-text="Update succeeded" lightbox="media/deploy-update/update-succeeded.png":::
+1. After your device is successfully updated, you see that your compliance chart and deployment details have updated to reflect the same.
-## Monitor an update deployment
+ :::image type="content" source="media/deploy-update/update-succeeded.png" alt-text="Screenshot that shows the update succeeded." lightbox="media/deploy-update/update-succeeded.png":::
-1. Select the Deployment history tab at the top of the page.
+## Monitor the update deployment
- :::image type="content" source="media/deploy-update/deployments-history.png" alt-text="Deployment History" lightbox="media/deploy-update/deployments-history.png":::
+1. Select the **Deployment history** tab at the top of the page.
-2. Select the details link next to the deployment you created.
+ :::image type="content" source="media/deploy-update/deployments-history.png" alt-text="Screenshot that shows Deployment history." lightbox="media/deploy-update/deployments-history.png":::
- :::image type="content" source="media/deploy-update/deployment-details.png" alt-text="Deployment details" lightbox="media/deploy-update/deployment-details.png":::
+1. Select the **details** link next to the deployment you created.
-3. Select Refresh to view the latest status details.
+ :::image type="content" source="media/deploy-update/deployment-details.png" alt-text="Screenshot that shows deployment details." lightbox="media/deploy-update/deployment-details.png":::
+1. Select **Refresh** to view the latest status details.
-You have now completed a successful end-to-end package update using Device Update for IoT Hub on an Ubuntu Server 18.04 x64 device.
+You've now completed a successful end-to-end package update by using Device Update for IoT Hub on an Ubuntu Server 18.04 x64 device.
## Clean up resources
-When no longer needed, clean up your device update account, instance, IoT Hub, and the IoT Edge device (if you created the VM via the Deploy to Azure button). You can do so, by going to each individual resource and selecting "Delete". You need to clean up a device update instance before cleaning up the device update account.
+When no longer needed, clean up your device update account, instance, and IoT hub. Also clean up the IoT Edge device if you created the VM via the **Deploy to Azure** button. To clean up resources, go to each individual resource and select **Delete**. Clean up a device update instance before you clean up the device update account.
## Next steps
-You can use the following tutorials for a simple demonstration of Device Update for IoT Hub:
--- [Image Update: Getting Started with Raspberry Pi 3 B+ Reference Yocto Image](device-update-raspberry-pi.md) extensible via open source to build you own images for other architecture as needed.
-
-- [Proxy Update: Getting Started using Device Update binary agent for downstream devices](device-update-howto-proxy-updates.md)
-
-- [Getting Started Using Ubuntu (18.04 x64) Simulator Reference Agent](device-update-simulator.md)
+Use the following tutorials for a simple demonstration of Device Update for IoT Hub:
-- [Device Update for Azure IoT Hub tutorial for Azure-Real-Time-Operating-System](device-update-azure-real-time-operating-system.md)
+- [Image Update: Getting started with Raspberry Pi 3 B+ reference Yocto image](device-update-raspberry-pi.md), which is extensible via open source so you can build your own images for other architectures as needed.
+- [Proxy Update: Getting started using Device Update binary agent for downstream devices](device-update-howto-proxy-updates.md).
+- [Getting started using Ubuntu (18.04 x64) simulator reference agent](device-update-simulator.md).
+- [Device Update for Azure IoT Hub tutorial for Azure real-time operating system](device-update-azure-real-time-operating-system.md).
iot-hub-device-update Migration Pp To Ppr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/migration-pp-to-ppr.md
For the Public Preview Refresh release, the Device Update agent needs to be upda
:::image type="content" source="media/migration/switch-banner.png" alt-text="Screenshot of banner." lightbox="media/migration/switch-banner.png":::
-2. Create a new IoT/IoT Edge device on the Azure portal. Copy the primary connection string for the device from the device view for later. For more details, refer the [Add Device to IoT Hub](device-update-simulator.md#add-device-to-azure-iot-hub) section.
+2. Create a new IoT/IoT Edge device on the Azure portal. Copy the primary connection string for the device from the device view for later. For more details, refer to the [Add Device to IoT Hub](device-update-simulator.md#add-a-device-to-azure-iot-hub) section.
3. Then, SSH into your device and remove any old Device Update agent. ```bash
For the Public Preview Refresh release, the Device Update agent needs to be upda
2. Delete the existing groups in the public preview portal by navigating through the banner.
-3. Add group tag to the device twin for the updated devices. For more details, refer the [Add a tag to your device](device-update-simulator.md#add-device-to-azure-iot-hub) section.
+3. Add a group tag to the device twin for the updated devices (a CLI sketch follows these steps). For more details, refer to the [Add a tag to your device](device-update-simulator.md#add-a-device-to-azure-iot-hub) section.
4. Recreate the groups in the PPR portal by going to **Add Groups** and selecting the corresponding group tag from the drop-down list.
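As a loosely sketched alternative to editing the device twin in the portal, a recent `azure-iot` Azure CLI extension can set the group tag directly. The availability of the `--tags` parameter and the group name below are assumptions, so verify them against your extension version before relying on this sketch.

```bash
# Assumption: your azure-iot CLI extension supports --tags on device-twin update.
# The hub, device, and group names are placeholders.
az iot hub device-twin update \
  --hub-name {YourIoTHubName} \
  --device-id {YourDeviceId} \
  --tags '{"ADUGroup": "<your-group-name>"}'
```

After the tag is set, the device appears under the corresponding group tag when you recreate the groups in the PPR portal.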
iot-hub Iot Hub Automatic Device Management Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-automatic-device-management-cli.md
Use the following command to create a configuration:
--metrics [metric queries] ```
-* --**config-id** - The name of the configuration that will be created in the IoT hub. Give your configuration a unique name that is up to 128 lowercase letters. Avoid spaces and the following invalid characters: `& ^ [ ] { } \ | " < > /`.
+* --**config-id** - The name of the configuration that will be created in the IoT hub. Give your configuration a unique name that is up to 128 characters long. Lowercase letters and the following special characters are allowed: `-+%_*!'`. Spaces are not allowed.
* --**labels** - Add labels to help track your configuration. Labels are Name, Value pairs that describe your deployment. For example, `HostPlatform, Linux` or `Version, 3.0.1`
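As a rough illustration of these rules, the following Azure CLI sketch creates a configuration with a valid, all-lowercase name and two labels. The hub name, content file, and target condition are placeholders rather than values from this article.

```bash
# Hypothetical example: the hub name, content file, and target condition are placeholders.
az iot hub configuration create \
  --config-id linux-sensors-config \
  --hub-name {YourIoTHubName} \
  --content ./device-configuration.json \
  --target-condition "tags.environment='test'" \
  --priority 10 \
  --labels '{"HostPlatform": "Linux", "Version": "3.0.1"}'
```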
iot-hub Iot Hub Automatic Device Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-automatic-device-management.md
There are five steps to create a configuration. The following sections walk thro
### Name and Label
-1. Give your configuration a unique name that is up to 128 lowercase letters. Avoid spaces and the following invalid characters: `& ^ [ ] { } \ | " < > /`.
+1. Give your configuration a unique name that is up to 128 characters long. Lowercase letters and the following special characters are allowed: `-+%_*!'`. Spaces are not allowed.
2. Add labels to help track your configurations. Labels are **Name**, **Value** pairs that describe your configuration. For example, `HostPlatform, Linux` or `Version, 3.0.1`.
iot-hub Iot Hub Devguide Quotas Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-quotas-throttling.md
Previously updated : 04/05/2021 Last updated : 02/22/2022
IoT Hub enforces other operational limits:
| Device-to-cloud messaging | Maximum message size 256 KB | | Cloud-to-device messaging<sup>1</sup> | Maximum message size 64 KB. Maximum pending messages for delivery is 50 per device. | | Direct method<sup>1</sup> | Maximum direct method payload size is 128 KB. |
-| Automatic device and module configurations<sup>1</sup> | 100 configurations per paid SKU hub. 20 configurations per free SKU hub. |
+| Automatic device and module configurations<sup>1</sup> | 100 configurations per paid SKU hub. 10 configurations per free SKU hub. |
| IoT Edge automatic deployments<sup>1</sup> | 50 modules per deployment. 100 deployments (including layered deployments) per paid SKU hub. 10 deployments per free SKU hub. | | Twins<sup>1</sup> | Maximum size of desired properties and reported properties sections are 32 KB each. Maximum size of tags section is 8 KB. | | Shared access policies | Maximum number of shared access policies is 16. |
key-vault Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Key Vault description: Sample Azure Resource Graph queries for Azure Key Vault showing use of resource types and tables to access Azure Key Vault related resources and properties. Previously updated : 01/20/2022 Last updated : 02/16/2022
load-balancer Quickstart Load Balancer Standard Internal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-portal.md
During the creation of the load balancer, you'll configure:
| Port | Enter **80**. | | Backend port | Enter **80**. | | Backend pool | Select **myBackendPool**. |
- | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
+ | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
| Session persistence | Select **None**. | | Idle timeout (minutes) | Enter or select **15**. | | TCP reset | Select **Enabled**. |
load-balancer Quickstart Load Balancer Standard Internal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-powershell.md
$bepool = New-AzLoadBalancerBackendAddressPoolConfig -Name 'myBackEndPool'
## Create the health probe and place in variable. ## $probe = @{ Name = 'myHealthProbe'
- Protocol = 'http'
+ Protocol = 'tcp'
Port = '80' IntervalInSeconds = '360' ProbeCount = '5'
- RequestPath = '/'
} $healthprobe = New-AzLoadBalancerProbeConfig @probe
$bepool = New-AzLoadBalancerBackendAddressPoolConfig -Name 'myBackEndPool'
## Create the health probe and place in variable. ## $probe = @{ Name = 'myHealthProbe'
- Protocol = 'http'
+ Protocol = 'tcp'
Port = '80' IntervalInSeconds = '360' ProbeCount = '5'
- RequestPath = '/'
} $healthprobe = New-AzLoadBalancerProbeConfig @probe
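If you script the same quickstart with the Azure CLI instead of Azure PowerShell, a TCP health probe on port 80 looks roughly like the following sketch. The resource group and load balancer names are placeholders, not values taken from this quickstart.

```bash
# Placeholder resource group and load balancer names.
az network lb probe create \
  --resource-group <resource-group> \
  --lb-name <load-balancer-name> \
  --name myHealthProbe \
  --protocol tcp \
  --port 80
```

Unlike an HTTP probe, a TCP probe doesn't use a request path, which is why the `RequestPath` line is removed from the PowerShell snippets above.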
load-balancer Quickstart Load Balancer Standard Public Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-portal.md
During the creation of the load balancer, you'll configure:
| Port | Enter **80**. | | Backend port | Enter **80**. | | Backend pool | Select **myBackendPool**. |
- | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
+ | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
| Session persistence | Select **None**. | | Idle timeout (minutes) | Enter or select **15**. | | TCP reset | Select **Enabled**. |
load-balancer Quickstart Load Balancer Standard Public Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-powershell.md
$bepool = New-AzLoadBalancerBackendAddressPoolConfig -Name 'myBackEndPool'
## Create the health probe and place in variable. ## $probe = @{ Name = 'myHealthProbe'
- Protocol = 'http'
+ Protocol = 'tcp'
Port = '80' IntervalInSeconds = '360' ProbeCount = '5'
- RequestPath = '/'
} $healthprobe = New-AzLoadBalancerProbeConfig @probe
$bepool = New-AzLoadBalancerBackendAddressPoolConfig -Name 'myBackEndPool'
## Create the health probe and place in variable. ## $probe = @{ Name = 'myHealthProbe'
- Protocol = 'http'
+ Protocol = 'tcp'
Port = '80' IntervalInSeconds = '360' ProbeCount = '5'
- RequestPath = '/'
} $healthprobe = New-AzLoadBalancerProbeConfig @probe
load-balancer Tutorial Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-gateway-portal.md
In this section, you'll create the configuration and deploy the gateway load bal
| IP Version | Select **IPv4** or **IPv6** depending on your requirements. | | Frontend IP address | Select **MyFrontend**. | | Backend pool | Select **myBackendPool**. |
- | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
+ | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
| Session persistence | Select **None**. | :::image type="content" source="./media/tutorial-gateway-portal/add-load-balancing-rule.png" alt-text="Screenshot of create load-balancing rule." border="true":::
In this section, you'll create the configuration and deploy the gateway load bal
19. Select **Create**.
-## Add network virtual appliances to the Gateway Load Balancer backend pool
-Deploy NVAs through the Azure Marketplace. Once deployed, add the NVA virtual machines to the backend pool by navigating to the Backend pools tab of your Gateway Load Balancer.
+## Add network virtual appliances to the gateway load balancer backend pool
-## Chain load balancer frontend to gateway load balancer
+Deploy NVAs through the Azure Marketplace. Once deployed, add the NVA virtual machines to the backend pool of the gateway load balancer. To add the virtual machines, go to the backend pools tab of your gateway load balancer.
+
+## Chain load balancer frontend to the gateway load balancer
In this example, you'll chain the frontend of a standard load balancer to the gateway load balancer.
You'll add the frontend to the frontend IP of an existing load balancer in your
:::image type="content" source="./media/tutorial-gateway-portal/select-gateway-load-balancer.png" alt-text="Screenshot of addition of gateway load balancer to frontend IP." border="true":::
+## Chain a virtual machine NIC configuration to the gateway load balancer
+
+Instead of chaining a load balancer frontend to the gateway load balancer, chain a virtual machine's NIC configuration to the gateway load balancer. To chain the NIC configuration, add the configuration to the gateway load balancer frontend.
+
+> [!IMPORTANT]
+> A virtual machine must have a public IP address assigned before attempting to chain the NIC configuration to the frontend of the gateway load balancer.
+
+1. In the search box in the Azure portal, enter **Virtual machine**. In the search results, select **Virtual machines**.
+
+2. In **Virtual machines**, select the virtual machine that you want to add to the gateway load balancer. In this example, the virtual machine is named **myVM1**.
+
+3. In the overview of the virtual machine, select **Networking** in **Settings**.
+
+4. In **Networking**, select the name of the network interface attached to the virtual machine. In this example, it's **myvm1229**.
+
+ :::image type="content" source="./media/tutorial-gateway-portal/vm-nic.png" alt-text="Screenshot of virtual machine networking overview." border="true":::
+
+5. In the network interface page, select **IP configurations** in **Settings**.
+
+6. Select **myFrontend** in **Gateway Load balancer**.
+
+ :::image type="content" source="./media/tutorial-gateway-portal/vm-nic-gw-lb.png" alt-text="Screenshot of nic IP configuration." border="true":::
+
+7. Select **Save**.
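For reference, the portal steps above can likely be scripted as well. The following sketch assumes your Azure CLI version supports the `--gateway-lb` parameter on `az network nic ip-config update` and reuses the example names from this tutorial (**myvm1229**, **myFrontend**); treat it as an unverified outline rather than a tested command.

```bash
# Assumption: --gateway-lb is available in your az CLI version; names are examples only.
# Get the resource ID of the gateway load balancer frontend IP configuration.
feid=$(az network lb frontend-ip show \
  --resource-group <resource-group> \
  --lb-name <gateway-load-balancer-name> \
  --name myFrontend \
  --query id --output tsv)

# Chain the VM NIC IP configuration (ipconfig1 is an assumed name) to the gateway load balancer frontend.
az network nic ip-config update \
  --resource-group <resource-group> \
  --nic-name myvm1229 \
  --name ipconfig1 \
  --gateway-lb "$feid"
```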
## Clean up resources
logic-apps Connect Virtual Network Vnet Isolated Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connect-virtual-network-vnet-isolated-environment.md
Title: Connect to Azure virtual networks using an ISE
-description: Create an integration service environment (ISE) to access Azure virtual networks (VNETs) from Azure Logic Apps
+ Title: Connect to Azure virtual networks with an ISE
+description: Create an integration service environment (ISE) to access Azure virtual networks (VNETs) from Azure Logic Apps.
ms.suite: integration - Previously updated : 08/11/2021+ Last updated : 02/22/2022 # Connect to Azure virtual networks from Azure Logic Apps using an integration service environment (ISE)
-For scenarios where your logic apps and integration accounts need access to an [Azure virtual network](../virtual-network/virtual-networks-overview.md), create an [*integration service environment* (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md). An ISE is an environment that uses dedicated storage and other resources that are kept separate from the "global" multi-tenant Logic Apps service. This separation also reduces any impact that other Azure tenants might have on your apps' performance. An ISE also provides you with your own static IP addresses. These IP addresses are separate from the static IP addresses that are shared by the logic apps in the public, multi-tenant service.
+For scenarios where Consumption logic app resources and integration accounts need access to an [Azure virtual network](../virtual-network/virtual-networks-overview.md), create an [*integration service environment* (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md). An ISE is an environment that uses dedicated storage and other resources that are kept separate from the "global" multi-tenant Azure Logic Apps. This separation also reduces any impact that other Azure tenants might have on your apps' performance. An ISE also provides you with your own static IP addresses. These IP addresses are separate from the static IP addresses that are shared by the logic apps in the public, multi-tenant service.
-When you create an ISE, Azure *injects* that ISE into your Azure virtual network, which then deploys the Logic Apps service into your virtual network. When you create a logic app or integration account, select your ISE as their location. Your logic app or integration account can then directly access resources, such as virtual machines (VMs), servers, systems, and services, in your virtual network.
+When you create an ISE, Azure *injects* that ISE into your Azure virtual network, which then deploys Azure Logic Apps into your virtual network. When you create a logic app or integration account, select your ISE as their location. Your logic app or integration account can then directly access resources, such as virtual machines (VMs), servers, systems, and services, in your virtual network.
![Select integration service environment](./media/connect-virtual-network-vnet-isolated-environment/select-logic-app-integration-service-environment.png)
An ISE has increased limits on:
* Message sizes * Custom connector requests
-For more information, see [Limits and configuration for Azure Logic Apps](../logic-apps/logic-apps-limits-and-config.md). To learn more about ISEs, see [Access to Azure Virtual Network resources from Azure Logic Apps](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md).
+For more information, review [Limits and configuration for Azure Logic Apps](logic-apps-limits-and-config.md). To learn more about ISEs, review [Access to Azure Virtual Network resources from Azure Logic Apps](connect-virtual-network-vnet-isolated-environment-overview.md).
This article shows you how to complete these tasks by using the Azure portal:
This article shows you how to complete these tasks by using the Azure portal:
* Create your ISE. * Add extra capacity to your ISE.
-You can also create an ISE by using the [sample Azure Resource Manager quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.logic/integration-service-environment) or by using the Logic Apps REST API, including setting up customer-managed keys:
+You can also create an ISE by using the [sample Azure Resource Manager quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.logic/integration-service-environment) or by using the Azure Logic Apps REST API, including setting up customer-managed keys:
-* [Create an integration service environment (ISE) by using the Logic Apps REST API](../logic-apps/create-integration-service-environment-rest-api.md)
-* [Set up customer-managed keys to encrypt data at rest for ISEs](../logic-apps/customer-managed-keys-integration-service-environment.md)
+* [Create an integration service environment (ISE) by using the Azure Logic Apps REST API](create-integration-service-environment-rest-api.md)
+* [Set up customer-managed keys to encrypt data at rest for ISEs](customer-managed-keys-integration-service-environment.md)
## Prerequisites
-* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
> [!IMPORTANT]
- > Logic apps, built-in triggers, built-in actions, and connectors that run in your ISE use a pricing plan
- > different from the consumption-based pricing plan. To learn how pricing and billing work for ISEs, see the
- > [Logic Apps pricing model](../logic-apps/logic-apps-pricing.md#ise-pricing).
- > For pricing rates, see [Logic Apps pricing](../logic-apps/logic-apps-pricing.md).
+ > Logic app workflows, built-in triggers, built-in actions, and connectors that run in your ISE use a pricing plan
+ > that differs from the Consumption pricing plan. To learn how pricing and billing work for ISEs, review the
+ > [Azure Logic Apps pricing model](logic-apps-pricing.md#ise-pricing).
+ > For pricing rates, review [Azure Logic Apps pricing](logic-apps-pricing.md).
* An [Azure virtual network](../virtual-network/virtual-networks-overview.md) that has four *empty* subnets, which are required for creating and deploying resources in your ISE and are used by these internal and hidden components:
- * Logic Apps Compute
+ * Azure Logic Apps Compute
* Internal App Service Environment (connectors) * Internal API Management (connectors) * Internal Redis for caching and performance You can create the subnets in advance or when you create your ISE so that you can create the subnets at the same time. However, before you create your subnets, make sure that you review the [subnet requirements](#create-subnet).
+ * The Developer ISE SKU uses three subnets, but you still have to create four subnets. The fourth subnet doesn't incur any extra cost.
+ * Make sure that your virtual network [enables access for your ISE](#enable-access) so that your ISE can work correctly and stay accessible.
- * If you use a [network virtual appliance (NVA)](../virtual-network/virtual-networks-udr-overview.md#user-defined), make sure that you don't enable TLS/SSL termination or change the outbound TLS/SSL traffic. Also, make sure that you don't enable inspection for traffic that originates from your ISE's subnet. For more information, see [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md).
+ * If you use a [network virtual appliance (NVA)](../virtual-network/virtual-networks-udr-overview.md#user-defined), make sure that you don't enable TLS/SSL termination or change the outbound TLS/SSL traffic. Also, make sure that you don't enable inspection for traffic that originates from your ISE's subnet. For more information, review [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md).
- * If you want to use custom Domain Name System (DNS) servers for your Azure virtual network, [set up those servers by following these steps](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md) before you deploy your ISE to your virtual network. For more information about managing DNS server settings, see [Create, change, or delete a virtual network](../virtual-network/manage-virtual-network.md#change-dns-servers).
+ * If you want to use custom Domain Name System (DNS) servers for your Azure virtual network, [set up those servers by following these steps](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md) before you deploy your ISE to your virtual network. For more information about managing DNS server settings, review [Create, change, or delete a virtual network](../virtual-network/manage-virtual-network.md#change-dns-servers).
> [!NOTE]
- > If you change your DNS server or DNS server settings, you must restart your ISE so that the ISE can pick up those changes. For more information, see [Restart your ISE](../logic-apps/ise-manage-integration-service-environment.md#restart-ISE).
+ > If you change your DNS server or DNS server settings, you must restart your ISE so that the ISE can pick up those changes. For more information, review [Restart your ISE](ise-manage-integration-service-environment.md#restart-ISE).
<a name="enable-access"></a>
To make sure that your ISE is accessible and that the logic apps in that ISE can
### Network ports used by your ISE
-This table describes the ports that your ISE requires to be accessible and the purpose for those ports. To help reduce complexity when you set up security rules, the table uses [service tags](../virtual-network/service-tags-overview.md) that represent groups of IP address prefixes for a specific Azure service. Where noted, *internal ISE* and *external ISE* refer to the [access endpoint that's selected during ISE creation](connect-virtual-network-vnet-isolated-environment.md#create-environment). For more information, see [Endpoint access](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#endpoint-access).
+This table describes the ports that your ISE requires to be accessible and the purpose for those ports. To help reduce complexity when you set up security rules, the table uses [service tags](../virtual-network/service-tags-overview.md) that represent groups of IP address prefixes for a specific Azure service. Where noted, *internal ISE* and *external ISE* refer to the [access endpoint that's selected during ISE creation](connect-virtual-network-vnet-isolated-environment.md#create-environment). For more information, review [Endpoint access](connect-virtual-network-vnet-isolated-environment-overview.md#endpoint-access).
> [!IMPORTANT] > For all rules, make sure that you set source ports to `*` because source ports are ephemeral.
This table describes the ports that your ISE requires to be accessible and the p
|||--|--|-|-| | Intersubnet communication within virtual network | Address space for the virtual network with ISE subnets | * | Address space for the virtual network with ISE subnets | * | Required for traffic to flow *between* the subnets in your virtual network. <p><p>**Important**: For traffic to flow between the *components* in each subnet, make sure that you open all the ports within each subnet. | | Both: <p>Communication to your logic app <p><p>Runs history for logic app| Internal ISE: <br>**VirtualNetwork** <p><p>External ISE: **Internet** or see **Notes** | * | **VirtualNetwork** | 443 | Rather than use the **Internet** service tag, you can specify the source IP address for these items: <p><p>- The computer or service that calls any request triggers or webhooks in your logic app <p>- The computer or service from where you want to access logic app runs history <p><p>**Important**: Closing or blocking this port prevents calls to logic apps that have request triggers or webhooks. You're also prevented from accessing inputs and outputs for each step in runs history. However, you're not prevented from accessing logic app runs history.|
-| Logic Apps designer - dynamic properties | **LogicAppsManagement** | * | **VirtualNetwork** | 454 | Requests come from the Logic Apps access endpoint's [inbound IP addresses](../logic-apps/logic-apps-limits-and-config.md#inbound) for that region. <p><p>**Important**: If you're working with Azure Government cloud, the **LogicAppsManagement** service tag won't work. Instead, you have to provide the Logic Apps [inbound IP addresses](../logic-apps/logic-apps-limits-and-config.md#azure-government-inbound) for Azure Government. |
-| Network health check | **LogicApps** | * | **VirtualNetwork** | 454 | Requests come from the Logic Apps access endpoint's [inbound IP addresses](../logic-apps/logic-apps-limits-and-config.md#inbound) and [outbound IP addresses](../logic-apps/logic-apps-limits-and-config.md#outbound) for that region. <p><p>**Important**: If you're working with Azure Government cloud, the **LogicApps** service tag won't work. Instead, you have to provide both the Logic Apps [inbound IP addresses](../logic-apps/logic-apps-limits-and-config.md#azure-government-inbound) and [outbound IP addresses](../logic-apps/logic-apps-limits-and-config.md#azure-government-outbound) for Azure Government. |
-| Connector deployment | **AzureConnectors** | * | **VirtualNetwork** | 454 | Required to deploy and update connectors. Closing or blocking this port causes ISE deployments to fail and prevents connector updates and fixes. <p><p>**Important**: If you're working with Azure Government cloud, the **AzureConnectors** service tag won't work. Instead, you have to provide the [managed connector outbound IP addresses](../logic-apps/logic-apps-limits-and-config.md#azure-government-outbound) for Azure Government. |
+| Azure Logic Apps designer - dynamic properties | **LogicAppsManagement** | * | **VirtualNetwork** | 454 | Requests come from the Azure Logic Apps access endpoint's [inbound IP addresses](logic-apps-limits-and-config.md#inbound) for that region. <p><p>**Important**: If you're working with Azure Government cloud, the **LogicAppsManagement** service tag won't work. Instead, you have to provide the Azure Logic Apps [inbound IP addresses](logic-apps-limits-and-config.md#azure-government-inbound) for Azure Government. |
+| Network health check | **LogicApps** | * | **VirtualNetwork** | 454 | Requests come from the Azure Logic Apps access endpoint's [inbound IP addresses](logic-apps-limits-and-config.md#inbound) and [outbound IP addresses](logic-apps-limits-and-config.md#outbound) for that region. <p><p>**Important**: If you're working with Azure Government cloud, the **LogicApps** service tag won't work. Instead, you have to provide both the Azure Logic Apps [inbound IP addresses](logic-apps-limits-and-config.md#azure-government-inbound) and [outbound IP addresses](logic-apps-limits-and-config.md#azure-government-outbound) for Azure Government. |
+| Connector deployment | **AzureConnectors** | * | **VirtualNetwork** | 454 | Required to deploy and update connectors. Closing or blocking this port causes ISE deployments to fail and prevents connector updates and fixes. <p><p>**Important**: If you're working with Azure Government cloud, the **AzureConnectors** service tag won't work. Instead, you have to provide the [managed connector outbound IP addresses](logic-apps-limits-and-config.md#azure-government-outbound) for Azure Government. |
| App Service Management dependency | **AppServiceManagement** | * | **VirtualNetwork** | 454, 455 || | Communication from Azure Traffic Manager | **AzureTrafficManager** | * | **VirtualNetwork** | Internal ISE: 454 <p><p>External ISE: 443 || | Both: <p>Connector policy deployment <p>API Management - management endpoint | **APIManagement** | * | **VirtualNetwork** | 3443 | For connector policy deployment, port access is required to deploy and update connectors. Closing or blocking this port causes ISE deployments to fail and prevents connector updates and fixes. |
If you don't permit access for these dependencies, your ISE deployment fails and
 To prevent asymmetric routing, you must define a route for every IP address that's listed below with **Internet** as the next hop (a CLI sketch follows this list).
- * [Logic Apps inbound and outbound addresses for the ISE region](../logic-apps/logic-apps-limits-and-config.md#firewall-configuration-ip-addresses-and-service-tags)
+ * [Azure Logic Apps inbound and outbound addresses for the ISE region](logic-apps-limits-and-config.md#firewall-configuration-ip-addresses-and-service-tags)
* [Azure IP addresses for connectors in the ISE region, available in this download file](https://www.microsoft.com/download/details.aspx?id=56519) * [App Service Environment management addresses](../app-service/environment/management-addresses.md) * [Azure Traffic Manager management addresses](https://azuretrafficmanagerdata.blob.core.windows.net/probes/azure/probe-ip-ranges.json)
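As a hedged illustration, one way to define such a route with the Azure CLI is shown below. The route table and route names are placeholders, and you'd repeat the command for each address in the lists above.

```bash
# Placeholder names; repeat for every Azure Logic Apps and connector IP address.
az network route-table route create \
  --resource-group <resource-group> \
  --route-table-name <route-table-name> \
  --name logic-apps-outbound-ip-1 \
  --address-prefix <ip-address>/32 \
  --next-hop-type Internet
```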
If you don't permit access for these dependencies, your ISE deployment fails and
You need to enable service endpoints for Azure SQL, Storage, Service Bus, KeyVault, and Event Hubs because you can't send traffic through a firewall to these services.
-* Other inbound and outbound dependencies
+* Other inbound and outbound dependencies
Your firewall *must* allow the following inbound and outbound dependencies:
-
+ * [Azure App Service Dependencies](../app-service/environment/firewall-integration.md#deploying-your-ase-behind-a-firewall) * [Azure Cache Service Dependencies](../azure-cache-for-redis/cache-how-to-premium-vnet.md#what-are-some-common-misconfiguration-issues-with-azure-cache-for-redis-and-virtual-networks) * [Azure API Management Dependencies](../api-management/virtual-network-reference.md)
If you don't permit access for these dependencies, your ISE deployment fails and
| **Resource group** | Yes | <*Azure-resource-group-name*> | A new or existing Azure resource group where you want to create your environment | | **Integration service environment name** | Yes | <*environment-name*> | Your ISE name, which can contain only letters, numbers, hyphens (`-`), underscores (`_`), and periods (`.`). | | **Location** | Yes | <*Azure-datacenter-region*> | The Azure datacenter region where to deploy your environment |
- | **SKU** | Yes | **Premium** or **Developer (No SLA)** | The ISE SKU to create and use. For differences between these SKUs, see [ISE SKUs](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#ise-level). <p><p>**Important**: This option is available only at ISE creation and can't be changed later. |
- | **Additional capacity** | Premium: <br>Yes <p><p>Developer: <br>Not applicable | Premium: <br>0 to 10 <p><p>Developer: <br>Not applicable | The number of extra processing units to use for this ISE resource. To add capacity after creation, see [Add ISE capacity](../logic-apps/ise-manage-integration-service-environment.md#add-capacity). |
- | **Access endpoint** | Yes | **Internal** or **External** | The type of access endpoints to use for your ISE. These endpoints determine whether request or webhook triggers on logic apps in your ISE can receive calls from outside your virtual network. <p><p>For example, if you want to use the following webhook-based triggers, make sure that you select **External**: <p><p>- Azure DevOps <br>- Azure Event Grid <br>- Common Data Service <br>- Office 365 <br>- SAP (ISE version) <p><p>Your selection also affects the way that you can view and access inputs and outputs in your logic app runs history. For more information, see [ISE endpoint access](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#endpoint-access). <p><p>**Important**: You can select the access endpoint only during ISE creation and can't change this option later. |
+ | **SKU** | Yes | **Premium** or **Developer (No SLA)** | The ISE SKU to create and use. For differences between these SKUs, review [ISE SKUs](connect-virtual-network-vnet-isolated-environment-overview.md#ise-level). <p><p>**Important**: This option is available only at ISE creation and can't be changed later. |
+ | **Additional capacity** | Premium: <br>Yes <p><p>Developer: <br>Not applicable | Premium: <br>0 to 10 <p><p>Developer: <br>Not applicable | The number of extra processing units to use for this ISE resource. To add capacity after creation, review [Add ISE capacity](ise-manage-integration-service-environment.md#add-capacity). |
+ | **Access endpoint** | Yes | **Internal** or **External** | The type of access endpoints to use for your ISE. These endpoints determine whether request or webhook triggers on logic apps in your ISE can receive calls from outside your virtual network. <p><p>For example, if you want to use the following webhook-based triggers, make sure that you select **External**: <p><p>- Azure DevOps <br>- Azure Event Grid <br>- Common Data Service <br>- Office 365 <br>- SAP (ISE version) <p><p>Your selection also affects the way that you can view and access inputs and outputs in your logic app runs history. For more information, review [ISE endpoint access](connect-virtual-network-vnet-isolated-environment-overview.md#endpoint-access). <p><p>**Important**: You can select the access endpoint only during ISE creation and can't change this option later. |
| **Virtual network** | Yes | <*Azure-virtual-network-name*> | The Azure virtual network where you want to inject your environment so logic apps in that environment can access your virtual network. If you don't have a network, [create an Azure virtual network first](../virtual-network/quick-create-portal.md). <p><p>**Important**: You can *only* perform this injection when you create your ISE. |
- | **Subnets** | Yes | <*subnet-resource-list*> | An ISE requires four *empty* subnets, which are required for creating and deploying resources in your ISE and are used by internal Logic Apps components, such as connectors and caching for performance. <p>**Important**: Make sure that you [review the subnet requirements before continuing with these steps to create your subnets](#create-subnet). |
+ | **Subnets** | Yes | <*subnet-resource-list*> | Regardless of whether you use the ISE Premium or Developer SKU, your virtual network requires four *empty* subnets for creating and deploying resources in your ISE. These subnets are used by internal Azure Logic Apps components, such as connectors and caching for performance. <p>**Important**: Make sure that you [review the subnet requirements before continuing with these steps to create your subnets](#create-subnet). |
||||| <a name="create-subnet"></a> **Create subnets**
- Your ISE requires four *empty* subnets, which are needed to create and deploy resources in your ISE and are used by internal Logic Apps components, such as connectors and caching for performance. You *can't* change these subnet addresses after you create your environment. If you create and deploy your ISE through the Azure portal, make sure that you don't delegate these subnets to any Azure services. However, if you create and deploy your ISE through the REST API, Azure PowerShell, or an Azure Resource Manager template, you need to [delegate](../virtual-network/manage-subnet-delegation.md) one empty subnet to `Microsoft.integrationServiceEnvironment`. For more information, see [Add a subnet delegation](../virtual-network/manage-subnet-delegation.md).
+ Whether you plan to use ISE Premium or Developer, make sure that your virtual network has four *empty* subnets. These subnets are used for creating and deploying resources in your ISE and are used by internal Azure Logic Apps components, such as connectors and caching for performance. You *can't* change these subnet addresses after you create your environment. If you create and deploy your ISE through the Azure portal, make sure that you don't delegate these subnets to any Azure services. However, if you create and deploy your ISE through the REST API, Azure PowerShell, or an Azure Resource Manager template, you need to [delegate](../virtual-network/manage-subnet-delegation.md) one empty subnet to `Microsoft.integrationServiceEnvironment`. For more information, review [Add a subnet delegation](../virtual-network/manage-subnet-delegation.md).
Each subnet needs to meet these requirements: * Uses a name that starts with either an alphabetic character or an underscore (no numbers), and doesn't use these characters: `<`, `>`, `%`, `&`, `\\`, `?`, `/`. * Uses the [Classless Inter-Domain Routing (CIDR) format](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing).
-
+ > [!IMPORTANT] > > Don't use the following IP address spaces for your virtual network or subnets because they aren't resolvable by Azure Logic Apps:<p>
If you don't permit access for these dependencies, your ISE deployment fails and
> * 168.63.129.16/32 > * 169.254.169.254/32
- * Uses a `/27` in the address space because each subnet requires 32 addresses. For example, `10.0.0.0/27` has 32 addresses because 2<sup>(32-27)</sup> is 2<sup>5</sup> or 32. More addresses won't provide extra benefits. To learn more about calculating addresses, see [IPv4 CIDR blocks](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#IPv4_CIDR_blocks).
+ * Uses a `/27` in the address space because each subnet requires 32 addresses. For example, `10.0.0.0/27` has 32 addresses because 2<sup>(32-27)</sup> is 2<sup>5</sup> or 32. More addresses won't provide extra benefits. To learn more about calculating addresses, review [IPv4 CIDR blocks](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#IPv4_CIDR_blocks).
* If you use [ExpressRoute](../expressroute/expressroute-introduction.md), you have to [create a route table](../virtual-network/manage-route-table.md) that has the following route and link that table with each subnet that's used by your ISE:
If you don't permit access for these dependencies, your ISE deployment fails and
> If the subnets you try to create aren't valid, the Azure portal shows a message, > but doesn't block your progress.
- For more information about creating subnets, see [Add a virtual network subnet](../virtual-network/virtual-network-manage-subnet.md).
+ For more information about creating subnets, review [Add a virtual network subnet](../virtual-network/virtual-network-manage-subnet.md).
1. After Azure successfully validates your ISE information, select **Create**, for example:
If you don't permit access for these dependencies, your ISE deployment fails and
> If you delete your virtual network, Azure generally takes up to two hours > before releasing up your subnets, but this operation might take longer. > When deleting virtual networks, make sure that no resources are still connected.
- > See [Delete virtual network](../virtual-network/manage-virtual-network.md#delete-a-virtual-network).
+ > For more information, review [Delete virtual network](../virtual-network/manage-virtual-network.md#delete-a-virtual-network).
1. To view your environment, select **Go to resource** if Azure doesn't automatically go to your environment after deployment finishes.
If you don't permit access for these dependencies, your ISE deployment fails and
1. On your ISE menu, under **Settings**, select **Properties**.
- 1. Under **Connector outgoing IP addresses**, copy the public IP address ranges, which also appear in this article, [Limits and configuration - Outbound IP addresses](../logic-apps/logic-apps-limits-and-config.md#outbound).
+ 1. Under **Connector outgoing IP addresses**, copy the public IP address ranges, which also appear in this article, [Limits and configuration - Outbound IP addresses](logic-apps-limits-and-config.md#outbound).
1. Create a network security group, if you don't have one already.
- 1. Based on the following information, add an inbound security rule for the public outbound IP addresses that you copied. For more information, see [Tutorial: Filter network traffic with a network security group using the Azure portal](../virtual-network/tutorial-filter-network-traffic.md#create-a-network-security-group).
+ 1. Based on the following information, add an inbound security rule for the public outbound IP addresses that you copied. For more information, review [Tutorial: Filter network traffic with a network security group using the Azure portal](../virtual-network/tutorial-filter-network-traffic.md#create-a-network-security-group).
| Purpose | Source service tag or IP addresses | Source ports | Destination service tag or IP addresses | Destination ports | Notes | |||--|--|-|-| | Permit traffic from connector outbound IP addresses | <*connector-public-outbound-IP-addresses*> | * | Address space for the virtual network with ISE subnets | * | | |||||||
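If you prefer to script this inbound rule rather than follow the linked portal tutorial, a sketch with the Azure CLI might look like the following. The resource names are placeholders, and the source addresses are the connector outbound IP ranges that you copied earlier.

```bash
# Placeholder names; source addresses come from the connector outgoing IP addresses copied earlier.
az network nsg rule create \
  --resource-group <resource-group> \
  --nsg-name <network-security-group-name> \
  --name AllowConnectorOutboundIPs \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol '*' \
  --source-address-prefixes <connector-public-outbound-IP-addresses> \
  --source-port-ranges '*' \
  --destination-address-prefixes <vnet-address-space-with-ISE-subnets> \
  --destination-port-ranges '*'
```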
-1. To check the network health for your ISE, see [Manage your integration service environment](../logic-apps/ise-manage-integration-service-environment.md#check-network-health).
+1. To check the network health for your ISE, review [Manage your integration service environment](ise-manage-integration-service-environment.md#check-network-health).
> [!CAUTION] > If your ISE's network becomes unhealthy, the internal App Service Environment (ASE) that's used by your ISE can also become unhealthy.
If you don't permit access for these dependencies, your ISE deployment fails and
> Resolve any problems that you find, and then restart your ISE. Otherwise, after 90 days, the suspended ASE is deleted, and your > ISE becomes unusable. So, make sure that you keep your ISE healthy to permit the necessary traffic. >
- > For more information, see these topics:
+ > For more information, review these topics:
> > * [Azure App Service diagnostics overview](../app-service/overview-diagnostics.md) > * [Message logging for Azure App Service Environment](../app-service/environment/using-an-ase.md#logging)
-1. To start creating logic apps and other artifacts in your ISE, see [Add resources to integration service environments](../logic-apps/add-artifacts-integration-service-environment-ise.md).
+1. To start creating logic apps and other artifacts in your ISE, review [Add resources to integration service environments](add-artifacts-integration-service-environment-ise.md).
> [!IMPORTANT] > After you create your ISE, managed ISE connectors become available for you to use, but they don't automatically appear > in the connector picker on the Logic App Designer. Before you can use these ISE connectors, you have to manually
- > [add and deploy these connectors to your ISE](../logic-apps/add-artifacts-integration-service-environment-ise.md#add-ise-connectors-environment)
+ > [add and deploy these connectors to your ISE](add-artifacts-integration-service-environment-ise.md#add-ise-connectors-environment)
> so that they appear in the Logic App Designer. ## Next steps
-* [Add resources to integration service environments](../logic-apps/add-artifacts-integration-service-environment-ise.md)
-* [Manage integration service environments](../logic-apps/ise-manage-integration-service-environment.md#check-network-health)
+* [Add resources to integration service environments](add-artifacts-integration-service-environment-ise.md)
+* [Manage integration service environments](ise-manage-integration-service-environment.md#check-network-health)
* Learn more about [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) * Learn about [virtual network integration for Azure services](../virtual-network/virtual-network-for-azure-services.md)
machine-learning Designer Accessibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/designer-accessibility.md
This workflow has been tested with [Narrator](https://support.microsoft.com/help
## Navigate the pipeline graph
-The pipeline graph is organized as a nested list. The outer list is a component list, which describes all the components in the pipeline graph. The inner list is a connection list, which describes all the connections of a specific component.
+The pipeline graph is organized as a nested list. The outer list is a component list, which describes all the components in the pipeline graph. The inner list is a connection list, which describes input/output ports and details for a specific component connection.
-1. In the component list, use the arrow key to switch components.
-1. Use tab to open the connection list for the target component.
-1. Use arrow key to switch between the connection ports for the component.
-1. Use "G" to go to the target component.
+The following keyboard actions help you navigate a pipeline graph:
+
+- Tab: Move to first node > each port of the node > next node.
+- Up/down arrow keys: Move to next or previous node by its position in the graph.
+- Ctrl+G when focus is on a port: Go to the connected port. When there's more than one connection from one port, open a list view to select the target. Use the Esc key to go to the selected target.
## Edit the pipeline graph
The pipeline graph is organized as a nested list. The outer list is a component
1. Use Ctrl+F6 to switch focus from the canvas to the component tree. 1. Find the desired component in the component tree using standard treeview control.
-### Edit a component
+### Connect a component to another component
-To connect a component to another component:
+1. Use the Tab key to move focus to a port.
+
+ The screen reader reads the port information, which includes whether this port is a valid source port to set a connection to other components.
-1. Use Ctrl + Shift + H when targeting a component in the component list to open the connection helper.
-1. Edit the connection ports for the component.
+1. If the current port is a valid source port, press access key + C to start connecting. This command sets this port as the connection source.
+1. Using the Tab key, move focus through every available destination port.
+1. To use the current port as the destination port and set up the connection, press Enter.
+1. To cancel the connection, press Esc.
-To adjust component properties:
+### Edit the settings of a component
-1. Use Ctrl + Shift + E when targeting a component to open the component properties.
-1. Edit the component properties.
+- Use access key + A to open the component settings panel. Then, use the Tab key to move focus to the settings panel, where you can edit the settings.
## Navigation shortcuts
Use the following shortcuts with the access key. For more information on access
| Access key + K | Open "Create inference pipeline" dropdown | | Access key + U | Open "Update inference pipeline" dropdown | | Access key + M | Open more(...) dropdown |
+| Access key + A | Open component settings |
## Next steps
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-forecast.md
Automated machine learning automatically tries different models and algorithms a
### Configuration settings
-Similar to a regression problem, you define standard training parameters like task type, number of iterations, training data, and number of cross-validations. Forecasting tasks require the `time_column_name` and `forecast_horizon` parameters to configure your experiment. If the data includes multiple time series, such as sales data for multiple stores or energy data across different states, automated ML automatically detects this and sets the `time_series_id_column_names` parameter for you. You can also include additional parameters to better configure your run, see the [optional configurations](#optional-configurations) section for more detail on what can be included.
+Similar to a regression problem, you define standard training parameters like task type, number of iterations, training data, and number of cross-validations. Forecasting tasks require the `time_column_name` and `forecast_horizon` parameters to configure your experiment. If the data includes multiple time series, such as sales data for multiple stores or energy data across different states, automated ML automatically detects this and sets the `time_series_id_column_names` parameter (preview) for you. You can also include additional parameters to better configure your run, see the [optional configurations](#optional-configurations) section for more detail on what can be included.
+
+> [!IMPORTANT]
+> Automatic time series identification is currently in public preview. This preview version is provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
| Parameter&nbsp;name | Description | |-|-|
machine-learning How To Use Mlflow Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-databricks.md
To link your ADB workspace to a new or existing Azure Machine Learning workspace
1. Select the **Link Azure Machine Learning workspace** button on the bottom right. ![Link Azure DB and Azure Machine Learning workspaces](./media/how-to-use-mlflow-azure-databricks/link-workspaces.png)
+
## MLflow Tracking in your workspaces
-After you instantiate your workspace, MLflow Tracking is automatically set to be tracked in
-all of the following places:
+After you instantiate your workspace, MLflow Tracking is automatically set to be tracked in all of the following places:
* The linked Azure Machine Learning workspace. * Your original ADB workspace.
mlflow.set_experiment(experimentName)
```
+> [!NOTE]
+> MLflow Tracking in a [private link enabled Azure Machine Learning workspace](how-to-configure-private-link.md) is not supported.
+ ### Set MLflow Tracking to only track in your Azure Machine Learning workspace If you prefer to manage your tracked experiments in a centralized location, you can set MLflow tracking to **only** track in your Azure Machine Learning workspace.
machine-learning Tutorial Create Secure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace.md
Previously updated : 02/04/2022 Last updated : 02/23/2022
In this tutorial, you accomplish the following tasks:
## Prerequisites
-* Familiarity with Azure Virtual Networks and IP networking
-* While most of the steps in this article use the Azure portal or the Azure Machine Learning studio, some steps use the Azure CLI (v1) extension for Machine Learning.
+* Familiarity with Azure Virtual Networks and IP networking. If you are not familiar, try the [Fundamentals of computer networking](/learn/modules/network-fundamentals/) module.
+* While most of the steps in this article use the Azure portal or the Azure Machine Learning studio, some steps use the Azure CLI extension for Machine Learning v2.
## Create a virtual network
To create a virtual network, use the following steps:
> > The workspace and other dependency services will go into the training subnet. They can still be used by resources in other subnets, such as the scoring subnet.
- 1. Look at the default __IPv4 address space__ value. In the screenshot, the value is __172.17.0.0/16__. __The value may be different for you__. While you can use a different value, the rest of the steps in this tutorial are based on the 172.17.0.0/16 value.
+ 1. Look at the default __IPv4 address space__ value. In the screenshot, the value is __172.17.0.0/16__. __The value may be different for you__. While you can use a different value, the rest of the steps in this tutorial are based on the __172.16.0.0/16 value__.
+
+ > [!IMPORTANT]
+ > We do not recommend using an address in the 172.17.0.1/16 range if you plan on using Azure Kubernetes Services for deployment with this cluster. The Docker bridge in Azure Kubernetes Services uses 172.17.0.1/16 as its default. Other ranges may also conflict depending on what you want to connect to the virtual network. For example, a conflict occurs if you plan to connect your on-premises network to the VNet and that network also uses the 172.16.0.0/16 range. Ultimately, it is up to __you__ to plan your network infrastructure.
+ 1. Select the __Default__ subnet and then select __Remove subnet__. :::image type="content" source="./media/tutorial-create-secure-workspace/delete-default-subnet.png" alt-text="Screenshot of deleting default subnet":::
- 1. To create a subnet to contain the workspace, dependency services, and resources used for training, select __+ Add subnet__ and use the following values for the subnet:
+ 1. To create a subnet to contain the workspace, dependency services, and resources used for training, select __+ Add subnet__ and set the subnet name and address range. The following are the values used in this tutorial:
* __Subnet name__: Training
- * __Subnet address range__: 172.17.0.0/24
+ * __Subnet address range__: 172.16.0.0/24
:::image type="content" source="./media/tutorial-create-secure-workspace/vnet-add-training-subnet.png" alt-text="Screenshot of Training subnet":::
To create a virtual network, use the following steps:
> > If you plan on using a _private endpoint_ to add these services to the VNet, you do not need to select these entries. The steps in this article use a private endpoint for these services, so you do not need to select them when following these steps.
- 1. To create a subnet for compute resources used to score your models, select __+ Add subnet__ again, and use the follow values:
+ 1. To create a subnet for compute resources used to score your models, select __+ Add subnet__ again, and set the name and address range:
* __Subnet name__: Scoring
- * __Subnet address range__: 172.17.1.0/24
+ * __Subnet address range__: 172.16.1.0/24
:::image type="content" source="./media/tutorial-create-secure-workspace/vnet-add-scoring-subnet.png" alt-text="Screenshot of Scoring subnet":::
To create a virtual network, use the following steps:
1. Select __Security__. For __BastionHost__, select __Enable__. [Azure Bastion](../bastion/bastion-overview.md) provides a secure way to access the VM jump box you will create inside the VNet in a later step. Use the following values for the remaining fields: * __Bastion name__: A unique name for this Bastion instance
- * __AzureBastionSubnetAddress space__: 172.17.2.0/27
+ * __AzureBastionSubnetAddress space__: 172.16.2.0/27
* __Public IP address__: Create a new public IP address. Leave the other fields at the default values.
To create a virtual network, use the following steps:
* __Name__: A unique name for this private endpoint. * __Target sub-resource__: blob * __Virtual network__: The virtual network you created earlier.
- * __Subnet__: Training (172.17.0.0/24)
+ * __Subnet__: Training (172.16.0.0/24)
* __Private DNS integration__: Yes * __Private DNS Zone__: privatelink.blob.core.windows.net
To create a virtual network, use the following steps:
* __Name__: A unique name for this private endpoint. * __Target sub-resource__: Vault * __Virtual network__: The virtual network you created earlier.
- * __Subnet__: Training (172.17.0.0/24)
+ * __Subnet__: Training (172.16.0.0/24)
* __Private DNS integration__: Yes * __Private DNS Zone__: privatelink.vaultcore.azure.net
To create a virtual network, use the following steps:
* __Name__: A unique name for this private endpoint. * __Target sub-resource__: registry * __Virtual network__: The virtual network you created earlier.
- * __Subnet__: Training (172.17.0.0/24)
+ * __Subnet__: Training (172.16.0.0/24)
* __Private DNS integration__: Yes * __Private DNS Zone__: privatelink.azurecr.io
To create a virtual network, use the following steps:
* __Name__: A unique name for this private endpoint. * __Target sub-resource__: amlworkspace * __Virtual network__: The virtual network you created earlier.
- * __Subnet__: Training (172.17.0.0/24)
+ * __Subnet__: Training (172.16.0.0/24)
* __Private DNS integration__: Yes * __Private DNS Zone__: Leave the two private DNS zones at the default values of __privatelink.api.azureml.ms__ and __privatelink.notebooks.azure.net__.
For more information on creating a compute cluster and compute cluster, includin
## Configure image builds When Azure Container Registry is behind the virtual network, Azure Machine Learning can't use it to directly build Docker images (used for training and deployment). Instead, configure the workspace to use the compute cluster you created earlier. Use the following steps to create a compute cluster and configure the workspace to use it to build images:
When Azure Container Registry is behind the virtual network, Azure Machine Learn
1. From the Cloud Shell, use the following command to install the Azure CLI extension for Machine Learning v2: ```azurecli-interactive
- az extension add -n azure-cli-ml
+ az extension add -n ml
``` 1. To update the workspace to use the compute cluster to build Docker images, run the following command. Replace `myresourcegroup` with your resource group, `myworkspace` with your workspace, and `mycomputecluster` with the compute cluster to use: ```azurecli-interactive az ml workspace update \
- -g docs-ml-rg \
- --name docs-ml-ws \
- --image-build-compute cpu-cluster
+ -n myworkspace \
+ -g myresourcegroup \
+ -i mycomputecluster
``` > [!NOTE]
marketplace Analytics Sample Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-sample-queries.md
Previously updated : 1/25/2022 Last updated : 2/23/2022 # Sample queries for programmatic analytics
For details about the column names, attributes, and descriptions, refer to the f
- [Usage details table](usage-dashboard.md#usage-details-table) - [Revenue details table](revenue-dashboard.md#data-dictionary-table) - [Quality of service table](quality-of-service-dashboard.md#offer-deployment-details)
+- [Customer retention table](customer-retention-dashboard.md#dictionary-of-data-terms)
## Customers report queries
These sample queries apply to the Customers report.
| **Query Description** | **Sample Query** | | | |
-| Active customers of the partner until the date you choose | `SELECT DateAcquired,CustomerCompanyName,CustomerId FROM ISVCustomer WHERE IsActive = 1` |
-| Churned customers of the partner until the date you choose | `SELECT DateAcquired,CustomerCompanyName,CustomerId FROM ISVCustomer WHERE IsActive = 0` |
+| List customer details for active customers of the partner until the date you choose | `SELECT DateAcquired,CustomerCompanyName,CustomerId FROM ISVCustomer WHERE IsActive = 1` |
+| List customer details for churned customers of the partner until the date you choose | `SELECT DateAcquired,CustomerCompanyName,CustomerId FROM ISVCustomer WHERE IsActive = 0` |
| List of new customers from a specific geography in the last six months | `SELECT DateAcquired,CustomerCompanyName,CustomerId FROM ISVCustomer WHERE DateAcquired <= '2020-06-30' AND CustomerCountryRegion = 'United States'` | |||
These sample queries apply to the Usage report.
| **Query Description** | **Sample Query** | | | |
-| Virtual Machine (VM) normalized usage for "Billed through Azure" Marketplace License type for the last 6M | `SELECT MonthStartDate, NormalizedUsage FROM ISVUsage WHERE MarketplaceLicenseType = 'Billed Through Azure' AND OfferType NOT IN ('Azure Applications', 'SaaS') TIMESPAN LAST_6_MONTHS` |
-| VM Raw usage for "Billed through Azure" Marketplace License type for the last 12M | `SELECT MonthStartDate, RawUsage FROM ISVUsage WHERE MarketplaceLicenseType = 'Billed Through Azure' AND OfferType NOT IN ('Azure Applications', 'SaaS') TIMESPAN LAST_1_YEAR` |
-| VM Normalized usage for "Bring Your Own License" Marketplace License type for the last 6M | `SELECT MonthStartDate, NormalizedUsage FROM ISVUsage WHERE MarketplaceLicenseType = 'Bring Your Own License' AND OfferType NOT IN ('Azure Applications', 'SaaS') TIMESPAN LAST_6_MONTHS` |
-| VM Raw usage for "Bring Your Own License" Marketplace License type for the last 6M | `SELECT MonthStartDate, RawUsage FROM ISVUsage WHERE MarketplaceLicenseType = 'Bring Your Own License' AND OfferType NOT IN ('Azure Applications', 'SaaS') TIMESPAN LAST_6_MONTHS` |
-| Based on Usage Date, daily total normalized usage and "Estimated Extended Charges (PC/CC)" for Paid plans for the last month | `SELECT UsageDate, NormalizedUsage, EstimatedExtendedChargePC FROM ISVUsage WHERE SKUBillingType = 'Paid' ORDER BY UsageDate DESC TIMESPAN LAST_MONTH` |
-| Based on Usage Date, daily total raw usage and "Estimated Extended Charges (PC/CC)" for Paid plans for the last month | `SELECT UsageDate, RawUsage, EstimatedExtendedChargePC FROM ISVUsage WHERE SKUBillingType = 'Paid' ORDER BY UsageDate DESC TIMESPAN LAST_MONTH` |
-| For a specific Offer Name, VM Normalized usage for "Billed through Azure" Marketplace License type for the last 6M | `SELECT OfferName, NormalizedUsage FROM ISVUsage WHERE MarketplaceLicenseType = 'Billed Through Azure' AND OfferName = 'Example Offer Name' TIMESPAN LAST_6_MONTHS` |
-| For a specific Offer Name, metered usage for the last 6M | `SELECT OfferName, MeteredUsage FROM ISVUsage WHERE OfferName = 'Example Offer Name' AND OfferType IN ('SaaS', 'Azure Applications') TIMESPAN LAST_6_MONTHS` |
+| List usage details with Virtual Machine (VM) normalized usage for "Billed through Azure" Marketplace License type for the last 6M | `SELECT MonthStartDate, NormalizedUsage FROM ISVUsage WHERE MarketplaceLicenseType = 'Billed Through Azure' AND OfferType NOT IN ('Azure Applications', 'SaaS') TIMESPAN LAST_6_MONTHS` |
+| List usage details with VM Raw usage for "Billed through Azure" Marketplace License type for the last 12M | `SELECT MonthStartDate, RawUsage FROM ISVUsage WHERE MarketplaceLicenseType = 'Billed Through Azure' AND OfferType NOT IN ('Azure Applications', 'SaaS') TIMESPAN LAST_1_YEAR` |
+| List usage details with VM Normalized usage for "Bring Your Own License" Marketplace License type for the last 6M | `SELECT MonthStartDate, NormalizedUsage FROM ISVUsage WHERE MarketplaceLicenseType = 'Bring Your Own License' AND OfferType NOT IN ('Azure Applications', 'SaaS') TIMESPAN LAST_6_MONTHS` |
+| List usage details with VM Raw usage for "Bring Your Own License" Marketplace License type for the last 6M | `SELECT MonthStartDate, RawUsage FROM ISVUsage WHERE MarketplaceLicenseType = 'Bring Your Own License' AND OfferType NOT IN ('Azure Applications', 'SaaS') TIMESPAN LAST_6_MONTHS` |
+| List usage details with Usage Date, daily total normalized usage and "Estimated Extended Charges (PC/CC)" for Paid plans for the last month | `SELECT UsageDate, NormalizedUsage, EstimatedExtendedChargePC FROM ISVUsage WHERE SKUBillingType = 'Paid' ORDER BY UsageDate DESC TIMESPAN LAST_MONTH` |
+| List usage details with Usage Date, daily total raw usage and "Estimated Extended Charges (PC/CC)" for Paid plans for the last month | `SELECT UsageDate, RawUsage, EstimatedExtendedChargePC FROM ISVUsage WHERE SKUBillingType = 'Paid' ORDER BY UsageDate DESC TIMESPAN LAST_MONTH` |
+| List usage details with Offer Name, VM Normalized usage for "Billed through Azure" Marketplace License type for the last 6M | `SELECT OfferName, NormalizedUsage FROM ISVUsage WHERE MarketplaceLicenseType = 'Billed Through Azure' AND OfferName = 'Example Offer Name' TIMESPAN LAST_6_MONTHS` |
+| List usage details with Offer Name, metered usage for the last 6M | `SELECT OfferName, MeteredUsage FROM ISVUsage WHERE OfferName = 'Example Offer Name' AND OfferType IN ('SaaS', 'Azure Applications') TIMESPAN LAST_6_MONTHS` |
+| List all offer usage details of all offers for the last month | `SELECT OfferType, OfferName, SKU, IsPrivateOffer, UsageReference, UsageDate, RawUsage, EstimatedPricePC FROM ISVUsage ORDER BY UsageDate DESC TIMESPAN LAST_MONTH` |
+| List all offer usage details of private offers for the last month | `SELECT OfferType, OfferName, SKU, IsPrivateOffer, UsageReference, UsageDate, RawUsage, EstimatedPricePC FROM ISVUsage WHERE IsPrivateOffer = '1' ORDER BY UsageDate DESC TIMESPAN LAST_MONTH` |
||| ## Orders report queries
These sample queries apply to the Orders report.
| **Query Description** | **Sample Query** | | | |
-| Orders report for Azure License Type as "Enterprise" for the last 6M | `SELECT OrderId, OrderPurchaseDate FROM ISVOrder WHERE AzureLicenseType = 'Enterprise' TIMESPAN LAST_6_MONTHS` |
-| Orders report for Azure License Type as "Pay as You Go" for the last 6M | `SELECT OrderId, OrderPurchaseDate FROM ISVOrder WHERE AzureLicenseType = 'Pay as You Go' TIMESPAN LAST_6_MONTHS` |
-| Orders report for specific offer name for the last 6M | `SELECT OrderId, OrderPurchaseDate FROM ISVOrder WHERE OfferName = 'Example Offer Name' TIMESPAN LAST_6_MONTHS` |
-| Orders report for active orders for the last 6M | `SELECT OrderId, OrderPurchaseDate FROM ISVOrder WHERE OrderStatus = 'Active' TIMESPAN LAST_6_MONTHS` |
-| Orders report for canceled orders for the last 6M | `SELECT OrderId, OrderPurchaseDate FROM ISVOrder WHERE OrderStatus = 'Cancelled' TIMESPAN LAST_6_MONTHS` |
-| Orders report with term start, term end date and estimatedcharges, currency | `SELECT OrderId, TermStartId, TermEndId, estimatedcharges from ISVOrderV2 WHERE OrderStatus = 'Active' TIMESPAN LAST_6_MONTHS` |
-| Orders report for trial orders active for the last 6M | `SELECT OrderId from ISVOrderV2 WHERE OrderStatus = 'Active' and HasTrial = 'True' TIMESPAN LAST_6_MONTHS` |
+| List Order details for Azure License Type as "Enterprise" for the last 6M | `SELECT AssetId, PurchaseRecordId, PurchaseRecordLineItemId, OrderPurchaseDate FROM ISVOrder WHERE AzureLicenseType = 'Enterprise' TIMESPAN LAST_6_MONTHS` |
+| List Order details for Azure License Type as "Pay as You Go" for the last 6M | `SELECT OfferName, AssetId, PurchaseRecordId, PurchaseRecordLineItemId, OrderPurchaseDate, OrderStatus, OrderCancelDate FROM ISVOrder WHERE AzureLicenseType = 'Pay as You Go' TIMESPAN LAST_6_MONTHS` |
+| List Order details for specific offer name for the last 6M | `SELECT AssetId, PurchaseRecordId, PurchaseRecordLineItemId, OrderPurchaseDate FROM ISVOrder WHERE OfferName = 'Contoso test Services' TIMESPAN LAST_6_MONTHS` |
+| List Order details for active orders for the last 6M | `SELECT OfferName, AssetId, PurchaseRecordId, PurchaseRecordLineItemId, OrderPurchaseDate FROM ISVOrder WHERE OrderStatus = 'Active' TIMESPAN LAST_6_MONTHS` |
+| List Order details for cancelled orders for the last 6M | `SELECT OfferName, AssetId, PurchaseRecordId, PurchaseRecordLineItemId, OrderPurchaseDate FROM ISVOrder WHERE OrderStatus = 'Cancelled' TIMESPAN LAST_6_MONTHS` |
+| List Order details with quantity, term start date, term end date, estimated charges, and currency for the last 6M | `SELECT AssetId, Quantity, PurchaseRecordId, PurchaseRecordLineItemId, TermStartDate, TermEndDate, BilledRevenue, Currency from ISVOrder WHERE OrderStatus = 'Active' TIMESPAN LAST_6_MONTHS` |
+| List Order details for trial orders active for the last 6M | `SELECT AssetId, Quantity, PurchaseRecordId, PurchaseRecordLineItemId from ISVOrder WHERE OrderStatus = 'Active' and IsTrial = 'True' TIMESPAN LAST_6_MONTHS` |
+| List Order details for all offers that are active for the last 6M | `SELECT OfferName, SKU, IsPrivateOffer, AssetId, PurchaseRecordId, PurchaseRecordLineItemId, OrderPurchaseDate, BilledRevenue FROM ISVOrder WHERE OrderStatus = 'Active' TIMESPAN LAST_6_MONTHS` |
+| List Order details for private offers active for the last 6M | `SELECT OfferName, SKU, IsPrivateOffer, AssetId, PurchaseRecordId, PurchaseRecordLineItemId, OrderPurchaseDate, BilledRevenue FROM ISVOrder WHERE IsPrivateOffer = '1' and OrderStatus = 'Active' TIMESPAN LAST_6_MONTHS` |
||| ## Revenue report queries
These sample queries apply to the Revenue report.
| **Query Description** | **Sample Query** | | | |
-| Show billed revenue of the partner for last 1 month | `SELECT BillingAccountId, OfferName, OfferType, Revenue, EarningAmountCC, EstimatedRevenueUSD, EarningAmountUSD, PayoutStatus, PurchaseRecordId, LineItemId,TransactionAmountCC,TransactionAmountUSD, Quantity,Units FROM ISVRevenue TIMESPAN LAST_MONTH` |
+| List billed revenue of the partner for last 1 month | `SELECT BillingAccountId, OfferName, OfferType, Revenue, EarningAmountCC, EstimatedRevenueUSD, EarningAmountUSD, PayoutStatus, PurchaseRecordId, LineItemId,TransactionAmountCC,TransactionAmountUSD, Quantity,Units FROM ISVRevenue TIMESPAN LAST_MONTH` |
| List estimated revenue in USD of all transactions with sent status in last 3 months | `SELECT BillingAccountId, OfferName, OfferType, EstimatedRevenueUSD, EarningAmountUSD, PayoutStatus, PurchaseRecordId, LineItemId, TransactionAmountUSD FROM ISVRevenue where PayoutStatus='Sent' TIMESPAN LAST_3_MONTHS` | | List of non-trial transactions for subscription-based billing model | `SELECT BillingAccountId, OfferName, OfferType, TrialDeployment, EstimatedRevenueUSD, EarningAmountUSD FROM ISVRevenue WHERE TrialDeployment='False' and BillingModel='SubscriptionBased'` | |||
This sample query applies to the Quality of service report.
| **Query Description** | **Sample Query** | | | - |
-| Show deployment status of offers for last 6 months | `SELECT OfferId, Sku, DeploymentStatus, DeploymentCorrelationId, SubscriptionId, CustomerTenantId, CustomerName, TemplateType, StartTime, EndTime, DeploymentDurationInMilliSeconds, DeploymentRegion FROM ISVQualityOfService TIMESPAN LAST_6_MONTHS` |
+| List deployment status of offers for last 6 months | `SELECT OfferId, Sku, DeploymentStatus, DeploymentCorrelationId, SubscriptionId, CustomerTenantId, CustomerName, TemplateType, StartTime, EndTime, DeploymentDurationInMilliSeconds, DeploymentRegion FROM ISVQualityOfService TIMESPAN LAST_6_MONTHS` |
+|||
+
+## Customer retention report queries
+
+This sample query applies to the Customer retention report.
+
+| **Query Description** | **Sample Query** |
+| | - |
+| List customer retention details for last 6 months | `SELECT OfferCategory, OfferName, ProductId, DeploymentMethod, ServicePlanName, Sku, SkuBillingType, CustomerId, CustomerName, CustomerCompanyName, CustomerCountryName, CustomerCountryCode, CustomerCurrencyCode, FirstUsageDate, AzureLicenseType, OfferType, Offset FROM ISVOfferRetention TIMESPAN LAST_6_MONTHS` |
+| List usage activity and revenue details of all customers in last 6 months | `SELECT OfferCategory, OfferName, Sku, ProductId, OfferType, FirstUsageDate, Offset, CustomerId, CustomerName, CustomerCompanyName, CustomerCountryName, CustomerCountryCode, CustomerCurrencyCode FROM ISVOfferRetention TIMESPAN LAST_6_MONTHS` |
||| ## Next steps
marketplace Analytics System Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-system-queries.md
description: Learn about system queries you can use to programmatically get anal
Previously updated : 01/20/2022 Last updated : 02/23/2022
The following system queries can be used in the [Create Report API](analytics-programmatic-access.md#create-report-api) directly with a `QueryId`. The system queries are like the export reports in Partner Center for a six month (6M) computation period.
-For more information on the column names, attributes, and description, see these articles about commercial marketplace analytics:
+For more information on the column names, attributes, and descriptions, see these articles about commercial marketplace analytics:
- [Customers dashboard](customer-dashboard.md#customer-details-table) - [Orders dashboard](orders-dashboard.md#orders-details-table)
For more information on the column names, attributes, and description, see these
- [Marketplace Insights dashboard](insights-dashboard.md#marketplace-insights-details-table) - [Revenue dashboard](revenue-dashboard.md) - [Quality of Service dashboard](quality-of-service-dashboard.md)
+- [Customer retention dashboard](customer-retention-dashboard.md#dictionary-of-data-terms)
The following sections provide various report queries.
The following sections provide various report queries.
`SELECT OfferId,Sku,DeploymentStatus,DeploymentCorrelationId,SubscriptionId,CustomerTenantId,CustomerName,TemplateType,StartTime,EndTime,DeploymentDurationInMilliSeconds,DeploymentRegion,ResourceProvider,ResourceUri,ResourceGroup,ResourceType,ResourceName,ErrorCode,ErrorName,ErrorMessage,DeepErrorCode,DeepErrorMessage FROM ISVQualityOfService TIMESPAN LAST_3_MONTHS`
+## Customer retention report query
+
+**Report description**: Customer retention for the last 6M
+
+**QueryID**: `6d37d057-06f3-45aa-a971-3a34415e8511`
+
+**Report query**:
+
+`SELECT OfferCategory,OfferName,ProductId,DeploymentMethod,ServicePlanName,Sku,SkuBillingType,CustomerId,CustomerName,CustomerCompanyName,CustomerCountryName,CustomerCountryCode,CustomerCurrencyCode,FirstUsageDate,AzureLicenseType,OfferType,Offset FROM ISVOfferRetention TIMESPAN LAST_6_MONTHS`
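As a rough illustration of how a system query like this might be scheduled programmatically, here is a hedged Python sketch of a call to the Create Report API referenced above. The endpoint URL, request-body field names, and token acquisition are assumptions for illustration only; verify them against the programmatic-access article before use.

```python
import json
import urllib.request

# Placeholder values -- replace with a real Azure AD access token, and verify
# the endpoint and body schema against the Create Report API documentation.
ACCESS_TOKEN = "<azure-ad-access-token>"
CREATE_REPORT_URL = "https://api.partnercenter.microsoft.com/insights/v1/cmp/ScheduledReport"  # assumed endpoint

payload = {
    "ReportName": "CustomerRetentionLast6M",
    "Description": "Customer retention for the last 6M",
    # System query ID from this article; when a QueryId is supplied,
    # no custom query text should be needed.
    "QueryId": "6d37d057-06f3-45aa-a971-3a34415e8511",
    "ExecuteNow": True,
}

request = urllib.request.Request(
    CREATE_REPORT_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Submit the report request and print the API response.
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read()))
```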
+ ## Next steps - [APIs for accessing commercial marketplace analytics data](analytics-available-apis.md)
marketplace Azure Vm Use Approved Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-use-approved-base.md
Previously updated : 12/07/2021 Last updated : 02/23/2022 # Create a virtual machine using an approved base
Windows OS disks are generalized with the [sysprep](/windows-hardware/manufactur
> [!WARNING] > After you run sysprep, turn the VM off until it's deployed because updates may run automatically. This shutdown will avoid subsequent updates from making instance-specific changes to the operating system or installed services. For more information about running sysprep, see [Generalize a Windows VM](../virtual-machines/generalize.md#windows).
+> [!NOTE]
+> If you have Microsoft Defender for Cloud (Azure Defender) enabled on the subscription where you are creating the VM to be captured, and you do not want VMs created from this image to be enrolled in the Defender for Endpoint portal, make sure you disable Microsoft Defender for Cloud on the subscription or for the VM itself. If it is not disabled, any VM created from this image will be enrolled in the Defender for Endpoint portal, even if the VM is deployed to a different tenant without Microsoft Defender for Cloud.
+ ### For Linux 1. Remove the Azure Linux agent.
marketplace Dynamics 365 Operations Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-operations-offer-listing.md
Title: Configure Dynamics 365 Operations Apps offer listing details on Microsoft AppSource (Azure Marketplace)
-description: Configure Dynamics 365 Operations Apps offer listing details on Microsoft AppSource (Azure Marketplace).
+ Title: Configure Dynamics 365 for Operations Apps offer listing details on Microsoft AppSource (Azure Marketplace)
+description: Configure Dynamics 365 for Operations Apps offer listing details on Microsoft AppSource (Azure Marketplace).
Last updated 12/03/2021
-# Configure Dynamics 365 Operations Apps offer listing details
+# Configure Dynamics 365 for Operations Apps offer listing details
This page lets you define the offer details such as offer name, description, links, contacts, logo, and screenshots.
marketplace Isv Csp Reseller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/isv-csp-reseller.md
The offer setup page lets you define private offer terms, notification contact,
> - Once your private offer ends, the CSP partners you authorize can continue to sell your marketplace offer at the list price. > - Private offers can be extended to a maximum of 400 CSP partners tenants.
+5. Optional: To extend a private offer to individual customers of a CSP partner, choose **All customers selected** for that CSP partner.
+
+ 1. Choose **Select customers**.
+ 2. Under **Provide customer tenant ID**, select **+Add**.
 + 3. Enter the customer's tenant ID. You can add up to 25 customers for the CSP partner; the CSP partner will need to provide the customer tenant IDs.
+ 4. Click **Add**.
+ ### Review and Submit This page is where you can review all the information you've provided.
marketplace Usage Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/usage-dashboard.md
Previously updated : 10/11/2021 Last updated : 02/18/2022 # Usage dashboard in commercial marketplace analytics
You can find a month range selection at the top-right corner of each page. Custo
[ ![Illustrates the Month filters on the Usage dashboard.](./media/usage-dashboard/usage-dashboard-filters.png) ](./media/usage-dashboard/usage-dashboard-filters.png#lightbox)
+### Selector for Usage type
+
+You can choose to analyze VM normalized usage, VM raw usage, Metered usage, and Metered usage anomalies from the dropdown picker at the top of the dashboard.
+
+[ ![Screenshot of the dropdown picker on the Usage dashboard.](./media/usage-dashboard/usage-type-picker.png) ](./media/usage-dashboard/usage-type-picker.png#lightbox)
+
+### Public and private offer
+
+You can choose to view subscription and order details of public offers, private offers, or both by selecting the **Public Offers** sub-tab, **Private Offers** sub-tab, and the **All** sub-tab respectively.
+
+[ ![Screenshot of the three tabs on the Usage dashboard.](./media/usage-dashboard/usage-dashboard-tabs.png) ](./media/usage-dashboard/usage-dashboard-tabs.png#lightbox)
+ ### Usage trend In this section, you will find total usage hours and trend for all your offers that are consumed by your customers during the selected computation period. Metrics and growth trends are represented by a line chart. Show the value for each month by hovering over the line on the chart. The percentage value below the usage metrics in the widget represents the amount of growth or decline during the selected computation period.
Note the following:
- The heatmap has a supplementary grid to view the details of customer count, order count, and normalized usage hours in the specific location. - You can search and select a country/region in the grid to zoom to the location in the map. Revert to the original view by selecting the **Home** button in the map.
+### Usage page filters
+
+The Usage page filters are applied at the Usage page level. You can select one or multiple filters to render the chart for the criteria you choose to view and the data you want to see in the usage details grid / export. Filters are applied on the data extracted for the month range that you selected in the upper-right corner of the Usage page.
+
+The widgets and export report for VM Raw usage are similar to VM Normalized usage with the following distinctions:
+
+- Normalized usage hours are defined as the usage hours normalized to account for the number of VM cores: [number of VM cores] x [hours of raw usage]. VMs designated as "SHAREDCORE" use 1/6 (or 0.1666) as the [number of VM cores] multiplier.
+- Raw usage hours are defined as the amount of time VMs have been running in terms of usage units.
+
+> [!NOTE]
+> You can use the download icon in the upper-right corner of any widget to download the data. You can provide feedback on each of the widgets by selecting the thumbs up or thumbs down icon.
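The relationship between raw and normalized usage can be summarized in a small Python sketch; the figures are illustrative only, and the SHAREDCORE multiplier follows the 1/6 factor described above.

```python
def normalized_usage_hours(raw_usage_hours: float, vm_cores: int, shared_core: bool = False) -> float:
    """Normalized usage = [number of VM cores] x [hours of raw usage].

    VMs designated as SHAREDCORE use 1/6 (0.1666...) as the core multiplier.
    """
    core_multiplier = (1 / 6) if shared_core else vm_cores
    return core_multiplier * raw_usage_hours

# Example: 100 raw hours on a 4-core VM vs. a shared-core VM.
print(normalized_usage_hours(100, vm_cores=4))                     # 400.0
print(normalized_usage_hours(100, vm_cores=1, shared_core=True))   # ~16.67
```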
+ ### Usage details table
-The **usage details** table displays a numbered list of the top 1,000 usage records sorted by usage. Note the following:
+The **usage details** table displays a numbered list of the top 500 usage records sorted by usage. Note the following:
- Each column in the grid is sortable.-- The data can be extracted to a .TSV or .CSV file if the count of the records is less than 1,000.-- If records count is over 1,000, export data will be asynchronously placed in a downloads page that will be available for the next 30 days.
+- The data can be extracted to a .TSV or .CSV file if the count of the records is less than 500.
+- If the record count is over 500, exported data will be asynchronously placed on a downloads page that will be available for the next 30 days.
- Apply filters to **detailed usage data** to display only the data you are interested in. Filter data by country/region, sales channel, Marketplace license type, usage type, offer name, offer type, free trials, Marketplace subscription ID, customer ID, and company name. _**Table 1: Dictionary of data terms**_
_**Table 1: Dictionary of data terms**_
| Customer Country | Customer Country/Region | The country/region name provided by the customer. Country/region could be different than the country/region in a customer's Azure subscription. | CustomerCountry | | Is Preview SKU | Is Preview SKU | The value shows if you have tagged the SKU as "preview". Value will be "Yes" if the SKU has been tagged accordingly, and only Azure subscriptions authorized by you can deploy and use this image. Value will be "No" if the SKU has not been identified as "preview". | IsPreviewSKU | | SKU Billing Type | SKU Billing Type | The Billing type associated with each SKU in the offer. The possible values are:<ul><li>Free</li><li>Paid</li></ul> | SKUBillingType |
-| IsInternal | Deprecated | Deprecated | Deprecated |
| VM Size | Virtual Machine Size | For VM-based offer types, this entity signifies the size of the VM associated with the SKU of the offer. | VMSize | | Cloud Instance Name | Cloud Instance Name | The Microsoft Cloud in which a VM deployment occurred. | CloudInstanceName |
-| ServicePlanName | Deprecated | Deprecated (Same definition as SKU) | ServicePlanName |
| Offer Name | Offer Name | The name of the commercial marketplace offering. | OfferName |
-| DeploymentMethod | Deprecated | Deprecated (Same definition as Offer type) | DeploymentMethod |
+| Is Private Offer | Is Private Offer | Indicates whether a marketplace offer is a private or a public offer:<br><ul><li>0 value indicates false</li><li>1 value indicates true</li></ul> | IsPrivateOffer |
+| Customer name | Customer name | Name of the billed-to customer. | CustomerName |
| Customer Company Name | Customer Company Name | The company name provided by the customer. The name could be different than the name in a customer's Azure subscription. | CustomerCompanyName | | Usage Date | Usage Date | The date of usage event generation for usage-based assets. | UsageDate | | IsMultisolution | Is Multisolution | Signifies whether the offer is a Multisolution offer type. | IsMultisolution |
_**Table 1: Dictionary of data terms**_
| RawUsage | Raw Usage | The total raw usage units consumed by the asset that is deployed by the customer.<br>Raw usage hours are defined as the amount of time VMs have been running in terms of usage units. | RawUsage | | Estimated Extended Charge (CC) | Estimated Extended Charge in Customer Currency | Signifies the charges associated with the usage. The column is the product of Price (CC) and Usage Quantity. | EstimatedExtendedChargeCC | | Estimated Extended Charge (PC) | Estimated Extended Charge in Payout Currency | Signifies the charges associated with the usage. The column is the product of Estimated Price (PC) and Usage Quantity. | EstimatedExtended ChargePC |
-| Meter Id | Meter Id | Signifies the meter ID for the offer. | MeterId |
-| Partner Center Detected Anomaly | Partner Center Detected Anomaly | **Applicable for offers with custom meter dimensions**.<br>Signifies whether the publisher reported overage usage for the offer's custom meter dimension that was flagged as an anomaly by Partner Center. The possible values are: <ul><li>0 (Not an anomaly)</li><li>1 (Anomaly)</li></ul>_If the publisher doesn't have offers with custom meter dimensions, and exports this column through programmatic access, then the value will be null._ | PartnerCenterDetectedAnomaly |
+| Meter Id | Meter Id | **Applicable for offers with custom meter dimensions.**<br>Signifies the meter ID for the offer. | MeterId |
+| Metered Dimension | Metered Dimension | **Applicable for offers with custom meter dimensions.**<br>Metered dimension of the custom meter. For example, user/device - billing unit | MeterDimension |
+| Partner Center Detected Anomaly | Partner Center Detected Anomaly | **Applicable for offers with custom meter dimensions**.<br>Signifies whether the publisher reported overage usage for the offer's custom meter dimension that was flagged as an anomaly by Partner Center. The possible values are: <ul><li>0 (Not an anomaly)</li><li>1 (Anomaly)</li></ul>_If the publisher doesn't have offers with custom meter dimensions, and exports this column through programmatic access, then the value will be null._ | PartnerCenterDetectedAnomaly |
| Publisher Marked Anomaly | Publisher Marked Anomaly | **Applicable for offers with custom meter dimensions**.<br>Signifies whether the publisher acknowledged the overage usage by the customer for the offer's custom meter dimension as genuine or false. The possible values are:<ul><li>0 (Publisher has marked it as not an anomaly)</li><li>1 (Publisher has marked it as an anomaly)</li></ul>_If the publisher doesn't have offers with custom meter dimensions, and exports this column through programmatic access, then the value will be null._ | PublisherMarkedAnomaly | | New Reported Usage | New Reported Usage | **Applicable for offers with custom meter dimensions**.<br>For overage usage by the customer for the offer's custom meter dimension identified as anomalous by the publisher. This field specifies the new overage usage reported by the publisher.<br>_If the publisher doesn't have offers with custom meter dimensions, and exports this column through programmatic access, then the value will be null._ | NewReportedUsage | | Action Taken At | Action Taken At | **Applicable for offers with custom meter dimensions**.<br>Specifies the time when the publisher acknowledged the overage usage by the customer for the offer's custom meter dimension as genuine or false.<br>_If the publisher doesn't have offers with custom meter dimensions, and exports this column through programmatic access, then the value will be null._ | ActionTakenAt | | Action Taken By | Action Taken By | **Applicable for offers with custom meter dimensions**.<br>Specifies the person who acknowledged the overage usage by the customer for the offer's custom meter dimension as genuine or false.<br>_If the publisher doesn't have offers with custom meter dimensions, and exports this column through programmatic access, then the value will be null._ | ActionTakenBy | | Estimated Financial Impact (USD) | Estimated Financial Impact in USD | **Applicable for offers with custom meter dimensions**.<br>When Partner Center flags an overage usage by the customer for the offer's custom meter dimension as anomalous, the field specifies the estimated financial impact (in USD) of the anomalous overage usage.<br>_If the publisher doesn't have offers with custom meter dimensions, and exports this column through programmatic means, then the value will be null._ | EstimatedFinancialImpactUSD |
-| Asset Id | Asset Id | The unique identifier of the customer order for your commercial marketplace service. Virtual machine usage-based offers are not associated with an order. | Asset Id |
+| Asset Id | Asset Id | **Applicable for offers with custom meter dimensions**.<br>The unique identifier of the customer's order subscription for your commercial marketplace service. Virtual machine usage-based offers are not associated with an order. | Asset Id |
| N/A | Resource Id | The fully qualified ID of the resource, including the resource name and resource type. Note that this is a data field available in download reports only.<br>Use the format:<br> /subscriptions/{guid}/resourceGroups/{resource-group-name}/{resource-provider-namespace}/{resource-type}/{resource-name}<br>**Note**: This field will be deprecated on 10/20/2021. | N/A | |||||
mysql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-connectivity-architecture.md
The following table lists the gateway IP addresses of the Azure Database for MyS
| Australia East | 13.75.149.87, 40.79.161.1 | | | | Australia South East | 13.73.109.251, 13.77.49.32, 13.77.48.10 | | | | Brazil South | 191.233.201.8, 191.233.200.16 | | 104.41.11.5 |
-| Canada Central | 40.85.224.249, 52.228.35.221 | | |
+| Canada Central | 13.71.168.32|| 40.85.224.249, 52.228.35.221 |
| Canada East | 40.86.226.166, 52.242.30.154 | | | | Central US | 23.99.160.139, 52.182.136.37, 52.182.136.38 | 13.67.215.62 | | | China East | 139.219.130.35 | | |
mysql Concepts Pricing Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-pricing-tiers.md
You can monitor your I/O consumption in the Azure portal or by using Azure CLI c
### Reaching the storage limit
-Servers with less than equal to 100 GB provisioned storage are marked read-only if the free storage is less than 5% of the provisioned storage size. Servers with more than 100 GB provisioned storage are marked read only when the free storage is less than 5 GB.
+Servers with less than or equal to 100 GB provisioned storage are marked read-only if the free storage is less than 5% of the provisioned storage size. Servers with more than 100 GB provisioned storage are marked read-only when the free storage is less than 5 GB.
For example, if you have provisioned 110 GB of storage, and the actual utilization goes over 105 GB, the server is marked read-only. Alternatively, if you have provisioned 5 GB of storage, the server is marked read-only when the free storage reaches less than 256 MB.
We recommend that you turn on storage auto-grow or to set up an alert to notify
### Storage auto-grow
-Storage auto-grow prevents your server from running out of storage and becoming read-only. If storage auto grow is enabled, the storage automatically grows without impacting the workload. For servers with less than equal to 100 GB provisioned storage, the provisioned storage size is increased by 5 GB when the free storage is below 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10 GB of the provisioned storage size. Maximum storage limits as specified above apply.
+Storage auto-grow prevents your server from running out of storage and becoming read-only. If storage auto grow is enabled, the storage automatically grows without impacting the workload. For servers with less than or equal to 100 GB provisioned storage, the provisioned storage size is increased by 5 GB when the free storage is below 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10 GB of the provisioned storage size. Maximum storage limits as specified above apply.
For example, if you have provisioned 1000 GB of storage, and the actual utilization goes over 990 GB, the server storage size is increased to 1050 GB. Alternatively, if you have provisioned 10 GB of storage, the storage size is increase to 15 GB when less than 1 GB of storage is free.
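A small Python sketch of those thresholds may make the examples above easier to follow; the GB figures mirror the rules described in this section and are not an official sizing tool.

```python
def is_read_only(provisioned_gb: float, used_gb: float) -> bool:
    """Server is marked read-only when free storage drops below the threshold:
    - <= 100 GB provisioned: free storage < 5% of provisioned size
    - >  100 GB provisioned: free storage < 5 GB
    """
    free_gb = provisioned_gb - used_gb
    threshold_gb = provisioned_gb * 0.05 if provisioned_gb <= 100 else 5
    return free_gb < threshold_gb

def auto_grow_increment(provisioned_gb: float, free_gb: float) -> float:
    """With storage auto-grow enabled, storage grows when free space is low:
    - <= 100 GB provisioned: +5 GB when free storage < 10% of provisioned size
    - >  100 GB provisioned: +5% when free storage < 10 GB
    """
    if provisioned_gb <= 100:
        return 5 if free_gb < provisioned_gb * 0.10 else 0
    return provisioned_gb * 0.05 if free_gb < 10 else 0

print(is_read_only(110, 106))        # True: ~4 GB free on a >100 GB server
print(auto_grow_increment(1000, 9))  # 50.0: grows 1000 GB -> 1050 GB
```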
mysql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-business-continuity.md
The table below illustrates the features that Flexible server offers.
| **Feature** | **Description** | **Restrictions** | | - | -- | |
-| **Backup & Recovery** | Flexible server automatically performs daily backups of your database files and continuously backs up transaction logs. Backups can be retained for any period between 1 to 35 days. You will be able to restore your database server to any point in time within your backup retention period. Recovery time will be dependent on the size of the data to restore + the time to perform log recovery. Refer to [Concepts - Backup and Restore](./concepts-backup-restore.md) for more details. |Backup data remains within the region |
+| **Backup & Recovery** | Flexible server automatically performs daily backups of your database files and continuously backs up transaction logs. Backups can be retained for any period between 1 to 35 days. You'll be able to restore your database server to any point in time within your backup retention period. Recovery time will be dependent on the size of the data to restore + the time to perform log recovery. Refer to [Concepts - Backup and Restore](./concepts-backup-restore.md) for more details. |Backup data remains within the region |
| **Local redundant backup** | Flexible server backups are automatically and securely stored in a local redundant storage within a region and in same availability zone. The locally redundant backups replicate the server backup data files three times within a single physical location in the primary region. Locally redundant backup storage provides at least 99.999999999% (11 nines) durability of objects over a given year. Refer to [Concepts - Backup and Restore](./concepts-backup-restore.md) for more details.| Applicable in all regions | | **Geo-redundant backup** | Flexible server backups can be configured as geo-redundant at create time. Enabling Geo-redundancy replicates the server backup data files in the primary region’s paired region to provide regional resiliency. Geo-redundant backup storage provides at least 99.99999999999999% (16 nines) durability of objects over a given year. Refer to [Concepts - Backup and Restore](./concepts-backup-restore.md) for more details.| Available in all [Azure paired regions](overview.md#azure-regions) | | **Zone redundant high availability** | Flexible server can be deployed in high availability mode, which deploys primary and standby servers in two different availability zones within a region. This protects from zone-level failures and also helps with reducing application downtime during planned and unplanned downtime events. Data from the primary server is synchronously replicated to the standby replica. During any downtime event, the database server is automatically failed over to the standby replica. Refer to [Concepts - High availability](./concepts-high-availability.md) for more details. | Supported in general purpose and memory optimized compute tiers. Available only in regions where multiple zones are available.|
Here are some planned maintenance scenarios that incur downtime:
| **Scenario** | **Process**| | : | :-- |
-| **Compute scaling (User)**| When you perform compute scaling operation, a new flexible server is provisioned using the scaled compute configuration. In the existing database server, active checkpoints are allowed to complete, client connections are drained, any uncommitted transactions are canceled, and then it is shut down. The storage is then attached to the new server and the database is started which performs recovery if necessary before accepting client connections. |
+| **Compute scaling (User)**| When you perform compute scaling operation, a new flexible server is provisioned using the scaled compute configuration. In the existing database server, active checkpoints are allowed to complete, client connections are drained, any uncommitted transactions are canceled, and then it's shut down. The storage is then attached to the new server and the database is started which performs recovery if necessary before accepting client connections. |
| **New software deployment (Azure)** | New features rollout or bug fixes automatically happen as part of service's planned maintenance, and you can schedule when those activities to happen. For more information, see to the [documentation](https://aka.ms/servicehealthpm), and also check your [portal](https://aka.ms/servicehealthpm) | | **Minor version upgrades (Azure)** | Azure Database for MySQL automatically patches database servers to the minor version determined by Azure. It happens as part of service's planned maintenance. This would incur a short downtime in terms of seconds, and the database server is automatically restarted with the new minor version. For more information, see to the [documentation](../concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).|
When the flexible server is configured with **zone redundant high availability**
## Unplanned downtime mitigation
-Unplanned downtimes can occur as a result of unforeseen failures, including underlying hardware fault, networking issues, and software bugs. If the database server goes down unexpectedly, if configured with high availability [HA], then the standby replica is activated. If not, then a new database server is automatically provisioned. While an unplanned downtime cannot be avoided, flexible server mitigates the downtime by automatically performing recovery operations at both database server and storage layers without requiring human intervention.
+Unplanned downtimes can occur as a result of unforeseen failures, including underlying hardware fault, networking issues, and software bugs. If the database server goes down unexpectedly and the server is configured with high availability [HA], the standby replica is activated. If not, a new database server is automatically provisioned. While an unplanned downtime can't be avoided, flexible server mitigates the downtime by automatically performing recovery operations at both the database server and storage layers without requiring human intervention.
### Unplanned downtime: failure scenarios and service recovery
Here are some unplanned failure scenarios and the recovery process:
| **Scenario** | **Recovery process [non-HA]** | **Recovery process [HA]** | | :- | - | - |
-| **Database server failure** | If the database server is down because of some underlying hardware fault, active connections are dropped, and any inflight transactions are aborted. Azure will attempt to restart the database server. If that succeeds, then the database recovery is performed. If the restart fails, the database server will be attempted to restart on another physical node. <br /> <br /> The recovery time (RTO) is dependent on various factors including the activity at the time of fault such as large transaction and the amount of recovery to be performed during the database server startup process. <br /> <br /> Applications using the MySQL databases need to be built in a way that they detect and retry dropped connections and failed transactions. When the application retries, the connections are directed to the newly created database server. | If the database server failure is detected, the standby database server is activated, thus reducing downtime. Refer to [HA concepts page](concepts-high-availability.md) for more details. RTO is expected to be 60-120 s, with RPO=0 |
-| **Storage failure** | Applications do not see any impact for any storage-related issues such as a disk failure or a physical block corruption. As the data is stored in 3 copies, the copy of the data is served by the surviving storage. Block corruptions are automatically corrected. If a copy of data is lost, a new copy of the data is automatically created. | For non-recoverable errors, the flexible server is failed over to the standby replica to reduce downtime. Refer to [HA concepts page](./concepts-high-availability.md) for more details. |
-| **Logical/user errors** | Recovery from user errors, such as accidentally dropped tables or incorrectly updated data, involves performing a [point-in-time recovery](concepts-backup-restore.md) (PITR), by restoring and recovering the data until the time just before the error had occurred.<br> <br> You can recover a deleted MySQL flexible server resource within 5 days from the time of server deletion. For a detailed guide on how to restore a deleted server, [refer documented steps] (../flexible-server/how-to-restore-dropped-server.md). To protect server resources post deployment from accidental deletion or unexpected changes, administrators can leverage [management locks](../../azure-resource-manager/management/lock-resources.md). | These user errors are not protected with high availability due to the fact that all user operations are replicated to the standby too. |
-| **Availability zone failure** | While it is a rare event, if you want to recover from a zone-level failure, you can perform point-in-time recovery using the backup and choosing custom restore point to get to the latest data. A new flexible server will be deployed in another zone. The time taken to restore depends on the previous backup and the number of transaction logs to recover. | Flexible server performs automatic failover to the standby site. Refer to [HA concepts page](./concepts-high-availability.md) for more details. |
-| **Region failure** | While it is a rare event, if you want to recover from a region-level failure, you can perform database recovery by creating a new server using the latest geo-redundant backup available under the same subscription to get to the latest data. A new flexible server will be deployed to the selected region. The time taken to restore depends on the previous backup and the number of transaction logs to recover. | While it is a rare event, if you want to recover from a region-level failure, you can perform database recovery by creating a new server using the latest geo-redundant backup available under the same subscription to get to the latest data. The target flexible server for an existing HA server will be deployed as a Non-HA server to the Azure paired region. The time taken to restore depends on the previous backup and the number of transaction logs to recover. |
+| **Database server failure** | If the database server is down because of an underlying hardware fault, active connections are dropped and any in-flight transactions are aborted. Azure will attempt to restart the database server. If that succeeds, the database recovery is performed. If the restart fails, an attempt is made to restart the database server on another physical node.<br /> <br />The recovery time (RTO) is dependent on various factors, including the activity at the time of fault (such as a large transaction) and the amount of recovery to be performed during the database server start-up process. The RPO will be zero, as no data loss is expected for committed transactions. Applications using the MySQL databases need to be built in a way that they detect and retry dropped connections and failed transactions. When the application retries, the connections are directed to the newly created database server.<br /> <br />Other available options are restore from backup. You can use either PITR or Geo restore from the paired region. <br /> **PITR:** RTO: varies, RPO = 0 sec <br /> **Geo restore:** RTO: varies, RPO < 15 mins. <br /> <br />You can also use a [read replica](./concepts-read-replicas.md) as a DR solution. You can [stop the replication](./concepts-read-replicas.md#stop-replication), which makes the read replica a read-write (standalone) server, and then redirect the application traffic to this database. The RTO in most cases will be a few minutes and RPO < 5 min. RTO and RPO can be much higher in some cases depending on various factors, including latency between sites, the amount of data to be transmitted, and importantly the primary database write workload. | If a database server failure or non-recoverable error is detected, the standby database server is activated, thus reducing downtime. Refer to the [HA concepts page](concepts-high-availability.md) for more details. RTO is expected to be 60-120 s, with RPO=0. <br /> <br /> **Note:** *The options for the recovery process [non-HA] also apply here. Read replicas are currently not supported for HA-enabled servers.*|
+| **Storage failure** | Applications do not see any impact from storage-related issues such as a disk failure or a physical block corruption. As the data is stored in three copies, the copy of the data is served by the surviving storage. Block corruptions are automatically corrected. If a copy of data is lost, a new copy of the data is automatically created.<br /> <br />In a rare or worst-case scenario, if all copies are corrupted, you can use Geo restore (paired region). RPO would be < 15 mins and RTO would vary.<br /> <br />You can also use a read replica as a DR solution as detailed above. | For this scenario, the options are the same as for the recovery process [non-HA]. Read replicas are currently not supported for HA-enabled servers. |
+| **Logical/user errors** | Recovery from user errors, such as accidentally dropped tables or incorrectly updated data, involves performing a [point-in-time recovery](concepts-backup-restore.md) (PITR), by restoring and recovering the data until the time just before the error had occurred.<br> <br> You can recover a deleted MySQL flexible server resource within five days from the time of server deletion. For a detailed guide on how to restore a deleted server, refer to the [documented steps](../flexible-server/how-to-restore-dropped-server.md). To protect server resources post deployment from accidental deletion or unexpected changes, administrators can leverage [management locks](../../azure-resource-manager/management/lock-resources.md). | These user errors aren't protected with high availability since all user operations are replicated to the standby too. For this scenario, the options are the same as for the recovery process [non-HA]. |
+| **Availability zone failure** | While it's a rare event, if you want to recover from a zone-level failure, you can perform a Geo restore from the paired region. RPO would be < 15 mins and RTO would vary. <br /> <br /> You can also use a [read replica](./concepts-read-replicas.md) as a DR solution by creating the replica in another availability zone. RTO/RPO is similar to what is detailed above. | If you have enabled zone-redundant HA, flexible server performs automatic failover to the standby site. Refer to [HA concepts](./concepts-high-availability.md) for more details. RTO is expected to be 60-120 s, with RPO=0.<br /> <br />Other available options are restore from backup. You can use either PITR or Geo restore from the paired region.<br />**PITR:** RTO: varies, RPO = 0 sec <br />**Geo restore:** RTO: varies, RPO < 15 mins <br /> <br /> **Note:** *If you have same-zone HA enabled, the options are the same as for the recovery process [non-HA].* |
+| **Region failure** | While it's a rare event, if you want to recover from a region-level failure, you can perform database recovery by creating a new server using the latest geo-redundant backup available under the same subscription to get to the latest data. A new flexible server will be deployed to the selected region. The time taken to restore depends on the previous backup and the number of transaction logs to recover. RPO in most cases would be < 15 mins and RTO would vary. | For this scenario, the options are the same as for the recovery process [non-HA]. |
## Next steps
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/overview.md
For more information, see [Networking concepts](concepts-networking.md).
## Monitoring and alerting
-The flexible server service is equipped with built-in performance monitoring and alerting features. All Azure metrics have a one-minute frequency, and each metric provides 30 days of history. You can configure alerts on the metrics. The service exposes host server metrics to monitor resources utilization, allows configuring slow query logs. Using these tools, you can quickly optimize your workloads, and configure your server for best performance. Azure Database for MySQL Flexible Server is allows you to visualize the slow query and audit logs data using Azure Monitor workbooks. With workbooks, you get a flexible canvas for analyzing data and creating rich visual reports within the Azure portal. Azure Database for MySQL Flexible Server provides three available workbook templates out of the box viz Server Overview, [Auditing](tutorial-configure-audit.md) and [Query Performance Insights](tutorial-query-performance-insights.md). [Query Performance Insights](tutorial-query-performance-insights.md) workbook is designed to help you spend less time troubleshooting database performance by providing such information as:
+The flexible server service is equipped with built-in performance monitoring and alerting features. All Azure metrics have a one-minute frequency, and each metric provides 30 days of history. You can configure alerts on the metrics. The service exposes host server metrics to monitor resource utilization and allows configuring slow query logs. Using these tools, you can quickly optimize your workloads and configure your server for best performance. Azure Database for MySQL Flexible Server allows you to visualize the slow query and audit logs data using Azure Monitor workbooks. With workbooks, you get a flexible canvas for analyzing data and creating rich visual reports within the Azure portal. Azure Database for MySQL Flexible Server provides three workbook templates out of the box: Server Overview, [Auditing](tutorial-configure-audit.md), and [Query Performance Insights](tutorial-query-performance-insights.md). The [Query Performance Insights](tutorial-query-performance-insights.md) workbook is designed to help you spend less time troubleshooting database performance by providing such information as:
* Top N long-running queries and their trends.
* The query details: view the query text as well as the history of execution with minimum, maximum, average, and standard deviation query time.
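As a sketch of the alerting capability mentioned above, the following commands create a CPU alert on a flexible server. The server name, alert name, and threshold are illustrative assumptions:

```azurecli
# Look up the server's resource ID, then alert when average CPU exceeds 80 percent.
serverId=$(az mysql flexible-server show --resource-group $resourceGroup --name mydemoserver --query id --output tsv)

az monitor metrics alert create \
  --name cpu-over-80 \
  --resource-group $resourceGroup \
  --scopes $serverId \
  --condition "avg cpu_percent > 80" \
  --description "Alert when CPU usage on the flexible server is above 80 percent"
```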
mysql Sample Cli Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-audit-logs.md
ms.devlang: azurecli Previously updated : 09/15/2021 Last updated : 02/10/2022 # Configure audit logs on an Azure Database for MySQL - Flexible Server using Azure CLI
-This sample CLI script enables [audit logs](../concepts-audit-logs.md) on an Azure Database for MySQL - Flexible Server.
+This sample CLI script enables [audit logs](../concepts-audit-logs.md) on an Azure Database for MySQL - Flexible Server.
---- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
## Sample script
-Edit the highlighted lines in the script with your values for variables.
-
-[!code-azurecli-interactive[main](../../../../cli_scripts/mysql/flexible-server/configure-logs/configure-audit-logs.sh?highlight=7,10-11 "Configure audit logs on Azure Database for MySQL - Flexible Server.")]
+### Run the script
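The linked sample script isn't reproduced here. As a rough sketch under assumed names, enabling audit logs comes down to setting server parameters such as the following:

```azurecli
# Turn on the audit log and capture connection events on an existing flexible server (mydemoserver is illustrative).
az mysql flexible-server parameter set --resource-group $resourceGroup --server-name mydemoserver --name audit_log_enabled --value ON
az mysql flexible-server parameter set --resource-group $resourceGroup --server-name mydemoserver --name audit_log_events --value CONNECTION
```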
-## Clean up deployment
-After the sample script has been run, the following code snippet can be used to clean up the resources.
+## Clean up resources
-[!code-azurecli-interactive[main](../../../../cli_scripts/mysql/flexible-server/configure-logs/clean-up-resources.sh?highlight=4 "Clean up resources.")]
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps - Try additional scripts: [Azure CLI samples for Azure Database for MySQL - Flexible Server](../sample-scripts-azure-cli.md)-- For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
+- For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
mysql Sample Cli Change Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-change-server-parameters.md
ms.devlang: azurecli Previously updated : 09/15/2021 Last updated : 02/10/2022 # List and change server parameters of an Azure Database for MySQL - Flexible Server using Azure CLI This sample CLI script lists all available [server parameters](../concepts-server-parameters.md) as well as their allowable values for Azure Database for MySQL - Flexible Server, and sets the *max_connections* and global *time_zone* parameters to values other than the default ones. --- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-Edit the highlighted lines in the script with your values for variables.
-[!code-azurecli-interactive[main](../../../../cli_scripts/mysql/flexible-server/manage-server/change-server-parameters.sh?highlight=7,10-11 "Change server parameters for Azure Database for MySQL - Flexible Server.")]
+### Run the script
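For orientation, a minimal sketch of the parameter operations this sample covers. The server name and parameter values are illustrative, not taken from the script:

```azurecli
# List all server parameters, then change max_connections and the global time zone.
az mysql flexible-server parameter list --resource-group $resourceGroup --server-name mydemoserver --output table
az mysql flexible-server parameter set --resource-group $resourceGroup --server-name mydemoserver --name max_connections --value 250
az mysql flexible-server parameter set --resource-group $resourceGroup --server-name mydemoserver --name time_zone --value "+02:00"
```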
-## Clean up deployment
-After the sample script has been run, the following code snippet can be used to clean up the resources.
+## Clean up resources
-[!code-azurecli-interactive[main](../../../../cli_scripts/mysql/flexible-server/manage-server/clean-up-resources.sh?highlight=4 "Clean up resources.")]
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
mysql Sample Cli Create Connect Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-create-connect-private-access.md
ms.devlang: azurecli Previously updated : 09/15/2021 Last updated : 02/10/2022 # Create an Azure Database for MySQL - Flexible Server in a VNet using Azure CLI This sample CLI script creates an Azure Database for MySQL - Flexible Server in a VNet ([private access connectivity method](../concepts-networking-vnet.md)) and connects to the server from a VM within the VNet.
-> [!NOTE]
+> [!NOTE]
> The connectivity method cannot be changed after creating the server. For example, if you create server using *Private access (VNet Integration)*, you cannot change to *Public access (allowed IP addresses)* after creation. To learn more about connectivity methods, see [Networking concepts](../concepts-networking.md). --- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-Edit the highlighted lines in the script with your values for variables.
+
+### Run the script
++
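As an illustrative sketch (not the sample script itself), creating a flexible server with private access comes down to supplying a virtual network and subnet at creation time. All names and the password placeholder below are assumptions:

```azurecli
# Create a flexible server injected into a new virtual network and subnet.
az mysql flexible-server create \
  --resource-group $resourceGroup \
  --name mydemoserver \
  --location eastus \
  --admin-user mysqladmin \
  --admin-password <secure-password> \
  --vnet myVnet \
  --subnet mySubnet
```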
+## Test connectivity to the MySQL server from the VM
+
+Use the following steps to test connectivity to the MySQL server from the VM: connect to the VM using SSH, download the MySQL client tools, and then use them to connect to the server.
+
+1. Get the public IP address of the VM, and then use it to SSH into the VM:
+
+ ```bash
+ publicIp=$(az vm list-ip-addresses --resource-group $resourceGroup --name $vm --query "[].virtualMachine.network.publicIpAddresses[0].ipAddress" --output tsv)
+
+ ssh azureuser@$publicIp
+ ```
+
+1. Download the MySQL client tools and connect to the server. Substitute `<replace_with_server_name>` with your server name and `mysqladmin` with your admin user name.
+
+ ```bash
+ sudo apt-get update
+ sudo apt-get install mysql-client
+
+ wget --no-check-certificate https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem
-[!code-azurecli-interactive[main](../../../../cli_scripts/mysql/flexible-server/create-server-private-access/create-connect-server-in-vnet.sh?highlight=7,10 "Create and Connect to an Azure Database for MySQL - Flexible Server (General Purpose SKU) in VNet")]
+ mysql -h <replace_with_server_name>.mysql.database.azure.com -u mysqladmin -p --ssl-mode=REQUIRED --ssl-ca=DigiCertGlobalRootCA.crt.pem
+ ```
-## Clean up deployment
+## Clean up resources
-After the sample script has been run, the following code snippet can be used to clean up the resources.
-[!code-azurecli-interactive[main](../../../../cli_scripts/mysql/flexible-server/create-server-private-access/clean-up-resources.sh "Clean up resources.")]
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps - Try additional scripts: [Azure CLI samples for Azure Database for MySQL - Flexible Server](../sample-scripts-azure-cli.md)-- For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
+- For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
mysql Sample Cli Create Connect Public Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-create-connect-public-access.md
ms.devlang: azurecli Previously updated : 09/15/2021 Last updated : 02/10/2022 # Create an Azure Database for MySQL - Flexible Server and enable public access connectivity using Azure CLI
-This sample CLI script creates an Azure Database for MySQL - Flexible Server, configures a server-level firewall rule ([public access connectivity method](../concepts-networking-public.md)) and connects to the server after creation.
+This sample CLI script creates an Azure Database for MySQL - Flexible Server, configures a server-level firewall rule ([public access connectivity method](../concepts-networking-public.md)) and connects to the server after creation.
Once the script runs successfully, the MySQL Flexible Server will be accessible by all Azure services and the configured IP address, and you will be connected to the server in an interactive mode.
-> [!NOTE]
+> [!NOTE]
> The connectivity method cannot be changed after creating the server. For example, if you create server using *Public access (allowed IP addresses)*, you cannot change to *Private access (VNet Integration)* after creation. To learn more about connectivity methods, see [Networking concepts](../concepts-networking.md). ---- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-Edit the highlighted lines in the script with your values for variables.
+
+### Run the script
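The sample script isn't shown here; as an illustrative sketch, the core steps are creating the server with your client IP allowed and then adding a firewall rule. The server name and IP placeholders are assumptions:

```azurecli
# Create a flexible server that allows your client IP address.
az mysql flexible-server create \
  --resource-group $resourceGroup \
  --name mydemoserver \
  --public-access <your-client-ip>

# Allow connections from Azure services (0.0.0.0 as both start and end address).
az mysql flexible-server firewall-rule create \
  --resource-group $resourceGroup \
  --name mydemoserver \
  --rule-name AllowAzureServices \
  --start-ip-address 0.0.0.0 \
  --end-ip-address 0.0.0.0
```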
-[!code-azurecli-interactive[main](../../../../cli_scripts/mysql/flexible-server/create-server-public-access/create-connect-burstable-server-public-access.sh?highlight=8,11-12 "Create Flexible Server and enable public access.")]
-## Clean up deployment
+## Clean up resources
-After the sample script has been run, the following code snippet can be used to clean up the resources.
-[!code-azurecli-interactive[main](../../../../cli_scripts/mysql/flexible-server/create-server-public-access/clean-up-resources.sh?highlight=4 "Clean up resources.")]
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps - Try additional scripts: [Azure CLI samples for Azure Database for MySQL - Flexible Server](../sample-scripts-azure-cli.md)-- For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
+- For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
mysql Sample Cli Monitor And Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-monitor-and-scale.md
ms.devlang: azurecli Previously updated : 09/15/2021 Last updated : 02/10/2022 # Monitor and scale an Azure Database for MySQL - Flexible Server using Azure CLI
-This sample CLI script scales compute, storage and IOPS for a single Azure Database for MySQL - Flexible server after querying the corresponding metrics. Compute and IOPS can be scaled up or down, while storage can only be scaled up.
+This sample CLI script scales compute, storage and IOPS for a single Azure Database for MySQL - Flexible server after querying the corresponding metrics. Compute and IOPS can be scaled up or down, while storage can only be scaled up.
--- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-Edit the highlighted lines in the script with your values for variables.
-[!code-azurecli-interactive[main](../../../../cli_scripts/mysql/flexible-server/monitor-and-scale/monitor-and-scale.sh?highlight=8,11-12 "Monitor your Flexible Server and scale Compute, Storage and IOPS.")]
+### Run the script
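As a rough sketch of the monitor-then-scale flow described above (server name, SKU, and sizes are illustrative assumptions, not the script's values):

```azurecli
# Check recent CPU usage for the server.
serverId=$(az mysql flexible-server show --resource-group $resourceGroup --name mydemoserver --query id --output tsv)
az monitor metrics list --resource $serverId --metric cpu_percent --interval PT5M --output table

# Scale compute, storage, and IOPS.
az mysql flexible-server update \
  --resource-group $resourceGroup \
  --name mydemoserver \
  --tier GeneralPurpose \
  --sku-name Standard_D4ds_v4 \
  --storage-size 256 \
  --iops 800
```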
-## Clean up deployment
-After the sample script has been run, the following code snippet can be used to clean up the resources.
+## Clean up resources
-[!code-azurecli-interactive[main](../../../../cli_scripts/mysql/flexible-server/monitor-and-scale/clean-up-resources.sh?highlight=4 "Clean up resources.")]
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps - Try additional scripts: [Azure CLI samples for Azure Database for MySQL - Flexible Server](../sample-scripts-azure-cli.md)-- For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
+- For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
mysql Sample Cli Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-read-replicas.md
ms.devlang: azurecli Previously updated : 09/15/2021 Last updated : 02/10/2022 # Create and manage read replicas in an Azure Database for MySQL - Flexible Server using Azure CLI
This sample CLI script creates and manages [read replicas](../concepts-read-repl
>[!IMPORTANT] >When you create a replica for a source that has no existing replicas, the source will first restart to prepare itself for replication. Take this into consideration and perform these operations during an off-peak period. --- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-Edit the highlighted lines in the script with your values for variables.
+
+### Run the script
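For reference, a minimal sketch of creating and listing read replicas. The source and replica names are illustrative:

```azurecli
# Create a read replica of an existing source server, then list the source's replicas.
az mysql flexible-server replica create \
  --resource-group $resourceGroup \
  --replica-name mydemoreplica \
  --source-server mydemoserver

az mysql flexible-server replica list --resource-group $resourceGroup --name mydemoserver --output table
```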
-[!code-azurecli-interactive[main](../../../../cli_scripts/mysql/flexible-server/read-replicas/create-manage-read-replicas.sh?highlight=7,10-12 "Create and manage Flexible Server Read Replicas.")]
-## Clean up deployment
+## Clean up resources
-After the sample script has been run, the following code snippet can be used to clean up the resources.
-[!code-azurecli-interactive[main](../../../../cli_scripts/mysql/flexible-server/read-replicas/clean-up-resources.sh?highlight=4-5 "Clean up resources.")]
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps - Try additional scripts: [Azure CLI samples for Azure Database for MySQL - Flexible Server](../sample-scripts-azure-cli.md)-- For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
+- For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
mysql Sample Cli Restart Stop Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-restart-stop-start.md
ms.devlang: azurecli Previously updated : 09/15/2021 Last updated : 02/10/2022 # Restart/stop/start an Azure Database for MySQL - Flexible Server using Azure CLI
-This sample CLI script performs restart, start and stop operations on an Azure Database for MySQL - Flexible Server.
--
+This sample CLI script performs restart, start and stop operations on an Azure Database for MySQL - Flexible Server.
> [!IMPORTANT] > When you **Stop** the server, it remains in that state for the next 7 days. If you do not manually **Start** it during this time, the server will automatically be started at the end of the 7 days. You can choose to **Stop** it again if you are not using the server.
-During the time server is stopped, no management operations can be performed on the server. In order to change any configuration settings on the server, you will need to start the server.
+While the server is stopped, no management operations can be performed on it. To change any configuration settings on the server, you will need to start the server.
Also, see [stop/start limitations](../concepts-limitations.md#stopstart-operation) before performing stop/start operations. --- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-Edit the highlighted lines in the script with your values for variables.
+
+### Run the script
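As an illustrative sketch of the three operations this sample performs (the server name is an assumption):

```azurecli
# Stop, start, and restart an existing flexible server.
az mysql flexible-server stop --resource-group $resourceGroup --name mydemoserver
az mysql flexible-server start --resource-group $resourceGroup --name mydemoserver
az mysql flexible-server restart --resource-group $resourceGroup --name mydemoserver
```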
-[!code-azurecli-interactive[main](../../../../cli_scripts/mysql/flexible-server/manage-server/restart-start-stop.sh?highlight=7,10-11 "Create a server, perform restart / start / stop operations.")]
-## Clean up deployment
+## Clean up resources
-After the sample script has been run, the following code snippet can be used to clean up the resources.
-[!code-azurecli-interactive[main](../../../../cli_scripts/mysql/flexible-server/manage-server/clean-up-resources.sh?highlight=4 "Clean up resources.")]
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps - Try additional scripts: [Azure CLI samples for Azure Database for MySQL - Flexible Server](../sample-scripts-azure-cli.md)-- For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
+- For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
mysql Sample Cli Restore Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-restore-server.md
ms.devlang: azurecli Previously updated : 09/15/2021 Last updated : 02/10/2022 # Restore an Azure Database for MySQL - Flexible Server using Azure CLI Azure Database for MySQL - Flexible Server automatically creates server backups and securely stores them in locally redundant storage within the region.
-This sample CLI script performs a [point-in-time restore](../concepts-backup-restore.md) and creates a new server from your Flexible Server's backups.
+This sample CLI script performs a [point-in-time restore](../concepts-backup-restore.md) and creates a new server from your Flexible Server's backups.
The new Flexible Server is created with the original server's configuration and also inherits tags and settings such as virtual network and firewall from the source server. The restored server's compute and storage tier, configuration and security settings can be changed after the restore is completed. --- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-Edit the highlighted lines in the script with your values for variables.
+
+### Run the script
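A minimal point-in-time restore sketch, assuming illustrative server names and timestamp rather than the script's own values:

```azurecli
# Restore the source server to a new server at a chosen point in time (UTC, illustrative timestamp).
az mysql flexible-server restore \
  --resource-group $resourceGroup \
  --name mydemoserver-restored \
  --source-server mydemoserver \
  --restore-time "2022-02-10T13:10:00Z"
```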
-[!code-azurecli-interactive[main](../../../../cli_scripts/mysql/flexible-server/backup-restore/restore-server.sh?highlight=7,10-12 "Perform point-in-time-restore of a source server to a new server.")]
-## Clean up deployment
+## Clean up resources
-After the sample script has been run, the following code snippet can be used to clean up the resources.
-[!code-azurecli-interactive[main](../../../../cli_scripts/mysql/flexible-server/backup-restore/clean-up-resources.sh?highlight=4-5 "Clean up resources.")]
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps - Try additional scripts: [Azure CLI samples for Azure Database for MySQL - Flexible Server](../sample-scripts-azure-cli.md)-- For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
+- For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
mysql Sample Cli Same Zone Ha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-same-zone-ha.md
ms.devlang: azurecli Previously updated : 09/15/2021 Last updated : 02/10/2022 # Configure same-zone high availability in an Azure Database for MySQL - Flexible Server using Azure CLI
-This sample CLI script configures and manages [Same-Zone high availability](../concepts-high-availability.md) in an Azure Database for MySQL - Flexible Server.
+This sample CLI script configures and manages [Same-Zone high availability](../concepts-high-availability.md) in an Azure Database for MySQL - Flexible Server.
Currently, Same-Zone high availability is supported only for the General purpose and Memory optimized pricing tiers. ---- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-Edit the highlighted lines in the script with your values for variables.
+
+### Run the script
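As a hedged sketch of enabling same-zone HA on an existing server (the server name is illustrative, and the exact flag values may vary by CLI version):

```azurecli
# Enable same-zone high availability on an existing flexible server, then disable it again.
az mysql flexible-server update --resource-group $resourceGroup --name mydemoserver --high-availability SameZone
az mysql flexible-server update --resource-group $resourceGroup --name mydemoserver --high-availability Disabled
```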
-[!code-azurecli-interactive[main](../../../../cli_scripts/mysql/flexible-server/high-availability/same-zone-ha.sh?highlight=7,10-11 "Configure Same-Zone High Availability.")]
-## Clean up deployment
+## Clean up resources
-After the sample script has been run, the following code snippet can be used to clean up the resources.
-[!code-azurecli-interactive[main](../../../../cli_scripts/mysql/flexible-server/high-availability/clean-up-resources.sh?highlight=4 "Clean up resources.")]
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
mysql Sample Cli Slow Query Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-slow-query-logs.md
ms.devlang: azurecli Previously updated : 09/15/2021 Last updated : 02/10/2022 # Configure slow query logs on an Azure Database for MySQL - Flexible Server using Azure CLI
-This sample CLI script configures [slow query logs](../concepts-slow-query-logs.md) on an Azure Database for MySQL - Flexible Server.
+This sample CLI script configures [slow query logs](../concepts-slow-query-logs.md) on an Azure Database for MySQL - Flexible Server.
--- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-Edit the highlighted lines in the script with your values for variables.
+
+### Run the script
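For orientation, configuring slow query logs comes down to setting server parameters like the following sketch (server name and threshold are assumptions):

```azurecli
# Enable the slow query log and capture queries that run longer than 15 seconds.
az mysql flexible-server parameter set --resource-group $resourceGroup --server-name mydemoserver --name slow_query_log --value ON
az mysql flexible-server parameter set --resource-group $resourceGroup --server-name mydemoserver --name long_query_time --value 15
```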
-[!code-azurecli-interactive[main](../../../../cli_scripts/mysql/flexible-server/configure-logs/configure-slow-query-logs.sh?highlight=7,10-11 "Configure slow-query logs on Azure Database for MySQL - Flexible Server.")]
-## Clean up deployment
+## Clean up resources
-After the sample script has been run, the following code snippet can be used to clean up the resources.
-[!code-azurecli-interactive[main](../../../../cli_scripts/mysql/flexible-server/configure-logs/clean-up-resources.sh?highlight=4 "Clean up resources.")]
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps - Try additional scripts: [Azure CLI samples for Azure Database for MySQL - Flexible Server](../sample-scripts-azure-cli.md)-- For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
+- For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
mysql Sample Cli Zone Redundant Ha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-zone-redundant-ha.md
ms.devlang: azurecli Previously updated : 09/15/2021 Last updated : 02/10/2022 # Configure zone-redundant high availability in an Azure Database for MySQL - Flexible Server using Azure CLI
-This sample CLI script configures and manages [Zone-Redundant high availability](../concepts-high-availability.md) in an Azure Database for MySQL - Flexible Server.
-You can enable Zone-Redundant high availability only during Flexible Server creation, and can disable it anytime. You can also choose the availability zone for the primary and the standby replica.
+This sample CLI script configures and manages [Zone-Redundant high availability](../concepts-high-availability.md) in an Azure Database for MySQL - Flexible Server.
+You can enable Zone-Redundant high availability only during Flexible Server creation, and can disable it anytime. You can also choose the availability zone for the primary and the standby replica.
Currently, Zone-Redundant high availability is supported only for the General purpose and Memory optimized pricing tiers. ---- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-Edit the highlighted lines in the script with your values for variables.
-
-[!code-azurecli-interactive[main](../../../../cli_scripts/mysql/flexible-server/high-availability/zone-redundant-ha.sh?highlight=7,10-11,13-14 "Configure Zone-Redundant High Availability.")]
+### Run the script
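Because zone-redundant HA can only be enabled at creation time, the key step is the create call. A minimal sketch with illustrative names, SKU, and zones:

```azurecli
# Create a new flexible server with zone-redundant HA, choosing the primary and standby zones.
az mysql flexible-server create \
  --resource-group $resourceGroup \
  --name mydemoserver \
  --location eastus \
  --tier GeneralPurpose \
  --sku-name Standard_D2ds_v4 \
  --high-availability ZoneRedundant \
  --zone 1 \
  --standby-zone 2
```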
-## Clean up deployment
-After the sample script has been run, the following code snippet can be used to clean up the resources.
+## Clean up resources
-[!code-azurecli-interactive[main](../../../../cli_scripts/mysql/flexible-server/high-availability/clean-up-resources.sh?highlight=4 "Clean up resources.")]
+```azurecli
+az group delete --name $resourceGroup
+```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
This script uses the following commands. Each command in the table links to comm
## Next steps - Try additional scripts: [Azure CLI samples for Azure Database for MySQL - Flexible Server](../sample-scripts-azure-cli.md)-- For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
+- For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
mysql Howto Manage Vnet Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-manage-vnet-using-cli.md
ms.devlang: azurecli Previously updated : 3/18/2020 Last updated : 02/10/2022 # Create and manage Azure Database for MySQL VNet service endpoints using Azure CLI
Virtual Network (VNet) services endpoints and rules extend the private address s
[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)] -- You need an [Azure Database for MySQL server and database](quickstart-create-mysql-server-database-using-azure-cli.md).
-
-- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.- > [!NOTE] > Support for VNet service endpoints is only for General Purpose and Memory Optimized servers. > In case of VNet peering, if traffic is flowing through a common VNet Gateway with service endpoints and is supposed to flow to the peer, please create an ACL/VNet rule to allow Azure Virtual Machines in the Gateway VNet to access the Azure Database for MySQL server. ## Configure Vnet service endpoints for Azure Database for MySQL
-The [az network vnet](/cli/azure/network/vnet) commands are used to configure Virtual Networks.
-
-If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. Select the specific subscription ID under your account using [az account set](/cli/azure/account#az_account_set) command. Substitute the **id** property from the **az login** output for your subscription into the subscription ID placeholder.
-- The account must have the necessary permissions to create a virtual network and service endpoint.
+The [az network vnet](/cli/azure/network/vnet) commands are used to configure Virtual Networks.
+If you have multiple subscriptions, choose the appropriate subscription in which the resource should be billed. The account must have the necessary permissions to create a virtual network and service endpoint.
Service endpoints can be configured on virtual networks independently, by a user with write access to the virtual network. To secure Azure service resources to a VNet, the user must have permission to "Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/" for the subnets being added. This permission is included in the built-in service administrator roles, by default and can be modified by creating custom roles.
Learn more about [built-in roles](../role-based-access-control/built-in-roles.md
VNets and Azure service resources can be in the same or different subscriptions. If the VNet and Azure service resources are in different subscriptions, the resources should be under the same Active Directory (AD) tenant. Ensure that both the subscriptions have the **Microsoft.Sql** and **Microsoft.DBforMySQL** resource providers registered. For more information refer [resource-manager-registration][resource-manager-portal] > [!IMPORTANT]
-> It is highly recommended to read this article about service endpoint configurations and considerations before running the sample script below, or configuring service endpoints. **Virtual Network service endpoint:** A [Virtual Network service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) is a subnet whose property values include one or more formal Azure service type names. VNet services endpoints use the service type name **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure SQL Database, Azure Database for PostgreSQL and MySQL services. It is important to note when applying the **Microsoft.Sql** service tag to a VNet service endpoint it configures service endpoint traffic for all Azure Database services, including Azure SQL Database, Azure Database for PostgreSQL and Azure Database for MySQL servers on the subnet.
->
+> It is highly recommended to read this article about service endpoint configurations and considerations before running the sample script below, or configuring service endpoints. **Virtual Network service endpoint:** A [Virtual Network service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) is a subnet whose property values include one or more formal Azure service type names. VNet services endpoints use the service type name **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure SQL Database, Azure Database for PostgreSQL and MySQL services. It is important to note when applying the **Microsoft.Sql** service tag to a VNet service endpoint it configures service endpoint traffic for all Azure Database services, including Azure SQL Database, Azure Database for PostgreSQL and Azure Database for MySQL servers on the subnet.
+
+## Sample script
++
+### Run the script
++
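The sample script isn't reproduced here; as a hedged sketch of the key steps under assumed names, you enable the **Microsoft.Sql** service endpoint on a subnet and then secure the server to that subnet with a VNet rule:

```azurecli
# Enable the Microsoft.Sql service endpoint on a subnet.
az network vnet subnet create \
  --resource-group $resourceGroup \
  --vnet-name myVnet \
  --name mySubnet \
  --address-prefixes 10.0.1.0/24 \
  --service-endpoints Microsoft.Sql

# Secure the MySQL server to the subnet with a VNet rule.
az mysql server vnet-rule create \
  --resource-group $resourceGroup \
  --server-name mydemoserver \
  --name myVnetRule \
  --vnet-name myVnet \
  --subnet mySubnet
```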
+## Clean up resources
-### Sample script to create an Azure Database for MySQL database, create a VNet, VNet service endpoint and secure the server to the subnet with a VNet rule
-In this sample script, change the highlighted lines to customize the admin username and password. Replace the SubscriptionID used in the `az account set --subscription` command with your own subscription identifier.
-[!code-azurecli-interactive[main](../../cli_scripts/mysql/create-mysql-server-vnet/create-mysql-server.sh?highlight=5,20 "Create an Azure Database for MySQL, VNet, VNet service endpoint, and VNet rule.")]
-## Clean up deployment
-After the script sample has been run, the following command can be used to remove the resource group and all resources associated with it.
-[!code-azurecli-interactive[main](../../cli_scripts/mysql/create-mysql-server-vnet/delete-mysql.sh "Delete the resource group.")]
+```azurecli
+az group delete --name $resourceGroup
+```
-<!-- Link references, to text, Within this same GitHub repo. -->
+<!-- Link references, to text, Within this same GitHub repo. -->
[resource-manager-portal]: ../azure-resource-manager/management/resource-providers-and-types.md
mysql Sample Scripts Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/sample-scripts-azure-cli.md
The following table includes links to sample Azure CLI scripts for Azure Databas
| Sample link | Description | ||| |**Create a server**||
-| [Create a server and firewall rule](./scripts/sample-create-server-and-firewall-rule.md?toc=%2fcli%2fazure%2ftoc.json) | Azure CLI script that creates a single Azure Database for MySQL server and configures a server-level firewall rule. |
+| [Create a server and firewall rule](./scripts/sample-create-server-and-firewall-rule.md) | Azure CLI script that creates a single Azure Database for MySQL server and configures a server-level firewall rule. |
|**Scale a server**||
-| [Scale a server](./scripts/sample-scale-server.md?toc=%2fcli%2fazure%2ftoc.json) | Azure CLI script that scales a single Azure Database for MySQL server up or down to allow for changing performance needs. |
+| [Scale a server](./scripts/sample-scale-server.md) | Azure CLI script that scales a single Azure Database for MySQL server up or down to allow for changing performance needs. |
|**Change server configurations**||
-| [Change server configurations](./scripts/sample-change-server-configuration.md?toc=%2fcli%2fazure%2ftoc.json) | Azure CLI script that change configurations of a single Azure Database for MySQL server. |
+| [Change server configurations](./scripts/sample-change-server-configuration.md) | Azure CLI script that changes configurations of a single Azure Database for MySQL server. |
|**Restore a server**||
-| [Restore a server](./scripts/sample-point-in-time-restore.md?toc=%2fcli%2fazure%2ftoc.json) | Azure CLI script that restores a single Azure Database for MySQL server to a previous point in time. |
+| [Restore a server](./scripts/sample-point-in-time-restore.md) | Azure CLI script that restores a single Azure Database for MySQL server to a previous point in time. |
|**Manipulate with server logs**||
-| [Enable and download server logs](./scripts/sample-server-logs.md?toc=%2fcli%2fazure%2ftoc.json) | Azure CLI script that enables and downloads server logs of a single Azure Database for MySQL server. |
+| [Enable server logs](./scripts/sample-server-logs.md) | Azure CLI script that enables server logs of a single Azure Database for MySQL server. |
|||
mysql Sample Change Server Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-change-server-configuration.md
ms.devlang: azurecli Previously updated : 12/02/2019 Last updated : 02/10/2022 # List and update configurations of an Azure Database for MySQL server using Azure CLI
Last updated 12/02/2019
This sample CLI script lists all available configuration parameters as well as their allowable values for Azure Database for MySQL server, and sets the *innodb_lock_wait_timeout* to a value that is other than the default one. -- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-In this sample script, edit the highlighted lines to update the admin username and password to your own.
-[!code-azurecli-interactive[main](../../../cli_scripts/mysql/change-server-configurations/change-server-configurations.sh?highlight=15-16 "List and update configurations of Azure Database for MySQL.")]
-## Clean up deployment
-Use the following command to remove the resource group and all resources associated with it after the script has been run.
-[!code-azurecli-interactive[main](../../../cli_scripts/mysql/change-server-configurations/delete-mysql.sh "Delete the resource group.")]
+
+### Run the script
++
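As an illustrative sketch of the operations this sample covers on a single server (the server name and value are assumptions):

```azurecli
# List all configurations, then raise innodb_lock_wait_timeout on an existing single server.
az mysql server configuration list --resource-group $resourceGroup --server-name mydemoserver --output table
az mysql server configuration set --resource-group $resourceGroup --server-name mydemoserver --name innodb_lock_wait_timeout --value 120
```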
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
-## Script explanation
This script uses the commands outlined in the following table: | **Command** | **Notes** |
This script uses the commands outlined in the following table:
| [az group delete](/cli/azure/group#az_group_delete) | Deletes a resource group including all nested resources. | ## Next steps+ - Read more information on the Azure CLI: [Azure CLI documentation](/cli/azure). - Try additional scripts: [Azure CLI samples for Azure Database for MySQL](../sample-scripts-azure-cli.md) - For more information on server parameters, see [How To Configure Server Parameters in Azure Database for MySQL](../howto-server-parameters.md).
mysql Sample Create Server And Firewall Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-create-server-and-firewall-rule.md
ms.devlang: azurecli Previously updated : 12/02/2019 Last updated : 02/10/2022 # Create a MySQL server and configure a firewall rule using the Azure CLI
Last updated 12/02/2019
This sample CLI script creates an Azure Database for MySQL server and configures a server-level firewall rule. Once the script runs successfully, the MySQL server is accessible by all Azure services and the configured IP address. -- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-In this sample script, edit the highlighted lines to update the admin username and password to your own.
-[!code-azurecli-interactive[main](../../../cli_scripts/mysql/create-mysql-server-and-firewall-rule/create-mysql-server-and-firewall-rule.sh?highlight=15-16 "Create an Azure Database for MySQL, and server-level firewall rule.")]
-## Clean up deployment
+### Run the script
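A minimal sketch of the create-and-open-firewall flow for a single server. Server name, SKU, password, and IP placeholders are assumptions, not values from the script:

```azurecli
# Create a single server, then open a firewall rule for a client IP address.
az mysql server create \
  --resource-group $resourceGroup \
  --name mydemoserver \
  --location eastus \
  --admin-user mysqladmin \
  --admin-password <secure-password> \
  --sku-name GP_Gen5_2

az mysql server firewall-rule create \
  --resource-group $resourceGroup \
  --server-name mydemoserver \
  --name AllowMyIP \
  --start-ip-address <your-client-ip> \
  --end-ip-address <your-client-ip>
```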
-Use the following command to remove the resource group and all resources associated with it after the script has been run.
-[!code-azurecli-interactive[main](../../../cli_scripts/mysql/create-mysql-server-and-firewall-rule/delete-mysql.sh "Delete the resource group.")]
-## Script explanation
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
This script uses the commands outlined in the following table:
mysql Sample Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-point-in-time-restore.md
ms.devlang: azurecli Previously updated : 12/02/2019 Last updated : 02/10/2022 # Restore an Azure Database for MySQL server using Azure CLI
Last updated 12/02/2019
This sample CLI script restores a single Azure Database for MySQL server to a previous point in time. -- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-In this sample script, edit the highlighted lines to update the admin username and password to your own. Replace the subscription ID used in the `az monitor` commands with your own subscription ID.
-[!code-azurecli-interactive[main](../../../cli_scripts/mysql/backup-restore-pitr/backup-restore.sh?highlight=15-16 "Restore Azure Database for MySQL.")]
-## Clean up deployment
+### Run the script
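For reference, a point-in-time restore on a single server looks roughly like the following sketch (names and timestamp are illustrative):

```azurecli
# Restore the source single server to a new server at a chosen point in time (UTC, illustrative timestamp).
az mysql server restore \
  --resource-group $resourceGroup \
  --name mydemoserver-restored \
  --source-server mydemoserver \
  --restore-point-in-time "2022-02-10T13:10:00Z"
```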
-Use the following command to remove the resource group and all resources associated with it after the script has been run.
-[!code-azurecli-interactive[main](../../../cli_scripts/mysql/backup-restore-pitr/delete-mysql.sh "Delete the resource group.")]
-## Script explanation
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
This script uses the commands outlined in the following table:
mysql Sample Scale Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-scale-server.md
ms.devlang: azurecli Previously updated : 12/02/2019 Last updated : 02/10/2022 # Monitor and scale an Azure Database for MySQL server using Azure CLI
Last updated 12/02/2019
This sample CLI script scales compute and storage for a single Azure Database for MySQL server after querying the metrics. Compute can scale up or down. Storage can only scale up. -- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-Update the script with your subscription ID.
-[!code-azurecli-interactive[main](../../../cli_scripts/mysql/scale-mysql-server/scale-mysql-server.sh "Create and scale Azure Database for MySQL.")]
-## Clean up deployment
+### Run the script
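As an illustrative sketch, scaling compute on a single server is a SKU change (server name and SKU are assumptions):

```azurecli
# Scale compute up to 4 vCores on the General Purpose tier.
az mysql server update --resource-group $resourceGroup --name mydemoserver --sku-name GP_Gen5_4
```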
-Use the following command to remove the resource group and all resources associated with it after the script has been run.
-[!code-azurecli-interactive[main](../../../cli_scripts/mysql/scale-mysql-server/delete-mysql.sh "Delete the resource group.")]
-## Script explanation
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
This script uses the commands outlined in the following table:
mysql Sample Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-server-logs.md
ms.devlang: azurecli Previously updated : 12/02/2019 Last updated : 02/10/2022 # Enable and download server slow query logs of an Azure Database for MySQL server using Azure CLI
Last updated 12/02/2019
This sample CLI script enables and downloads the slow query logs of a single Azure Database for MySQL server. -- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
-In this sample script, edit the highlighted lines to update the admin username and password to your own. Replace the &lt;log_file_name&gt; in the `az monitor` commands with your own server log file name.
-[!code-azurecli-interactive[main](../../../cli_scripts/mysql/server-logs/server-logs.sh?highlight=15-16 "Manipulate with server logs.")]
-## Clean up deployment
+### Run the script
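For orientation, a minimal sketch of enabling, listing, and downloading slow query logs on a single server. The server name is illustrative, and `<log_file_name>` stays a placeholder for a log file returned by the list command:

```azurecli
# Enable the slow query log, list available log files, then download one by name.
az mysql server configuration set --resource-group $resourceGroup --server-name mydemoserver --name slow_query_log --value ON
az mysql server-logs list --resource-group $resourceGroup --server-name mydemoserver --output table
az mysql server-logs download --resource-group $resourceGroup --server-name mydemoserver --name <log_file_name>
```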
-Use the following command to remove the resource group and all resources associated with it after the script has been run.
-[!code-azurecli-interactive[main](../../../cli_scripts/mysql/server-logs/delete-mysql.sh "Delete the resource group.")]
-## Script explanation
+## Clean up resources
++
+```azurecli
+az group delete --name $resourceGroup
+```
+
+## Sample reference
This script uses the commands outlined in the following table:
networking Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure networking description: Sample Azure Resource Graph queries for Azure networking showing use of resource types and tables to access Azure networking related resources and properties. Previously updated : 01/20/2022 Last updated : 02/16/2022
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/overview.md
Title: Apache Kafka on Confluent Cloud overview - Azure partner solutions description: Learn about using Apache Kafka on Confluent Cloud in the Azure Marketplace. Previously updated : 12/14/2021 Last updated : 02/22/2022 # What is Apache Kafka for Confluent Cloud?
To learn about managing the solutions, see:
For support and terms, see: * [Confluent support](https://support.confluent.io)
-* [Terms of Service](https://www.confluent.io/confluent-cloud-tos).
+* [Terms of Service](https://www.confluent.io/confluent-cloud-tos)
+
+To learn more, see Confluent blog articles about Azure services that integrate with Confluent Cloud:
+
+* [Use Azure Cosmos DB sink connector](https://www.confluent.io/blog/announcing-confluent-cloud-azure-cosmos-db-connector)
+* [Set up secure networking with Azure Private Link](https://www.confluent.io/blog/how-to-set-up-secure-networking-in-confluent-with-azure-private-link)
+* [Search using Azure Cache for Redis and Azure Spring Cloud](https://www.confluent.io/blog/real-time-search-and-analytics-with-confluent-cloud-azure-redis-spring-cloud)
+* [Consume data with Confluent and Azure Databricks](https://www.confluent.io/blog/consume-avro-data-from-kafka-topics-and-secured-schema-registry-with-databricks-confluent-cloud-on-azure)
## Next steps
postgresql Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking.md
Private DNS zone settings and virtual network peering are independent of each ot
> [!NOTE] > Only private DNS zone names that end with `postgres.database.azure.com` can be linked. Your DNS zone name cannot be the same as your flexible server(s) otherwise name resolution will fail. ++ ### Unsupported virtual network scenarios Here are some limitations for working with virtual networks:
Here are some limitations for working with virtual networks:
* Subnet size (address spaces) can't be increased after resources exist in the subnet. * A flexible server doesn't support Azure Private Link. Instead, it uses virtual network injection to make the flexible server available within a virtual network.
+> [!IMPORTANT]
+> Azure Resource Manager supports the ability to lock resources as a security control. Resource locks are applied to the resource and are effective across all users and roles. There are two types of resource lock: CanNotDelete and ReadOnly. These lock types can be applied either to a private DNS zone or to an individual record set. Applying a lock of either type against the private DNS zone or an individual record set may interfere with the ability of the Azure Database for PostgreSQL - Flexible Server service to update DNS records, and cause issues during important DNS operations such as high availability failover from primary to standby. Make sure you are not using private DNS zone or record locks when using high availability features with Azure Database for PostgreSQL - Flexible Server.
## Public access (allowed IP addresses)
postgresql Howto Modify Distributed Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-modify-distributed-tables.md
SELECT create_distributed_table('github_events', 'repo_id');
``` The function call informs Hyperscale (Citus) that the github\_events table
-should be distributed on the repo\_id column (by hashing the column value). The
-function also creates shards on the worker nodes using the citus.shard\_count
-and citus.shard\_replication\_factor configuration values.
+should be distributed on the repo\_id column (by hashing the column value).
-It creates a total of citus.shard\_count number of shards, where each shard
-owns a portion of a hash space and gets replicated based on the default
+It creates a total of 32 shards by default, where each shard owns a portion of
+a hash space and gets replicated based on the default
citus.shard\_replication\_factor configuration value. The shard replicas created on the worker have the same table schema, index, and constraint definitions as the table on the coordinator. Once the replicas are created, the
commits ([2PC](https://en.wikipedia.org/wiki/Two-phase_commit_protocol)) for
modifications to tables marked this way, which provides strong consistency guarantees.
-If you have a distributed table with a shard count of one, you can upgrade it
-to be a recognized reference table like this:
-
-```postgresql
-SELECT upgrade_to_reference_table('table_name');
-```
- For another example of using reference tables, see the [multi-tenant database tutorial](tutorial-design-database-multi-tenant.md).
Attempting to run DDL that is ineligible for automatic propagation will raise
an error and leave tables on the coordinator node unchanged. Here is a reference of the categories of DDL statements that propagate.
-Automatic propagation can be enabled or disabled with a [configuration
-parameter](reference-parameters.md#citusenable_ddl_propagation-boolean)
### Adding/Modifying Columns
postgresql Reference Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-functions.md
Previously updated : 04/07/2021 Last updated : 02/18/2022 # Functions in the Hyperscale (Citus) SQL API
inserts appropriate metadata to mark the table as distributed. The function
defaults to 'hash' distribution if no distribution method is specified. If the table is hash-distributed, the function also creates worker shards based on the shard count and shard replication factor configuration values. If the table
-contains any rows, they are automatically distributed to worker nodes.
+contains any rows, they're automatically distributed to worker nodes.
This function replaces usage of master\_create\_distributed\_table() followed by master\_create\_worker\_shards().
table is to be distributed. Permissible values are append or hash, with
a default value of 'hash'. **colocate\_with:** (Optional) include current table in the colocation group
-of another table. By default tables are colocated when they are distributed by
+of another table. By default tables are colocated when they're distributed by
columns of the same type, have the same shard count, and have the same replication factor. Possible values for `colocate_with` are `default`, `none` to start a new colocation group, or the name of another table to colocate
distribution columns, accidentally colocating them can decrease performance
during [shard rebalancing](howto-scale-rebalance.md). The table shards will be moved together unnecessarily in a \"cascade.\"
-If a new distributed table is not related to other tables, it's best to
+If a new distributed table isn't related to other tables, it's best to
specify `colocate_with => 'none'`. #### Return Value
SELECT create_distributed_table('github_events', 'repo_id',
colocate_with => 'github_repo'); ```
+### truncate\_local\_data\_after\_distributing\_table
+
+Truncate all local rows after distributing a table, and prevent constraints
+from failing due to outdated local records. The truncation cascades to tables
+having a foreign key to the designated table. If the referring tables aren't themselves distributed, then truncation is forbidden until they are, to protect
+referential integrity:
+
+```
+ERROR: cannot truncate a table referenced in a foreign key constraint by a local table
+```
+
+Truncating local coordinator node table data is safe for distributed tables
+because their rows, if any, are copied to worker nodes during
+distribution.
+
+#### Arguments
+
+**table_name:** Name of the distributed table whose local counterpart on the
+coordinator node should be truncated.
+
+#### Return Value
+
+N/A
+
+#### Example
+
+```postgresql
+-- requires that argument is a distributed table
+SELECT truncate_local_data_after_distributing_table('public.github_events');
+```
+ ### create\_reference\_table The create\_reference\_table() function is used to define a small
defined as a reference table
SELECT create_reference_table('nation'); ```
-### upgrade\_to\_reference\_table
+### alter_distributed_table
-The upgrade\_to\_reference\_table() function takes an existing distributed
-table that has a shard count of one, and upgrades it to be a recognized
-reference table. After calling this function, the table will be as if it had
-been created with [create_reference_table](#create_reference_table).
+The alter_distributed_table() function can be used to change the distribution
+column, shard count or colocation properties of a distributed table.
#### Arguments
-**table\_name:** Name of the distributed table (having shard count = 1)
-which will be distributed as a reference table.
+**table\_name:** Name of the table that will be altered.
+
+**distribution\_column:** (Optional) Name of the new distribution column.
+
+**shard\_count:** (Optional) The new shard count.
+
+**colocate\_with:** (Optional) The table that the current distributed table
+will be colocated with. Possible values are `default`, `none` to start a new
+colocation group, or the name of another table with which to colocate. (See
+[table colocation](concepts-colocation.md).)
+
+**cascade_to_colocated:** (Optional) When this argument is set to "true",
+`shard_count` and `colocate_with` changes will also be applied to all of the
+tables that were previously colocated with the table, and the colocation will
+be preserved. If it is "false", the current colocation of this table will be
+broken.
#### Return Value
N/A
#### Example
-This example informs the database that the nation table should be
-defined as a reference table
- ```postgresql
-SELECT upgrade_to_reference_table('nation');
+-- change distribution column
+SELECT alter_distributed_table('github_events', distribution_column:='event_id');
+
+-- change shard count of all tables in colocation group
+SELECT alter_distributed_table('github_events', shard_count:=6, cascade_to_colocated:=true);
+
+-- change colocation
+SELECT alter_distributed_table('github_events', colocate_with:='another_table');
```
-### mark\_tables\_colocated
+### update_distributed_table_colocation
-The mark\_tables\_colocated() function takes a distributed table (the
-source), and a list of others (the targets), and puts the targets into
-the same colocation group as the source. If the source is not yet in a
-group, this function creates one, and assigns the source and targets to
-it.
+The update_distributed_table_colocation() function is used to update colocation
+of a distributed table. This function can also be used to break colocation of a
+distributed table. Citus implicitly colocates two tables if their
+distribution columns have the same type, which can be useful if the tables are
+related and will be joined. If tables A and B are colocated, and table A
+gets rebalanced, table B will also be rebalanced. If table B doesn't have a
+replica identity, the rebalance will fail. Therefore, this function can be
+useful for breaking the implicit colocation in that case.
-Colocating tables ought to be done at table distribution time via the
-`colocate_with` parameter of
-[create_distributed_table](#create_distributed_table), but
-`mark_tables_colocated` can take care of it later if necessary.
+This function doesn't move any data around physically.
#### Arguments
-**source\_table\_name:** Name of the distributed table whose colocation
-group the targets will be assigned to match.
+**table_name:** Name of the table whose colocation will be updated.
-**target\_table\_names:** Array of names of the distributed target
-tables, must be non-empty. These distributed tables must match the
-source table in:
+**colocate_with:** The table with which the table should be colocated.
-> - distribution method
-> - distribution column type
-> - replication type
-> - shard count
+If you want to break the colocation of a table, you should specify
+`colocate_with => 'none'`.
-If none of the above apply, Hyperscale (Citus) will raise an error. For
-instance, attempting to colocate tables `apples` and `oranges` whose
-distribution column types differ results in:
+#### Return Value
+
+N/A
+
+#### Example
+This example updates the colocation of table A to match the colocation of table
+B.
+
+```postgresql
+SELECT update_distributed_table_colocation('A', colocate_with => 'B');
```
-ERROR: XX000: cannot colocate tables apples and oranges
-DETAIL: Distribution column types don't match for apples and oranges.
+
+Assume that table A and table B are colocated (possibly implicitly), if you
+want to break the colocation:
+
+```postgresql
+SELECT update_distributed_table_colocation('A', colocate_with => 'none');
+```
+
+Now, assume that table A, table B, table C and table D are colocated and you
+want to colocate table A and table B together, and table C and table D
+together:
+
+```postgresql
+SELECT update_distributed_table_colocation('C', colocate_with => 'none');
+SELECT update_distributed_table_colocation('D', colocate_with => 'C');
+```
+
+If you have a hash-distributed table that is literally named "none" and you want to update its
+colocation, you can do:
+
+```postgresql
+SELECT update_distributed_table_colocation('"none"', colocate_with => 'some_other_hash_distributed_table');
```
+### undistribute\_table
+
+The undistribute_table() function undoes the action of create_distributed_table
+or create_reference_table. Undistributing moves all data from shards back into
+a local table on the coordinator node (assuming the data can fit), then deletes
+the shards.
+
+Citus won't undistribute tables that have--or are referenced by--foreign
+keys, unless the cascade_via_foreign_keys argument is set to true. If this
+argument is false (or omitted), then you must manually drop the offending
+foreign key constraints before undistributing.
+
+#### Arguments
+
+**table_name:** Name of the distributed or reference table to undistribute.
+
+**cascade_via_foreign_keys:** (Optional) When this argument is set to "true,"
+undistribute_table also undistributes all tables that are related to table_name
+through foreign keys. Use caution with this parameter, because it can
+potentially affect many tables.
+ #### Return Value N/A #### Example
-This example puts `products` and `line_items` in the same colocation
-group as `stores`. The example assumes that these tables are all
-distributed on a column with matching type, most likely a \"store id.\"
+This example distributes a `github_events` table and then undistributes it.
```postgresql
-SELECT mark_tables_colocated('stores', ARRAY['products', 'line_items']);
+-- first distribute the table
+SELECT create_distributed_table('github_events', 'repo_id');
+
+-- undo that and make it local again
+SELECT undistribute_table('github_events');
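+
+-- If foreign keys would block undistribution, the cascade_via_foreign_keys
+-- argument described above can be set. The named-argument call below is a
+-- sketch; it also undistributes every related table, so use it with caution.
+-- SELECT undistribute_table('github_events', cascade_via_foreign_keys => true);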
``` ### create\_distributed\_function
to pick a worker node to run the function. Executing the function on workers
increases parallelism, and can bring the code closer to data in shards for lower latency.
-The Postgres search path is not propagated from the coordinator to workers
+The Postgres search path isn't propagated from the coordinator to workers
during distributed function execution, so distributed function code should fully qualify the names of database objects. Also notices emitted by the
-functions will not be displayed to the user.
+functions won't be displayed to the user.
#### Arguments
multiple functions can have the same name in PostgreSQL. For instance,
`'foo(int)'` is different from `'foo(int, text)'`. **distribution\_arg\_name:** (Optional) The argument name by which to
-distribute. For convenience (or if the function arguments do not have
+distribute. For convenience (or if the function arguments don't have
names), a positional placeholder is allowed, such as `'$1'`. If this
-parameter is not specified, then the function named by `function_name`
+parameter isn't specified, then the function named by `function_name`
is merely created on the workers. If worker nodes are added in the future, the function will automatically be created there too.
overridden with these GUCs:
**table_name:** Name of the columnar table. **chunk_row_count:** (Optional) The maximum number of rows per chunk for
-newly inserted data. Existing chunks of data will not be changed and may have
+newly inserted data. Existing chunks of data won't be changed and may have
more rows than this maximum value. The default value is 10000. **stripe_row_count:** (Optional) The maximum number of rows per stripe for
-newly inserted data. Existing stripes of data will not be changed and may have
+newly inserted data. Existing stripes of data won't be changed and may have
more rows than this maximum value. The default value is 150000. **compression:** (Optional) `[none|pglz|zstd|lz4|lz4hc]` The compression type
-for newly inserted data. Existing data will not be recompressed or
+for newly inserted data. Existing data won't be recompressed or
decompressed. The default and suggested value is zstd (if support has been compiled in). **compression_level:** (Optional) Valid settings are from 1 through 19. If the
-compression method does not support the level chosen, the closest level will be
+compression method doesn't support the level chosen, the closest level will be
selected instead. #### Return value
SELECT alter_columnar_table_set(
stripe_row_count => 10000); ```
-## Metadata / Configuration Information
+### alter_table_set_access_method
+
+The alter_table_set_access_method() function changes the access method of a table
+(for example, heap or columnar).
-### master\_get\_table\_metadata
+#### Arguments
-The master\_get\_table\_metadata() function can be used to return
-distribution-related metadata for a distributed table. This metadata includes
-the relation ID, storage type, distribution method, distribution column,
-replication count, maximum shard size, and shard placement policy for the
-table. Behind the covers, this function queries Hyperscale (Citus) metadata
-tables to get the required information and concatenates it into a tuple before
-returning it to the user.
+**table_name:** Name of the table whose access method will change.
+
+**access_method:** Name of the new access method.
+
+#### Return Value
+
+N/A
+
+#### Example
+
+```postgresql
+SELECT alter_table_set_access_method('github_events', 'columnar');
+```
+
+### create_time_partitions
+
+The create_time_partitions() function creates partitions of a given interval to
+cover a given range of time.
#### Arguments
-**table\_name:** Name of the distributed table for which you want to
-fetch metadata.
+**table_name:** (regclass) table for which to create new partitions. The table
+must be partitioned on one column, of type date, timestamp, or timestamptz.
+
+**partition_interval:** an interval of time, such as `'2 hours'`, or `'1
+month'`, to use when setting ranges on new partitions.
+
+**end_at:** (timestamptz) create partitions up to this time. The last partition
+will contain the point end_at, and no later partitions will be created.
+
+**start_from:** (timestamptz, optional) pick the first partition so that it
+contains the point start_from. The default value is `now()`.
#### Return Value
-A tuple containing the following information:
+True if it needed to create new partitions, false if they all existed already.
-**logical\_relid:** Oid of the distributed table. It references
-the relfilenode column in the pg\_class system catalog table.
+#### Example
-**part\_storage\_type:** Type of storage used for the table. May be
-'t' (standard table), 'f' (foreign table) or 'c' (columnar table).
+```postgresql
+-- create a year's worth of monthly partitions
+-- in table foo, starting from the current time
-**part\_method:** Distribution method used for the table. May be 'a'
-(append), or 'h' (hash).
+SELECT create_time_partitions(
+ table_name := 'foo',
+ partition_interval := '1 month',
+ end_at := now() + '12 months'
+);
+```
+
+### drop_old_time_partitions
+
+The drop_old_time_partitions() function removes all partitions whose intervals
+fall before a given timestamp. In addition to using this function, you might
+consider
+[alter_old_partitions_set_access_method](#alter_old_partitions_set_access_method)
+to compress the old partitions with columnar storage.
+
+#### Arguments
-**part\_key:** Distribution column for the table.
+**table_name:** (regclass) table for which to remove partitions. The table must
+be partitioned on one column, of type date, timestamp, or timestamptz.
-**part\_replica\_count:** Current shard replication count.
+**older_than:** (timestamptz) drop partitions whose upper range is less than or
+equal to older_than.
-**part\_max\_size:** Current maximum shard size in bytes.
+#### Return Value
-**part\_placement\_policy:** Shard placement policy used for placing the
-table's shards. May be 1 (local-node-first) or 2 (round-robin).
+N/A
#### Example
-The example below fetches and displays the table metadata for the
-github\_events table.
+```postgresql
+-- drop partitions that are over a year old
+
+CALL drop_old_time_partitions('foo', now() - interval '12 months');
+```
+
+### alter_old_partitions_set_access_method
+
+The alter_old_partitions_set_access_method() function changes the access
+method (for example, to columnar) of all partitions whose upper range is less
+than or equal to a given timestamp. In a timeseries use case, tables are often
+partitioned by time, and old partitions are compressed into read-only columnar
+storage.
+
+#### Arguments
+
+**parent_table_name:** (regclass) table for which to change partitions. The
+table must be partitioned on one column, of type date, timestamp, or
+timestamptz.
+
+**older_than:** (timestamptz) change partitions whose upper range is less than
+or equal to older_than.
+
+**new_access_method:** (name) either 'heap' for row-based storage, or
+'columnar' for columnar storage.
+
+#### Return Value
+
+N/A
+
+#### Example
```postgresql
-SELECT * from master_get_table_metadata('github_events');
- logical_relid | part_storage_type | part_method | part_key | part_replica_count | part_max_size | part_placement_policy
-+-+-+-+--++--
- 24180 | t | h | repo_id | 2 | 1073741824 | 2
-(1 row)
+CALL alter_old_partitions_set_access_method(
+ 'foo', now() - interval '6 months',
+ 'columnar'
+);
```
+## Metadata / Configuration Information
+ ### get\_shard\_id\_for\_distribution\_column Hyperscale (Citus) assigns every row of a distributed table to a shard based on
database administrator can ignore. However it can be useful to determine a
row's shard, either for manual database maintenance tasks or just to satisfy curiosity. The `get_shard_id_for_distribution_column` function provides this info for hash-distributed, range-distributed, and reference tables. It
-does not work for the append distribution.
+doesn't work for the append distribution.
#### Arguments
N/A
None
+### citus_get_active_worker_nodes
+
+The citus_get_active_worker_nodes() function returns a list of active worker
+host names and port numbers.
+
+#### Arguments
+
+N/A
+
+#### Return Value
+
+List of tuples where each tuple contains the following information:
+
+**node_name:** DNS name of the worker node
+
+**node_port:** Port on the worker node on which the database server is
+listening
+
+#### Example
+
+```postgresql
+SELECT * from citus_get_active_worker_nodes();
+ node_name | node_port
+-----------+----------
+ localhost | 9700
+ localhost | 9702
+ localhost | 9701
+
+(3 rows)
+```
+ ## Server group management and repair ### master\_copy\_shard\_placement
SELECT master_copy_shard_placement(12345, 'good_host', 5432, 'bad_host', 5432);
### master\_move\_shard\_placement This function moves a given shard (and shards colocated with it) from one node
-to another. It is typically used indirectly during shard rebalancing rather
+to another. It's typically used indirectly during shard rebalancing rather
than being called directly by a database administrator. There are two ways to move the data: blocking or nonblocking. The blocking
SELECT master_move_shard_placement(12345, 'from_host', 5432, 'to_host', 5432);
### rebalance\_table\_shards
-The rebalance\_table\_shards() function moves shards of the given table to make
-them evenly distributed among the workers. The function first calculates the
+The rebalance\_table\_shards() function moves shards of the given table to
+distribute them evenly among the workers. The function first calculates the
list of moves it needs to make in order to ensure that the server group is balanced within the given threshold. Then, it moves shard placements one by one from the source node to the destination node and updates the corresponding
Output the planned shard movements of
[rebalance_table_shards](#rebalance_table_shards) without performing them. While it's unlikely, get\_rebalance\_table\_shards\_plan can output a slightly different plan than what a rebalance\_table\_shards call with the same
-arguments will do. They are not executed at the same time, so facts about the
+arguments will do. They aren't executed at the same time, so facts about the
server group \-- for example, disk space \-- might differ between the calls. #### Arguments
SELECT * from citus_remote_connection_stats();
(1 row) ```
-### replicate\_table\_shards
-
-The replicate\_table\_shards() function replicates the under-replicated shards
-of the given table. The function first calculates the list of under-replicated
-shards and locations from which they can be fetched for replication. The
-function then copies over those shards and updates the corresponding shard
-metadata to reflect the copy.
-
-#### Arguments
-
-**table\_name:** The name of the table whose shards need to be
-replicated.
-
-**shard\_replication\_factor:** (Optional) The desired replication
-factor to achieve for each shard.
-
-**max\_shard\_copies:** (Optional) Maximum number of shards to copy to
-reach the desired replication factor.
-
-**excluded\_shard\_list:** (Optional) Identifiers of shards that
-shouldn't be copied during the replication operation.
-
-#### Return Value
-
-N/A
-
-#### Examples
-
-The example below will attempt to replicate the shards of the
-github\_events table to shard\_replication\_factor.
-
-```postgresql
-SELECT replicate_table_shards('github_events');
-```
-
-This example will attempt to bring the shards of the github\_events table to
-the desired replication factor with a maximum of 10 shard copies. The
-rebalancer will copy a maximum of 10 shards in its attempt to reach the desired
-replication factor.
-
-```postgresql
-SELECT replicate_table_shards('github_events', max_shard_copies:=10);
-```
- ### isolate\_tenant\_to\_new\_shard This function creates a new shard to hold rows with a specific single value in
-the distribution column. It is especially handy for the multi-tenant Hyperscale
+the distribution column. It's especially handy for the multi-tenant Hyperscale
(Citus) use case, where a large tenant can be placed alone on its own shard and ultimately its own physical node.
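
For example, a minimal sketch that isolates one tenant's rows, assuming a
`lineitem` table distributed by a tenant ID column and a tenant value of `135`
(both names are illustrative):

```postgresql
SELECT isolate_tenant_to_new_shard('lineitem', 135);
```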
postgresql Reference Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-metadata.md
Previously updated : 08/10/2020 Last updated : 02/18/2022 # System tables and views
table. Pg_dist_shard has information about which distributed table shards
belong to, and statistics about the distribution column for shards. For append distributed tables, these statistics correspond to min / max values of the distribution column. For hash distributed tables,
-they are hash token ranges assigned to that shard. These statistics are
+they're hash token ranges assigned to that shard. These statistics are
used for pruning away unrelated shards during SELECT queries. | Name | Type | Description |
and their representation is below.
| COLUMNAR | 'c' | Indicates that shard stores columnar data. (Used by distributed cstore_fdw tables) | | FOREIGN | 'f' | Indicates that shard stores foreign data. (Used by distributed file_fdw tables) | +
+### Shard information view
+
+In addition to the low-level shard metadata table described above, Hyperscale
+(Citus) provides a `citus_shards` view to easily check:
+
+* Where each shard is (node and port),
+* What kind of table it belongs to, and
+* Its size
+
+This view helps you inspect shards to find, among other things, any size
+imbalances across nodes.
+
+```postgresql
+SELECT * FROM citus_shards;
+
+ table_name | shardid | shard_name | citus_table_type | colocation_id | nodename | nodeport | shard_size
+------------+---------+------------+------------------+---------------+----------+----------+-----------
+ dist | 102170 | dist_102170 | distributed | 34 | localhost | 9701 | 90677248
+ dist | 102171 | dist_102171 | distributed | 34 | localhost | 9702 | 90619904
+ dist | 102172 | dist_102172 | distributed | 34 | localhost | 9701 | 90701824
+ dist | 102173 | dist_102173 | distributed | 34 | localhost | 9702 | 90693632
+ ref | 102174 | ref_102174 | reference | 2 | localhost | 9701 | 8192
+ ref | 102174 | ref_102174 | reference | 2 | localhost | 9702 | 8192
+ dist2 | 102175 | dist2_102175 | distributed | 34 | localhost | 9701 | 933888
+ dist2 | 102176 | dist2_102176 | distributed | 34 | localhost | 9702 | 950272
+ dist2 | 102177 | dist2_102177 | distributed | 34 | localhost | 9701 | 942080
+ dist2 | 102178 | dist2_102178 | distributed | 34 | localhost | 9702 | 933888
+```
+
+The colocation_id refers to the colocation group.
+ ### Shard placement table The pg\_dist\_placement table tracks the location of shard replicas on
the cluster.
| Name | Type | Description | |||--| | nodeid | int | Autogenerated identifier for an individual node. |
-| groupid | int | Identifier used to denote a group of one primary server and zero or more secondary servers, when the streaming replication model is used. By default it is the same as the nodeid. |
+| groupid | int | Identifier used to denote a group of one primary server and zero or more secondary servers, when the streaming replication model is used. By default it's the same as the nodeid. |
| nodename | text | Host Name or IP Address of the PostgreSQL worker node. | | nodeport | int | Port number on which the PostgreSQL worker node is listening. | | noderack | text | (Optional) Rack placement information for the worker node. |
the cluster.
| isactive | boolean | Whether the node is active accepting shard placements. | | noderole | text | Whether the node is a primary or secondary | | nodecluster | text | The name of the cluster containing this node |
-| shouldhaveshards | boolean | If false, shards will be moved off node (drained) when rebalancing, nor will shards from new distributed tables be placed on the node, unless they are colocated with shards already there |
+| shouldhaveshards | boolean | If false, shards will be moved off node (drained) when rebalancing, nor will shards from new distributed tables be placed on the node, unless they're colocated with shards already there |
``` SELECT * from pg_dist_node;
distribution_argument_index |
colocationid | ```
+### Distributed tables view
+
+The `citus_tables` view shows a summary of all tables managed by Hyperscale
+(Citus) (distributed and reference tables). The view combines information from
+Hyperscale (Citus) metadata tables for an easy, human-readable overview of
+these table properties:
+
+* Table type
+* Distribution column
+* Colocation group ID
+* Human-readable size
+* Shard count
+* Owner (database user)
+* Access method (heap or columnar)
+
+Here's an example:
+
+```postgresql
+SELECT * FROM citus_tables;
+ table_name | citus_table_type | distribution_column | colocation_id | table_size | shard_count | table_owner | access_method
+------------+------------------+----------------------+---------------+------------+-------------+-------------+---------------
+ foo.test   | distributed      | test_column          |             1 | 0 bytes    |          32 | citus       | heap
+ ref        | reference        | <none>               |             2 | 24 GB      |           1 | citus       | heap
+ test       | distributed      | id                   |             1 | 248 TB     |          32 | citus       | heap
+```
+
+### Time partitions view
+
+Hyperscale (Citus) provides UDFs to manage partitions for the timeseries data
+use case. It also maintains a `time_partitions` view to inspect the partitions
+it manages.
+
+Columns:
+
+* **parent_table** the table that is partitioned
+* **partition_column** the column on which the parent table is partitioned
+* **partition** the name of a partition table
+* **from_value** lower bound in time for rows in this partition
+* **to_value** upper bound in time for rows in this partition
+* **access_method** heap for row-based storage, and columnar for columnar
+ storage
+
+```postgresql
+SELECT * FROM time_partitions;
+ parent_table           | partition_column | partition                                | from_value          | to_value            | access_method
+------------------------+------------------+------------------------------------------+---------------------+---------------------+---------------
+ github_columnar_events | created_at       | github_columnar_events_p2015_01_01_0000 | 2015-01-01 00:00:00 | 2015-01-01 02:00:00 | columnar
+ github_columnar_events | created_at       | github_columnar_events_p2015_01_01_0200 | 2015-01-01 02:00:00 | 2015-01-01 04:00:00 | columnar
+ github_columnar_events | created_at       | github_columnar_events_p2015_01_01_0400 | 2015-01-01 04:00:00 | 2015-01-01 06:00:00 | columnar
+ github_columnar_events | created_at       | github_columnar_events_p2015_01_01_0600 | 2015-01-01 06:00:00 | 2015-01-01 08:00:00 | heap
+```
+ ### Colocation group table The pg\_dist\_colocation table contains information about which tables\' shards
calls | 1
Caveats: -- The stats data is not replicated, and won\'t survive database
+- The stats data isn't replicated, and won\'t survive database
crashes or failover - Tracks a limited number of queries, set by the `pg_stat_statements.max` GUC (default 5000)
For example, consider counting the rows in a distributed table:
SELECT count(*) FROM users_table; ```
-We can see the query appear in `citus_dist_stat_activity`:
+We can see that the query appears in `citus_dist_stat_activity`:
```postgresql SELECT * FROM citus_dist_stat_activity;
postgresql Reference Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-overview.md
+
+ Title: Reference – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Overview of the Hyperscale (Citus) SQL API
+++++ Last updated : 02/18/2022++
+# The Hyperscale (Citus) SQL API
+
+Azure Database for PostgreSQL - Hyperscale (Citus) includes features beyond
+standard PostgreSQL. Below is a categorized reference of functions and
+configuration options for:
+
+* managing sharded data between multiple servers
+* compressing data with columnar storage
+* automating timeseries partitioning
+* parallelizing query execution across shards
+
+## SQL Functions
+
+### Sharding
+
+| Name | Description |
+||-|
+| [alter_distributed_table](reference-functions.md#alter_distributed_table) | change the distribution column, shard count or colocation properties of a distributed table |
+| [citus_copy_shard_placement](reference-functions.md#master_copy_shard_placement) | repair an inactive shard placement using data from a healthy placement |
+| [create_distributed_table](reference-functions.md#create_distributed_table) | turn a PostgreSQL table into a distributed (sharded) table |
+| [create_reference_table](reference-functions.md#create_reference_table) | maintain full copies of a table in sync across all nodes |
+| [isolate_tenant_to_new_shard](reference-functions.md#isolate_tenant_to_new_shard) | create a new shard to hold rows with a specific single value in the distribution column |
+| [truncate_local_data_after_distributing_table](reference-functions.md#truncate_local_data_after_distributing_table) | truncate all local rows after distributing a table |
+| [undistribute_table](reference-functions.md#undistribute_table) | undo the action of create_distributed_table or create_reference_table |
+
+### Shard rebalancing
+
+| Name | Description |
+||-|
+| [citus_add_rebalance_strategy](reference-functions.md#citus_add_rebalance_strategy) | append a row to `pg_dist_rebalance_strategy` |
+| [citus_move_shard_placement](reference-functions.md#master_move_shard_placement) | typically used indirectly during shard rebalancing rather than being called directly by a database administrator |
+| [citus_set_default_rebalance_strategy](reference-functions.md#citus_set_default_rebalance_strategy) | change the strategy named by its argument to be the default chosen when rebalancing shards |
+| [get_rebalance_progress](reference-functions.md#get_rebalance_progress) | monitor the moves planned and executed by `rebalance_table_shards` |
+| [get_rebalance_table_shards_plan](reference-functions.md#get_rebalance_table_shards_plan) | output the planned shard movements of rebalance_table_shards without performing them |
+| [rebalance_table_shards](reference-functions.md#rebalance_table_shards) | move shards of the given table to distribute them evenly among the workers |
+
+### Colocation
+
+| Name | Description |
+||-|
+| [create_distributed_function](reference-functions.md#create_distributed_function) | make function run on workers near colocated shards |
+| [update_distributed_table_colocation](reference-functions.md#update_distributed_table_colocation) | update or break colocation of a distributed table |
+
+### Columnar storage
+
+| Name | Description |
+||-|
+| [alter_columnar_table_set](reference-functions.md#alter_columnar_table_set) | change settings on a columnar table |
+| [alter_table_set_access_method](reference-functions.md#alter_table_set_access_method) | convert a table between heap or columnar storage |
+
+### Timeseries partitioning
+
+| Name | Description |
+||-|
+| [alter_old_partitions_set_access_method](reference-functions.md#alter_old_partitions_set_access_method) | change storage method of partitions |
+| [create_time_partitions](reference-functions.md#create_time_partitions) | create partitions of a given interval to cover a given range of time |
+| [drop_old_time_partitions](reference-functions.md#drop_old_time_partitions) | remove all partitions whose intervals fall before a given timestamp |
+
+### Informational
+
+| Name | Description |
+||-|
+| [citus_get_active_worker_nodes](reference-functions.md#citus_get_active_worker_nodes) | get active worker host names and port numbers |
+| [citus_relation_size](reference-functions.md#citus_relation_size) | get disk space used by all the shards of the specified distributed table |
+| [citus_remote_connection_stats](reference-functions.md#citus_remote_connection_stats) | show the number of active connections to each remote node |
+| [citus_stat_statements_reset](reference-functions.md#citus_stat_statements_reset) | remove all rows from `citus_stat_statements` |
+| [citus_table_size](reference-functions.md#citus_table_size) | get disk space used by all the shards of the specified distributed table, excluding indexes |
+| [citus_total_relation_size](reference-functions.md#citus_total_relation_size) | get total disk space used by all the shards of the specified distributed table, including all indexes and TOAST data |
+| [column_to_column_name](reference-functions.md#column_to_column_name) | translate the `partkey` column of `pg_dist_partition` into a textual column name |
+| [get_shard_id_for_distribution_column](reference-functions.md#get_shard_id_for_distribution_column) | find the shard ID associated with a value of the distribution column |
+
+## Server parameters
+
+### Query execution
+
+| Name | Description |
+||-|
+| [citus.all_modifications_commutative](reference-parameters.md#citusall_modifications_commutative) | allow all commands to claim a shared lock |
+| [citus.count_distinct_error_rate](reference-parameters.md#cituscount_distinct_error_rate-floating-point) | tune error rate of postgresql-hll approximate counting |
+| [citus.enable_repartition_joins](reference-parameters.md#citusenable_repartition_joins-boolean) | allow JOINs made on non-distribution columns |
+| [citus.enable_repartitioned_insert_select](reference-parameters.md#citusenable_repartitioned_insert_select-boolean) | allow repartitioning rows from the SELECT statement and transferring them between workers for insertion |
+| [citus.limit_clause_row_fetch_count](reference-parameters.md#cituslimit_clause_row_fetch_count-integer) | the number of rows to fetch per task for limit clause optimization |
+| [citus.local_table_join_policy](reference-parameters.md#cituslocal_table_join_policy-enum) | where data moves when doing a join between local and distributed tables |
+| [citus.multi_shard_commit_protocol](reference-parameters.md#citusmulti_shard_commit_protocol-enum) | the commit protocol to use when performing COPY on a hash distributed table |
+| [citus.propagate_set_commands](reference-parameters.md#cituspropagate_set_commands-enum) | which SET commands are propagated from the coordinator to workers |
+
+### Informational
+
+| Name | Description |
+||-|
+| [citus.explain_all_tasks](reference-parameters.md#citusexplain_all_tasks-boolean) | make EXPLAIN output show all tasks |
+| [citus.explain_analyze_sort_method](reference-parameters.md#citusexplain_analyze_sort_method-enum) | sort method of the tasks in the output of EXPLAIN ANALYZE |
+| [citus.log_remote_commands](reference-parameters.md#cituslog_remote_commands-boolean) | log queries the coordinator sends to worker nodes |
+| [citus.multi_task_query_log_level](reference-parameters.md#citusmulti_task_query_log_level-enum-multi_task_logging) | log-level for any query that generates more than one task |
+| [citus.stat_statements_max](reference-parameters.md#citusstat_statements_max-integer) | max number of rows to store in `citus_stat_statements` |
+| [citus.stat_statements_purge_interval](reference-parameters.md#citusstat_statements_purge_interval-integer) | frequency at which the maintenance daemon removes records from `citus_stat_statements` that are unmatched in `pg_stat_statements` |
+| [citus.stat_statements_track](reference-parameters.md#citusstat_statements_track-enum) | enable/disable statement tracking |
+
+### Inter-node connection management
+
+| Name | Description |
+||-|
+| [citus.executor_slow_start_interval](reference-parameters.md#citusexecutor_slow_start_interval-integer) | time to wait in milliseconds between opening connections to the same worker node |
+| [citus.force_max_query_parallelization](reference-parameters.md#citusforce_max_query_parallelization-boolean) | open as many connections as possible |
+| [citus.max_adaptive_executor_pool_size](reference-parameters.md#citusmax_adaptive_executor_pool_size-integer) | max worker connections per session |
+| [citus.max_cached_conns_per_worker](reference-parameters.md#citusmax_cached_conns_per_worker-integer) | number of connections kept open to speed up subsequent commands |
+| [citus.node_connection_timeout](reference-parameters.md#citusnode_connection_timeout-integer) | max duration (in milliseconds) to wait for connection establishment |
+
+### Data transfer
+
+| Name | Description |
+||-|
+| [citus.enable_binary_protocol](reference-parameters.md#citusenable_binary_protocol-boolean) | use PostgreSQL's binary serialization format (when applicable) to transfer data with workers |
+| [citus.max_intermediate_result_size](reference-parameters.md#citusmax_intermediate_result_size-integer) | size in KB of intermediate results for CTEs and subqueries that are unable to be pushed down |
+
+### Deadlock
+
+| Name | Description |
+||-|
+| [citus.distributed_deadlock_detection_factor](reference-parameters.md#citusdistributed_deadlock_detection_factor-floating-point) | time to wait before checking for distributed deadlocks |
+| [citus.log_distributed_deadlock_detection](reference-parameters.md#cituslog_distributed_deadlock_detection-boolean) | whether to log distributed deadlock detection-related processing in the server log |
+
+## System tables
+
+The Hyperscale (Citus) coordinator node contains metadata tables and views to
+help you see data properties and query activity across the server group.
+
+| Name | Description |
+||-|
+| [citus_dist_stat_activity](reference-metadata.md#distributed-query-activity) | distributed queries that are executing on all nodes |
+| [citus_lock_waits](reference-metadata.md#distributed-query-activity) | queries blocked throughout the server group |
+| [citus_shards](reference-metadata.md#shard-information-view) | the location of each shard, the type of table it belongs to, and its size |
+| [citus_stat_statements](reference-metadata.md#query-statistics-table) | stats about how queries are being executed, and for whom |
+| [citus_tables](reference-metadata.md#distributed-tables-view) | a summary of all distributed and reference tables |
+| [citus_worker_stat_activity](reference-metadata.md#distributed-query-activity) | queries on workers, including tasks on individual shards |
+| [pg_dist_colocation](reference-metadata.md#colocation-group-table) | which tables' shards should be placed together |
+| [pg_dist_node](reference-metadata.md#worker-node-table) | information about worker nodes in the server group |
+| [pg_dist_object](reference-metadata.md#distributed-object-table) | objects such as types and functions that have been created on the coordinator node and propagated to worker nodes |
+| [pg_dist_placement](reference-metadata.md#shard-placement-table) | the location of shard replicas on worker nodes |
+| [pg_dist_rebalance_strategy](reference-metadata.md#rebalancer-strategy-table) | strategies that `rebalance_table_shards` can use to determine where to move shards |
+| [pg_dist_shard](reference-metadata.md#shard-table) | the table, distribution column, and value ranges for every shard |
+| [time_partitions](reference-metadata.md#time-partitions-view) | information about each partition managed by such functions as `create_time_partitions` and `drop_old_time_partitions` |
++
+## Next steps
+
+* Learn some [useful diagnostic queries](howto-useful-diagnostic-queries.md)
+* Review the list of [configuration
+ parameters](reference-parameters.md#postgresql-parameters) in the underlying
+ PostgreSQL database.
postgresql Reference Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/reference-parameters.md
Previously updated : 08/10/2020 Last updated : 02/18/2022 # Server parameters
all worker nodes, or just for the coordinator node.
#### citus.use\_secondary\_nodes (enum)
-Sets the policy to use when choosing nodes for SELECT queries. If it
-is set to 'always', then the planner will query only nodes that are
+Sets the policy to use when choosing nodes for SELECT queries. If it's set to 'always', then the planner will query only nodes that are
marked as 'secondary' noderole in [pg_dist_node](reference-metadata.md#worker-node-table).
ALTER DATABASE foo
SET citus.node_connection_timeout = 30000; ```
+#### citus.log_remote_commands (boolean)
+
+Log all commands that the coordinator sends to worker nodes. For instance:
+
+```postgresql
+-- reveal the per-shard queries behind the scenes
+SET citus.log_remote_commands TO on;
+
+-- run a query on distributed table "github_events"
+SELECT count(*) FROM github_events;
+```
+
+The output reveals several queries running on workers because of the single
+`count(*)` query on the coordinator.
+
+```
+NOTICE: issuing SELECT count(*) AS count FROM public.github_events_102040 github_events WHERE true
+DETAIL: on server citus@private-c.demo.postgres.database.azure.com:5432 connectionId: 1
+NOTICE: issuing SELECT count(*) AS count FROM public.github_events_102041 github_events WHERE true
+DETAIL: on server citus@private-c.demo.postgres.database.azure.com:5432 connectionId: 1
+NOTICE: issuing SELECT count(*) AS count FROM public.github_events_102042 github_events WHERE true
+DETAIL: on server citus@private-c.demo.postgres.database.azure.com:5432 connectionId: 1
+... etc, one for each of the 32 shards
+```
+ ### Query Statistics #### citus.stat\_statements\_purge\_interval (integer)
SET citus.stat_statements_purge_interval TO 5;
This parameter is effective on the coordinator and can be changed at runtime.
+#### citus.stat_statements_max (integer)
+
+The maximum number of rows to store in `citus_stat_statements`. Defaults to
+50000, and may be changed to any value in the range 1000 - 10000000. Each row requires 140 bytes of storage, so setting `stat_statements_max` to its
+maximum value of 10M would consume 1.4 GB of memory.
+
+Changing this GUC won't take effect until PostgreSQL is restarted.
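+
+A minimal sketch, assuming a self-managed server where `ALTER SYSTEM` is
+available (a restart is still required for the new value to apply):
+
+```postgresql
+-- raise the row limit for citus_stat_statements, then restart PostgreSQL
+ALTER SYSTEM SET citus.stat_statements_max = 100000;
+```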
+
+#### citus.stat_statements_track (enum)
+
+Recording statistics for `citus_stat_statements` requires extra CPU resources.
+When the database is experiencing load, the administrator may wish to disable
+statement tracking. The `citus.stat_statements_track` GUC can turn tracking on
+and off.
+
+* **all:** (default) Track all statements.
+* **none:** Disable tracking.
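+
+For example, a minimal sketch that pauses tracking during a period of heavy
+load:
+
+```postgresql
+SET citus.stat_statements_track = 'none';
+```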
+ ### Data Loading #### citus.multi\_shard\_commit\_protocol (enum)
runtime.
Sets the commit protocol to use when performing COPY on a hash distributed table. On each individual shard placement, the COPY is performed in a transaction block to ensure that no data is ingested if an error occurs during
-the COPY. However, there is a particular failure case in which the COPY
+the COPY. However, there's a particular failure case in which the COPY
succeeds on all placements, but a (hardware) failure occurs before all transactions commit. This parameter can be used to prevent data loss in that case by choosing between the following commit protocols:
on the size of the cluster and rate of node failure. For example, you may want
to increase this replication factor if you run large clusters and observe node failures on a more frequent basis.
-#### citus.shard\_count (integer)
+### Planner Configuration
-Sets the shard count for hash-distributed tables and defaults to 32. This
-value is used by the
-[create_distributed_table](reference-functions.md#create_distributed_table) UDF
-when creating hash-distributed tables. This parameter can be set at run-time
-and is effective on the coordinator.
+#### citus.local_table_join_policy (enum)
-#### citus.shard\_max\_size (integer)
+This GUC determines how Hyperscale (Citus) moves data when doing a join between
+local and distributed tables. Customizing the join policy can help reduce the
+amount of data sent between worker nodes.
-Sets the maximum size to which a shard will grow before it gets split
-and defaults to 1 GB. When the source file\'s size (which is used for
-staging) for one shard exceeds this configuration value, the database
-ensures that a new shard gets created. This parameter can be set at
-run-time and is effective on the coordinator.
+Hyperscale (Citus) will send either the local or distributed tables to nodes as
+necessary to support the join. Copying table data is referred to as a
+"conversion." If a local table is converted, then it will be sent to any
+workers that need its data to perform the join. If a distributed table is
+converted, then it will be collected in the coordinator to support the join.
+The Citus planner will send only the necessary rows when doing a conversion.
-### Planner Configuration
+There are four modes available to express conversion preference:
+
+* **auto:** (Default) Citus will convert either all local or all distributed
+ tables to support local and distributed table joins. Citus decides which to
+ convert using a heuristic. It will convert distributed tables if they're
+ joined using a constant filter on a unique index (such as a primary key). The
+ conversion ensures less data gets moved between workers.
+* **never:** Citus won't allow joins between local and distributed tables.
+* **prefer-local:** Citus will prefer converting local tables to support local
+ and distributed table joins.
+* **prefer-distributed:** Citus will prefer converting distributed tables to
+ support local and distributed table joins. If the distributed tables are
+ huge, using this option might result in moving lots of data between workers.
+
+For example, assume `citus_table` is a distributed table distributed by the
+column `x`, and that `postgres_table` is a local table:
+
+```postgresql
+CREATE TABLE citus_table(x int primary key, y int);
+SELECT create_distributed_table('citus_table', 'x');
+
+CREATE TABLE postgres_table(x int, y int);
+
+-- even though the join is on primary key, there isn't a constant filter
+-- hence postgres_table will be sent to worker nodes to support the join
+SELECT * FROM citus_table JOIN postgres_table USING (x);
+
+-- there is a constant filter on a primary key, hence the filtered row
+-- from the distributed table will be pulled to coordinator to support the join
+SELECT * FROM citus_table JOIN postgres_table USING (x) WHERE citus_table.x = 10;
+
+SET citus.local_table_join_policy to 'prefer-distributed';
+-- since we prefer distributed tables, citus_table will be pulled to coordinator
+-- to support the join. Note that citus_table can be huge.
+SELECT * FROM citus_table JOIN postgres_table USING (x);
+
+SET citus.local_table_join_policy to 'prefer-local';
+-- even though there is a constant filter on primary key for citus_table
+-- postgres_table will be sent to necessary workers because we are using 'prefer-local'.
+SELECT * FROM citus_table JOIN postgres_table USING (x) WHERE citus_table.x = 10;
+```
#### citus.limit\_clause\_row\_fetch\_count (integer)
be used.
a round-robin fashion alternating between different replicas. This policy enables better cluster utilization when the shard count for a table is low compared to the number of workers.-- **first-replica:** The first-replica policy assigns tasks on the
- basis of the insertion order of placements (replicas) for the
+- **first-replica:** The first-replica policy assigns tasks based on the insertion order of placements (replicas) for the
shards. In other words, the fragment query for a shard is assigned to the worker that has the first replica of that shard. This method allows you to have strong guarantees about which shards will be used on which nodes (that is, stronger memory residency
coordinator.
### Intermediate Data Transfer
-#### citus.binary\_worker\_copy\_format (boolean)
-
-Use the binary copy format to transfer intermediate data between workers.
-During large table joins, Hyperscale (Citus) may have to dynamically
-repartition and shuffle data between different workers. By default, this data
-is transferred in text format. Enabling this parameter instructs the database
-to use PostgreSQL's binary serialization format to transfer this data. This
-parameter is effective on the workers and needs to be changed in the
-postgresql.conf file. After editing the config file, users can send a SIGHUP
-signal or restart the server for this change to take effect.
-
-#### citus.binary\_master\_copy\_format (boolean)
-
-Use the binary copy format to transfer data between coordinator and the
-workers. When running distributed queries, the workers transfer their
-intermediate results to the coordinator for final aggregation. By default, this
-data is transferred in text format. Enabling this parameter instructs the
-database to use PostgreSQL's binary serialization format to transfer this data.
-This parameter can be set at runtime and is effective on the coordinator.
- #### citus.max\_intermediate\_result\_size (integer) The maximum size in KB of intermediate results for CTEs that are unable
subqueries. The default is 1 GB, and a value of -1 means no limit.
Queries exceeding the limit will be canceled and produce an error message.
-### DDL
-
-#### citus.enable\_ddl\_propagation (boolean)
-
-Specifies whether to automatically propagate DDL changes from the coordinator
-to all workers. The default value is true. Because some schema changes require
-an access exclusive lock on tables, and because the automatic propagation
-applies to all workers sequentially, it can make a Hyperscale (Citus) cluster
-temporarily less responsive. You may choose to disable this setting and
-propagate changes manually.
- ### Executor Configuration #### General
Hyperscale (Citus) enforces commutativity rules and acquires appropriate locks
for modify operations in order to guarantee correctness of behavior. For example, it assumes that an INSERT statement commutes with another INSERT statement, but not with an UPDATE or DELETE statement. Similarly, it assumes
-that an UPDATE or DELETE statement does not commute with another UPDATE or
+that an UPDATE or DELETE statement doesn't commute with another UPDATE or
DELETE statement. This precaution means that UPDATEs and DELETEs require Hyperscale (Citus) to acquire stronger locks.
Hyperscale (Citus) has three executor types for running distributed SELECT
queries. The desired executor can be selected by setting this configuration parameter. The accepted values for this parameter are: -- **adaptive:** The default. It is optimal for fast responses to
+- **adaptive:** The default. It's optimal for fast responses to
queries that involve aggregations and colocated joins spanning across multiple shards. - **task-tracker:** The task-tracker executor is well suited for long
HINT: Queries are split to multiple tasks if they have to be split into several
STATEMENT: select * from foo; ```
+##### citus.propagate_set_commands (enum)
+
+Determines which SET commands are propagated from the coordinator to workers.
+The default value for this parameter is 'none'.
+
+The supported values are:
+
+* **none:** no SET commands are propagated.
+* **local:** only SET LOCAL commands are propagated.
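+
+For example, a minimal sketch that propagates SET LOCAL commands to the
+workers:
+
+```postgresql
+SET citus.propagate_set_commands TO 'local';
+```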
+ ##### citus.enable\_repartition\_joins (boolean) Ordinarily, attempting to perform repartition joins with the adaptive executor
will fail with an error message. However setting
temporarily switch into the task-tracker executor to perform the join. The default value is false.
+##### citus.enable_repartitioned_insert_select (boolean)
+
+By default, an INSERT INTO … SELECT statement that can’t be pushed down will
+attempt to repartition rows from the SELECT statement and transfer them between
+workers for insertion. However, if the target table has too many shards then
+repartitioning will probably not perform well. The overhead of processing the
+shard intervals when determining how to partition the results is too great.
+Repartitioning can be disabled manually by setting
+`citus.enable_repartitioned_insert_select` to false.
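+
+A minimal sketch of disabling repartitioned INSERT INTO … SELECT for the
+current session:
+
+```postgresql
+SET citus.enable_repartitioned_insert_select TO off;
+```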
+
+##### citus.enable_binary_protocol (boolean)
+
+Setting this parameter to true instructs the coordinator node to use
+PostgreSQL's binary serialization format (when applicable) to transfer data
+with workers. Some column types don't support binary serialization.
+
+Enabling this parameter is mostly useful when the workers must return large
+amounts of data. Examples are when many rows are requested, the rows have
+many columns, or they use wide types such as `hll` from the postgresql-hll
+extension.
+
+The default value is true for Postgres versions 14 and higher. For Postgres
+versions 13 and lower the default is false, which means all results are encoded
+and transferred in text format.
+
+##### citus.max_adaptive_executor_pool_size (integer)
+
+The max_adaptive_executor_pool_size setting limits worker connections from the current
+session. This GUC is useful for:
+
+* Preventing a single backend from getting all the worker resources
+* Providing priority management: designate low priority sessions with low
+ max_adaptive_executor_pool_size, and high priority sessions with higher
+ values
+
+The default value is 16.
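+
+For example, a minimal sketch that designates the current session as low
+priority by capping its worker connections (the value 4 is illustrative):
+
+```postgresql
+SET citus.max_adaptive_executor_pool_size TO 4;
+```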
+
+##### citus.executor_slow_start_interval (integer)
+
+Time to wait in milliseconds between opening connections to the same worker
+node.
+
+When the individual tasks of a multi-shard query take little time, they
+can often be finished over a single (often already cached) connection. To avoid
+redundantly opening more connections, the executor waits between
+connection attempts for the configured number of milliseconds. At the end of
+the interval, it increases the number of connections it's allowed to open next
+time.
+
+For long queries (those taking >500 ms), slow start might add latency, but for
+short queries it's faster. The default value is 10 ms.
+
+##### citus.max_cached_conns_per_worker (integer)
+
+Each backend opens connections to the workers to query the shards. At the end
+of the transaction, the configured number of connections is kept open to speed
+up subsequent commands. Increasing this value will reduce the latency of
+multi-shard queries, but will also increase overhead on the workers.
+
+The default value is 1. A larger value such as 2 might be helpful for clusters
+that use a small number of concurrent sessions, but it's not wise to go much
+further (for example, 16 would be too high).
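+
+For example, a minimal sketch for a cluster with few concurrent sessions:
+
+```postgresql
+SET citus.max_cached_conns_per_worker = 2;
+```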
+
+##### citus.force_max_query_parallelization (boolean)
+
+Simulates the deprecated and now nonexistent real-time executor. This is used
+to open as many connections as possible to maximize query parallelization.
+
+When this GUC is enabled, Citus will force the adaptive executor to use as many
+connections as possible while executing a parallel distributed query. If not
+enabled, the executor might choose to use fewer connections to optimize overall
+query execution throughput. Internally, setting this GUC to true will end up using one
+connection per task.
+
+One place where this is useful is in a transaction whose first query is
+lightweight and requires few connections, while a subsequent query would
+benefit from more connections. Citus decides how many connections to use in a
+transaction based on the first statement, which can throttle other queries
+unless we use the GUC to provide a hint.
+
+```postgresql
+BEGIN;
+-- add this hint
+SET citus.force_max_query_parallelization TO ON;
+
+-- a lightweight query that doesn't require many connections
+SELECT count(*) FROM table WHERE filter = x;
+
+-- a query that benefits from more connections, and can obtain
+-- them since we forced max parallelization above
+SELECT ... very .. complex .. SQL;
+COMMIT;
+```
+
+The default value is false.
+ #### Task tracker executor configuration ##### citus.task\_tracker\_delay (integer)
higher execution times. In those cases, it can be useful to enable this
parameter, after which the EXPLAIN output will include all tasks. Explaining all tasks may cause the EXPLAIN to take longer.
+##### citus.explain_analyze_sort_method (enum)
+
+Determines the sort method of the tasks in the output of EXPLAIN ANALYZE. The
+default value of `citus.explain_analyze_sort_method` is `execution-time`.
+
+The supported values are:
+
+* **execution-time:** sort by execution time.
+* **taskId:** sort by task ID.
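+
+For example, a minimal sketch that sorts tasks by task ID instead of execution
+time:
+
+```postgresql
+SET citus.explain_analyze_sort_method TO 'taskId';
+```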
+ ## PostgreSQL parameters * [DateStyle](https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-DATETIME-OUTPUT) - Sets the display format for date and time values
all tasks may cause the EXPLAIN to take longer.
* [exit_on_error](https://www.postgresql.org/docs/current/runtime-config-error-handling.html#GUC-EXIT-ON-ERROR) - Terminates session on any error * [extra_float_digits](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-EXTRA-FLOAT-DIGITS) - Sets the number of digits displayed for floating-point values * [force_parallel_mode](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-FORCE-PARALLEL-MODE) - Forces use of parallel query facilities
-* [from_collapse_limit](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-FROM-COLLAPSE-LIMIT) - Sets the FROM-list size beyond which subqueries are not collapsed
+* [from_collapse_limit](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-FROM-COLLAPSE-LIMIT) - Sets the FROM-list size beyond which subqueries aren't collapsed
* [geqo](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-GEQO) - Enables genetic query optimization * [geqo_effort](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-GEQO-EFFORT) - GEQO: effort is used to set the default for other GEQO parameters * [geqo_generations](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-GEQO-GENERATIONS) - GEQO: number of iterations of the algorithm
all tasks may cause the EXPLAIN to take longer.
* [gin_fuzzy_search_limit](https://www.postgresql.org/docs/current/runtime-config-client.html#id-1.6.6.14.5.2.2.1.3) - Sets the maximum allowed result for exact search by GIN * [gin_pending_list_limit](https://www.postgresql.org/docs/current/runtime-config-client.html#id-1.6.6.14.2.2.23.1.3) - Sets the maximum size of the pending list for GIN index * [idle_in_transaction_session_timeout](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-IDLE-IN-TRANSACTION-SESSION-TIMEOUT) - Sets the maximum allowed duration of any idling transaction
-* [join_collapse_limit](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-JOIN-COLLAPSE-LIMIT) - Sets the FROM-list size beyond which JOIN constructs are not flattened
+* [join_collapse_limit](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-JOIN-COLLAPSE-LIMIT) - Sets the FROM-list size beyond which JOIN constructs aren't flattened
* [lc_monetary](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-LC-MONETARY) - Sets the locale for formatting monetary amounts * [lc_numeric](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-LC-NUMERIC) - Sets the locale for formatting numbers * [lo_compat_privileges](https://www.postgresql.org/docs/current/runtime-config-compatible.html#GUC-LO-COMPAT-PRIVILEGES) - Enables backward compatibility mode for privilege checks on large objects
all tasks may cause the EXPLAIN to take longer.
* [log_statement_stats](https://www.postgresql.org/docs/current/runtime-config-statistics.html#id-1.6.6.12.3.2.1.1.3) - For each query, writes cumulative performance statistics to the server log * [log_temp_files](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-TEMP-FILES) - Logs the use of temporary files larger than this number of kilobytes * [maintenance_work_mem](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAINTENANCE-WORK-MEM) - Sets the maximum memory to be used for maintenance operations
-* [max_parallel_workers](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAX-PARALLEL-WORKERS) - Sets the maximum number of parallel workers than can be active at one time
+* [max_parallel_workers](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAX-PARALLEL-WORKERS) - Sets the maximum number of parallel workers that can be active at one time
* [max_parallel_workers_per_gather](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAX-PARALLEL-WORKERS-PER-GATHER) - Sets the maximum number of parallel processes per executor node * [max_pred_locks_per_page](https://www.postgresql.org/docs/current/runtime-config-locks.html#GUC-MAX-PRED-LOCKS-PER-PAGE) - Sets the maximum number of predicate-locked tuples per page * [max_pred_locks_per_relation](https://www.postgresql.org/docs/current/runtime-config-locks.html#GUC-MAX-PRED-LOCKS-PER-RELATION) - Sets the maximum number of predicate-locked pages and tuples per relation
all tasks may cause the EXPLAIN to take longer.
* [quote_all_identifiers](https://www.postgresql.org/docs/current/runtime-config-compatible.html#GUC-QUOTE-ALL-IDENTIFIERS) - When generating SQL fragments, quotes all identifiers * [random_page_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-RANDOM-PAGE-COST) - Sets the planner's estimate of the cost of a nonsequentially fetched disk page * [row_security](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-ROW-SECURITY) - Enables row security
-* [search_path](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-SEARCH-PATH) - Sets the schema search order for names that are not schema-qualified
+* [search_path](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-SEARCH-PATH) - Sets the schema search order for names that aren't schema-qualified
* [seq_page_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-SEQ-PAGE-COST) - Sets the planner's estimate of the cost of a sequentially fetched disk page * [session_replication_role](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-SESSION-REPLICATION-ROLE) - Sets the session's behavior for triggers and rewrite rules * [standard_conforming_strings](https://www.postgresql.org/docs/current/runtime-config-compatible.html#id-1.6.6.16.2.2.7.1.3) - Causes '...' strings to treat backslashes literally
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Storage account (Microsoft.Storage/storageAccounts) / File (file, file_secondary) | privatelink.file.core.windows.net | file.core.windows.net | | Storage account (Microsoft.Storage/storageAccounts) / Web (web, web_secondary) | privatelink.web.core.windows.net | web.core.windows.net | | Azure Data Lake File System Gen2 (Microsoft.Storage/storageAccounts) / Data Lake File System Gen2 (dfs, dfs_secondary) | privatelink.dfs.core.windows.net | dfs.core.windows.net |
-| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) / SQL | privatelink.documents.azure.com | documents.azure.com |
+| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) / Sql | privatelink.documents.azure.com | documents.azure.com |
| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) / MongoDB | privatelink.mongo.cosmos.azure.com | mongo.cosmos.azure.com | | Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) / Cassandra | privatelink.cassandra.cosmos.azure.com | cassandra.cosmos.azure.com | | Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) / Gremlin | privatelink.gremlin.cosmos.azure.com | gremlin.cosmos.azure.com | | Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) / Table | privatelink.table.cosmos.azure.com | table.cosmos.azure.com |
-| Azure Batch (Microsoft.Batch/batchAccounts) / batch account | privatelink.{region}.batch.azure.com | {region}.batch.azure.com |
+| Azure Batch (Microsoft.Batch/batchAccounts) / batchAccount | privatelink.{region}.batch.azure.com | {region}.batch.azure.com |
| Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) / postgresqlServer | privatelink.postgres.database.azure.com | postgres.database.azure.com | | Azure Database for MySQL (Microsoft.DBforMySQL/servers) / mysqlServer | privatelink.mysql.database.azure.com | mysql.database.azure.com | | Azure Database for MariaDB (Microsoft.DBforMariaDB/servers) / mariadbServer | privatelink.mariadb.database.azure.com | mariadb.database.azure.com | | Azure Key Vault (Microsoft.KeyVault/vaults) / vault | privatelink.vault.azure.net | vault.azure.net |
+| Azure Key Vault (Microsoft.KeyVault/managedHSMs) / Managed HSMs | privatelink.managedhsm.azure.net | managedhsm.azure.net |
| Azure Kubernetes Service - Kubernetes API (Microsoft.ContainerService/managedClusters) / management | privatelink.{region}.azmk8s.io | {region}.azmk8s.io | | Azure Search (Microsoft.Search/searchServices) / searchService | privatelink.search.windows.net | search.windows.net | | Azure Container Registry (Microsoft.ContainerRegistry/registries) / registry | privatelink.azurecr.io | azurecr.io |
For Azure services, use the recommended zone names as described in the following
| Azure Data Factory (Microsoft.DataFactory/factories) / dataFactory | privatelink.datafactory.azure.net | datafactory.azure.net | | Azure Data Factory (Microsoft.DataFactory/factories) / portal | privatelink.adf.azure.com | adf.azure.com | | Azure Cache for Redis (Microsoft.Cache/Redis) / redisCache | privatelink.redis.cache.windows.net | redis.cache.windows.net |
-| Azure Cache for Redis Enterprise (Microsoft.Cache/RedisEnterprise) / redisCache | privatelink.redisenterprise.cache.azure.net | redisenterprise.cache.azure.net |
-| Azure Purview (Microsoft.Purview)| privatelink.purview.azure.com | purview.azure.com |
-| Azure Purview (Microsoft.Purview)| privatelink.purviewstudio.azure.com | purview.azure.com |
+| Azure Cache for Redis Enterprise (Microsoft.Cache/RedisEnterprise) / redisEnterprise | privatelink.redisenterprise.cache.azure.net | redisenterprise.cache.azure.net |
+| Azure Purview (Microsoft.Purview) / portal | privatelink.purview.azure.com | purview.azure.com |
+| Azure Purview (Microsoft.Purview) / portal| privatelink.purviewstudio.azure.com | purview.azure.com |
| Azure Digital Twins (Microsoft.DigitalTwins) / digitalTwinsInstances | privatelink.digitaltwins.azure.net | digitaltwins.azure.net | | Azure HDInsight (Microsoft.HDInsight) | privatelink.azurehdinsight.net | azurehdinsight.net |
+| Azure Arc (Microsoft.HybridCompute) / hybridcompute | privatelink.his.arc.azure.com<br />privatelink.guestconfiguration.azure.com | his.arc.azure.com<br />guestconfiguration.azure.com |
+| Azure Media Services (Microsoft.Media) / keydelivery, liveevent, streamingendpoint | privatelink.media.azure.net | media.azure.net |
<sup>1</sup>To use with IoT Hub's built-in Event Hub compatible endpoint. To learn more, see [private link support for IoT Hub's built-in endpoint](../iot-hub/virtual-network-support.md#built-in-event-hub-compatible-endpoint)
For Azure services, use the recommended zone names as described in the following
| Storage account (Microsoft.Storage/storageAccounts) / File (file, file_secondary) | privatelink.file.core.chinacloudapi.cn | file.core.chinacloudapi.cn | | Storage account (Microsoft.Storage/storageAccounts) / Web (web, web_secondary) | privatelink.web.core.chinacloudapi.cn | web.core.chinacloudapi.cn | | Azure Data Lake File System Gen2 (Microsoft.Storage/storageAccounts) / Data Lake File System Gen2 (dfs, dfs_secondary) | privatelink.dfs.core.chinacloudapi.cn | dfs.core.chinacloudapi.cn |
-| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) / SQL | privatelink.documents.azure.cn | documents.azure.cn |
+| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) / Sql | privatelink.documents.azure.cn | documents.azure.cn |
| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) / MongoDB | privatelink.mongo.cosmos.azure.cn | mongo.cosmos.azure.cn | | Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) / Cassandra | privatelink.cassandra.cosmos.azure.cn | cassandra.cosmos.azure.cn | | Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) / Gremlin | privatelink.gremlin.cosmos.azure.cn | gremlin.cosmos.azure.cn |
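As an illustration of how the recommended zone names are used, here is a minimal sketch that creates one of the zones above and links it to a virtual network with Az PowerShell; the resource group, virtual network, and link names are placeholders.

```powershell
# Assumes the Az.PrivateDns and Az.Network modules; names are placeholders.
$rg = "myResourceGroup"

# Create the recommended private DNS zone, here for Azure Cosmos DB (SQL API).
$zone = New-AzPrivateDnsZone -ResourceGroupName $rg -Name "privatelink.documents.azure.com"

# Link the zone to the virtual network that hosts the private endpoint so that
# clients in that network resolve the private IP instead of the public one.
$vnet = Get-AzVirtualNetwork -ResourceGroupName $rg -Name "myVNet"
New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName $rg -ZoneName $zone.Name `
    -Name "cosmos-dns-link" -VirtualNetworkId $vnet.Id
```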
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
As you're creating private endpoints, consider the following:
- Network connections can be initiated only by clients that are connecting to the private endpoint. Service providers don't have a routing configuration to create connections into service customers. Connections can be established in a single direction only. -- A read-only network interface is created for the lifecycle of the resource. The interface is assigned a dynamic private IP address from the subnet that maps to the private-link resource. The value of the private IP address remains unchanged for the entire lifecycle of the private endpoint.
+- A read-only network interface is *automatically created* for the lifecycle of the private endpoint. The interface is assigned a dynamic private IP address from the subnet that maps to the private-link resource. The value of the private IP address remains unchanged for the entire lifecycle of the private endpoint.
- The private endpoint must be deployed in the same region and subscription as the virtual network.
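For illustration, here is a minimal sketch that creates a private endpoint for an Azure SQL logical server with Az PowerShell, keeping the endpoint in the virtual network's region as noted in the considerations above; all resource and subnet names are placeholders.

```powershell
# Assumes the Az.Network and Az.Sql modules; names are placeholders.
$rg     = "myResourceGroup"
$vnet   = Get-AzVirtualNetwork -ResourceGroupName $rg -Name "myVNet"
$subnet = $vnet.Subnets | Where-Object { $_.Name -eq "pe-subnet" }
$sql    = Get-AzSqlServer -ResourceGroupName $rg -ServerName "mydemoserver"

# The connection names the target resource and the sub-resource (group ID).
$conn = New-AzPrivateLinkServiceConnection -Name "sql-pe-connection" `
    -PrivateLinkServiceId $sql.ResourceId -GroupId "sqlServer"

# The private endpoint is created in the same region as the virtual network.
New-AzPrivateEndpoint -ResourceGroupName $rg -Name "sql-private-endpoint" `
    -Location $vnet.Location -Subnet $subnet -PrivateLinkServiceConnection $conn
```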
A private-link resource is the destination target of a specified private endpoin
| Azure Managed Disks | Microsoft.Compute/diskAccesses | managed disk | | Azure Container Registry | Microsoft.ContainerRegistry/registries | registry | | Azure Kubernetes Service - Kubernetes API | Microsoft.ContainerService/managedClusters | management |
-| Azure Data Factory | Microsoft.DataFactory/factories | data factory |
+| Azure Data Factory | Microsoft.DataFactory/factories | dataFactory |
| Azure Database for MariaDB | Microsoft.DBforMariaDB/servers | mariadbServer | | Azure Database for MySQL | Microsoft.DBforMySQL/servers | mysqlServer | | Azure Database for PostgreSQL - Single server | Microsoft.DBforPostgreSQL/servers | postgresqlServer | | Azure IoT Hub | Microsoft.Devices/IotHubs | iotHub | | Azure Digital Twins | Microsoft.DigitalTwins/digitalTwinsInstances | digitaltwinsinstance | | Azure Event Grid | Microsoft.EventGrid/domains | domain |
-| Azure Event Grid | Microsoft.EventGrid/topics | Event grid topic |
+| Azure Event Grid | Microsoft.EventGrid/topics | topic |
| Azure Event Hub | Microsoft.EventHub/namespaces | namespace | | Azure HDInsight | Microsoft.HDInsight/clusters | cluster |
-| Azure API for FHIR (Fast Healthcare Interoperability Resources) | Microsoft.HealthcareApis/services | service |
+| Azure API for FHIR (Fast Healthcare Interoperability Resources) | Microsoft.HealthcareApis/services | fhir |
| Azure Key Vault HSM (hardware security module) | Microsoft.Keyvault/managedHSMs | HSM | | Azure Key Vault | Microsoft.KeyVault/vaults | vault | | Azure Machine Learning | Microsoft.MachineLearningServices/workspaces | amlworkspace |
A private-link resource is the destination target of a specified private endpoin
| Azure App Service | Microsoft.Web/sites | sites | | Azure App Service | Microsoft.Web/staticSites | staticSite |
+> [!NOTE]
+> You can create private endpoints only on a General Purpose v2 (GPv2) storage account.
+
## Network security of private endpoints
When you use private endpoints, traffic is secured to a private-link resource. The platform performs an access-control check so that only connections destined for the specified private-link resource are allowed. To access more resources within the same Azure service, you need additional private endpoints.
purview Azure Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/azure-purview-connector-overview.md
The table below shows the supported capabilities for each data source. Select th
|File|[Amazon S3](register-scan-amazon-s3.md)|[Yes](register-scan-amazon-s3.md)| [Yes](register-scan-amazon-s3.md)| Limited* | No| |Services and apps| [Erwin](register-scan-erwin-source.md)| [Yes](register-scan-erwin-source.md#register)| No | [Yes](register-scan-erwin-source.md#lineage)| No| || [Looker](register-scan-looker-source.md)| [Yes](register-scan-looker-source.md#register)| No | [Yes](register-scan-looker-source.md#lineage)| No|
-|| [Power BI](register-scan-power-bi-tenant.md)| [Yes](register-scan-power-bi-tenant.md#register)| No | [Yes](how-to-lineage-powerbi.md)| No|
+|| [Power BI](register-scan-power-bi-tenant.md)| [Yes](register-scan-power-bi-tenant.md)| No | [Yes](how-to-lineage-powerbi.md)| No|
|| [Salesforce](register-scan-salesforce.md) | [Yes](register-scan-salesforce.md#register) | No | No | No | || [SAP ECC](register-scan-sapecc-source.md)| [Yes](register-scan-sapecc-source.md#register) | No | [Yes*](register-scan-sapecc-source.md#lineage) | No| || [SAP S/4HANA](register-scan-saps4hana-source.md) | [Yes](register-scan-saps4hana-source.md#register)| No | [Yes*](register-scan-saps4hana-source.md#lineage) | No|
purview Catalog Private Link Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-private-link-troubleshoot.md
This guide summarizes known limitations related to using private endpoints for A
- Using Azure portal, the ingestion private endpoints can be created via the Azure Purview portal experience described in the preceding steps. They can't be created from the Private Link Center. - Creating DNS A records for ingestion private endpoints inside existing Azure DNS Zones, while the Azure Private DNS Zones are located in a different subscription than the private endpoints is not supported via the Azure Purview portal experience. A records can be added manually in the destination DNS Zones in the other subscription. - Self-hosted integration runtime machine must be deployed in the same VNet or a peered VNet where Azure Purview account and ingestion private endpoints are deployed.-- We currently do not support scanning a Power BI tenant, which has a private endpoint configured with public access blocked.
+- We currently don't support scanning a cross-tenant Power BI tenant that has a private endpoint configured with public access blocked.
- For limitation related to Private Link service, see [Azure Private Link limits](../azure-resource-manager/management/azure-subscription-service-limits.md#private-link-limits). ## Recommended troubleshooting steps
purview Catalog Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-private-link.md
For scenarios where _ingestion_ private endpoint is used in your Azure Purview a
|SQL Server | Self-Hosted IR| SQL Authentication| |Azure Synapse Analytics | Self-Hosted IR| Service Principal| |Azure Synapse Analytics | Self-Hosted IR| SQL Authentication|
+|Power BI tenant (Same tenant) |Self-Hosted IR| Delegated Auth|
## Frequently Asked Questions
purview How To Data Owner Policy Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policy-authoring-generic.md
Previously updated : 2/2/2022 Last updated : 2/22/2022
# Authoring and publishing data owner access policies (preview)
This tutorial describes how a data owner can create, update and publish access policies in Azure Purview.
+## Prerequisites
+The following actions are needed before authoring access policies in Azure Purview:
+1. Configure permissions in the data source and in Azure Purview
+1. Register the data source in Azure Purview for Data Use Governance
+
+These tutorials list the prerequisites for the supported data sources:
+- [Azure Storage](./tutorial-data-owner-policies-storage.md#configuration)
+- [Resource Groups and Subscriptions](./tutorial-data-owner-policies-resource-group.md#configuration)
+ ## Create a new policy This section describes the steps to create a new policy in Azure Purview.
purview How To Integrate With Azure Security Products https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-integrate-with-azure-security-products.md
Title: Integrate with Azure security products
+ Title: Integrate with Azure security products
description: This article describes how to connect Azure security services and Azure Purview to get enriched security experiences. Previously updated : 11/05/2021 Last updated : 01/23/2022
# Integrate Azure Purview with Azure security products
This document explains the steps required for connecting an Azure Purview account with various Azure security products to enrich security experiences with data classifications and sensitivity labels.
## Microsoft Defender for Cloud
Azure Purview provides rich insights into the sensitivity of your data. This makes it valuable to security teams using Microsoft Defender for Cloud to manage the organization's security posture and protect against threats to their workloads. Data resources remain a popular target for malicious actors, making it crucial for security teams to identify, prioritize, and secure sensitive data resources across their cloud environments. The integration with Azure Purview expands visibility into the data layer, enabling security teams to prioritize resources that contain sensitive data. To take advantage of this [enrichment in Microsoft Defender for Cloud](../security-center/information-protection.md), no additional steps are needed in Azure Purview. Start exploring the security enrichments with Microsoft Defender for Cloud's [Inventory page](https://portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/25) where you can see the list of data sources with classifications and sensitivity labels.
The integration supports data sources in Azure and AWS; sensitive data discovere
6. Deleting the Azure Purview account will persist the data sensitivity enrichment for 30 days in Microsoft Defender for Cloud. 7. Custom classifications defined in the Microsoft 365 Compliance Center or in Azure Purview are not shared with Microsoft Defender for Cloud.
-## FAQ
-### **Why don't I see the AWS data source I have scanned with Azure Purview in Microsoft Defender for Cloud?**
+### FAQ
+#### **Why don't I see the AWS data source I have scanned with Azure Purview in Microsoft Defender for Cloud?**
Data sources must be onboarded to Microsoft Defender for Cloud as well. Learn more about how to [connect your AWS accounts](../security-center/quickstart-onboard-aws.md) and see your AWS data sources in Microsoft Defender for Cloud.
-### **Why don't I see sensitivity labels in Microsoft Defender for Cloud?**
+#### **Why don't I see sensitivity labels in Microsoft Defender for Cloud?**
Assets must first be labeled in Azure Purview before the labels are shown in Microsoft Defender for Cloud. Check if you have the [prerequisites of sensitivity labels](./how-to-automatically-label-your-content.md) in place. Once you scan the data, the labels will show up in Azure Purview and then automatically in Microsoft Defender for Cloud.
+## Microsoft Sentinel
+
+Microsoft Sentinel is a scalable, cloud-native, solution for both security information and event management (SIEM), and security orchestration, automation, and response (SOAR). Microsoft Sentinel delivers intelligent security analytics and threat intelligence across the enterprise, providing a single solution for attack detection, threat visibility, proactive hunting, and threat response.
+
+Integrate Azure Purview with Microsoft Sentinel to gain visibility into where on your network sensitive information is stored, in a way that helps you prioritize at-risk data for protection, and understand the most critical incidents and threats to investigate in Microsoft Sentinel.
+
+1. Start by ingesting your Azure Purview logs into Microsoft Sentinel through a data connector.
+1. Then use a Microsoft Sentinel workbook to view data such as assets scanned, classifications found, and labels applied by Azure Purview.
+1. Use analytics rules to create alerts for changes within data sensitivity.
+
+Customize the Azure Purview workbook and analytics rules to best suit the needs of your organization, and combine Azure Purview logs with data ingested from other sources to create enriched insights within Microsoft Sentinel.
+
+For more information, see [Tutorial: Integrate Microsoft Sentinel and Azure Purview](/azure/sentinel/purview-solution).
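Ingesting Azure Purview logs (step 1 above) amounts to enabling diagnostic settings on the Purview account and sending them to the Log Analytics workspace behind Microsoft Sentinel. The following is a minimal, hedged sketch using the classic Az.Monitor cmdlet; the resource names are placeholders, and the available log categories should be confirmed in the Purview account's Diagnostic settings blade.

```powershell
# Assumes the Az.Monitor, Az.Resources, and Az.OperationalInsights modules;
# resource names are placeholders.
$purviewId = (Get-AzResource -ResourceGroupName "myResourceGroup" `
    -ResourceType "Microsoft.Purview/accounts" -Name "mypurviewaccount").ResourceId

$workspaceId = (Get-AzOperationalInsightsWorkspace -ResourceGroupName "sentinel-rg" `
    -Name "sentinel-workspace").ResourceId

# Send the Purview account's logs to the workspace that backs Microsoft Sentinel.
Set-AzDiagnosticSetting -Name "send-to-sentinel" -ResourceId $purviewId `
    -WorkspaceId $workspaceId -Enabled $true
```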
+ ## Next steps - [Experiences in Microsoft Defender for Cloud enriched using sensitivity from Azure Purview](../security-center/information-protection.md)
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
It's important to register the data source in Azure Purview before setting up a
## Scan
+### Firewall settings
+
+If your database server has a firewall enabled, you'll need to update the firewall to allow access in one of two ways:
+
+1. Allow Azure connections through the firewall.
+1. Install a Self-Hosted Integration Runtime and give it access through the firewall.
+
+#### Allow Azure Connections
+
+Enabling Azure connections will allow Azure Purview to reach and connect to the server without updating the firewall itself. You can follow the How-to guide for [Connections from inside Azure](../azure-sql/database/firewall-configure.md#connections-from-inside-azure), or use the portal steps below (a PowerShell sketch follows the steps).
+
+1. Navigate to your database account
+1. Select the server name in the **Overview** page
+1. Select **Security > Firewalls and virtual networks**
+1. Select **Yes** for **Allow Azure services and resources to access this server**
+
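If you prefer scripting over the portal, the following one-liner is a minimal sketch of the same setting; it assumes the Az.Sql module and placeholder resource names, and it corresponds to the **Allow Azure services and resources to access this server** toggle.

```powershell
# Assumes the Az.Sql module; resource names are placeholders.
# Equivalent to enabling "Allow Azure services and resources to access this server".
New-AzSqlServerFirewallRule -ResourceGroupName "myResourceGroup" `
    -ServerName "mydemoserver" -AllowAllAzureIPs
```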
+#### Self-Hosted Integration Runtime
+
+A self-hosted integration runtime (SHIR) can be installed on a machine to connect with a resource in a private network.
+
+1. [Create and install a self-hosted integration runtime](./manage-integration-runtimes.md) on a personal machine, or a machine inside the same VNet as your database server.
+1. Check your database server firewall to confirm that the SHIR machine has access through the firewall. Add the IP of the machine if it doesn't already have access.
+1. If your Azure SQL Server is behind a private endpoint or in a VNet, you can use an [ingestion private endpoint](catalog-private-link-ingestion.md#deploy-self-hosted-integration-runtime-ir-and-scan-your-data-sources) to ensure end-to-end network isolation.
+ ### Authentication for a scan To scan your data source, you'll need to configure an authentication method in the Azure SQL Database. The following options are supported:
-* [**SQL Authentication**](#using-sql-authentication-for-scanning)
+* **SQL Authentication**
-* [**System-assigned managed identity**](#using-a-system-or-user-assigned-managed-identity-for-scanning) - As soon as the Azure Purview account is created, a system-assigned managed identity (SAMI) is created automatically in Azure AD tenant, and has the same name as your Azure Purview account. Depending on the type of resource, specific RBAC role assignments are required for the Azure Purview SAMI to be able to scan.
+* **System-assigned managed identity** - As soon as the Azure Purview account is created, a system-assigned managed identity (SAMI) is created automatically in Azure AD tenant, and has the same name as your Azure Purview account. Depending on the type of resource, specific RBAC role assignments are required for the Azure Purview SAMI to be able to scan.
-* [**User-assigned managed identity**](#using-a-system-or-user-assigned-managed-identity-for-scanning) (preview) - Similar to a SAMI, a user-assigned managed identity (UAMI) is a credential resource that can be used to allow Azure Purview to authenticate against Azure Active Directory. Depending on the type of resource, specific RBAC role assignments are required when using a UAMI credential to run scans.
+* **User-assigned managed identity** (preview) - Similar to a SAMI, a user-assigned managed identity (UAMI) is a credential resource that can be used to allow Azure Purview to authenticate against Azure Active Directory. Depending on the type of resource, specific RBAC role assignments are required when using a UAMI credential to run scans.
-* [**Service Principal**](#using-service-principal-for-scanning) - In this method, you can create a new or use an existing service principal in your Azure Active Directory tenant.
+* **Service Principal** - In this method, you can create a new service principal or use an existing one in your Azure Active Directory tenant.
>[!IMPORTANT] > If you are using a [self-hosted integration runtime](manage-integration-runtimes.md) to connect to your resource, system-assigned and user-assigned managed identities will not work. You need to use SQL Authentication or Service Principal Authentication.
-#### Using SQL Authentication for scanning
+Select your method of authentication from the tabs below for steps to authenticate with your Azure SQL Database.
+
+# [SQL authentication](#tab/sql-authentication)
> [!Note]
> Only the server-level principal login (created by the provisioning process) or members of the `loginmanager` database role in the master database can create new logins. About **15 minutes** after permission is granted, the Azure Purview account should have the appropriate permissions to scan the resource(s).
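A minimal sketch of creating such a login and granting it read access follows, assuming the `SqlServer` PowerShell module and SQL authentication; the server, database, login name, and password are placeholders, and `db_datareader` is shown as an example of the read permission a scan typically needs.

```powershell
# Assumes the SqlServer module (Install-Module SqlServer); names are placeholders.
$admin = Get-Credential   # server-level principal or a loginmanager member

# Create the login in the master database.
Invoke-Sqlcmd -ServerInstance "mydemoserver.database.windows.net" -Database "master" `
    -Credential $admin -Query "CREATE LOGIN purview_scan WITH PASSWORD = '<strong-password>';"

# Create a user for the login in the database to scan, and grant read access.
Invoke-Sqlcmd -ServerInstance "mydemoserver.database.windows.net" -Database "mydatabase" `
    -Credential $admin -Query "CREATE USER purview_scan FOR LOGIN purview_scan; ALTER ROLE db_datareader ADD MEMBER purview_scan;"
```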
You can follow the instructions in [CREATE LOGIN](/sql/t-sql/statements/create-l
1. Select **Create** to complete
-1. If your key vault is not connected to Azure Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
+1. If your key vault isn't connected to Azure Purview yet, you'll need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the key to set up your scan
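For reference, storing the login's password as a Key Vault secret can also be scripted. This is a minimal sketch assuming the Az.KeyVault module; the vault and secret names are placeholders.

```powershell
# Assumes the Az.KeyVault module; vault and secret names are placeholders.
$password = Read-Host -AsSecureString -Prompt "Password for the scanning SQL login"
Set-AzKeyVaultSecret -VaultName "my-purview-keyvault" -Name "sql-scan-password" -SecretValue $password
```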
You can follow the instructions in [CREATE LOGIN](/sql/t-sql/statements/create-l
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-key-vault-options.png" alt-text="Screenshot that shows the key vault option to create a secret":::
-#### Using a system or user assigned managed identity for scanning
+# [Managed identity](#tab/managed-identity)
>[!IMPORTANT] > If you are using a [self-hosted integration runtime](manage-integration-runtimes.md) to connect to your resource, system-assigned and user-assigned managed identities will not work. You need to use SQL Authentication or Service Principal Authentication.
The managed identity needs permission to get metadata for the database, schemas,
##### Configure Portal Authentication
-It is important to give your Azure Purview account's system-managed identity or [user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity) the permission to scan the Azure SQL DB. You can add the SAMI or UAMI at the Subscription, Resource Group, or Resource level, depending on the breadth of the scan.
+It's important to give your Azure Purview account's system-managed identity or [user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity) the permission to scan the Azure SQL DB. You can add the SAMI or UAMI at the Subscription, Resource Group, or Resource level, depending on the breadth of the scan.
-> [!Note]
+> [!Note]
> You need to be an owner of the subscription to be able to add a managed identity on an Azure resource. 1. From the [Azure portal](https://portal.azure.com), find either the subscription, resource group, or resource (for example, an Azure SQL Database) that the catalog should scan.
It is important to give your Azure Purview account's system-managed identity or
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-access-managed-identity.png" alt-text="Screenshot that shows the details to assign permissions for the Azure Purview account":::
-#### Using Service Principal for scanning
+# [Service principal](#tab/service-principal)
##### Creating a new service principal
-If you do not have a service principal, you can [follow the service principal guide to create one.](./create-service-principal-azure.md)
+If you don't have a service principal, you can [follow the service principal guide to create one.](./create-service-principal-azure.md)
> [!NOTE] > To create a service principal, it's required to register an application in your Azure AD tenant. If you do not have access to do this, your Azure AD Global Administrator, or other roles like Application Administrator can perform this operation.
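As an alternative to the portal, the sketch below creates the app registration and service principal with Az PowerShell; the display name is a placeholder, and the property that holds the application (client) ID can vary slightly between Az module versions.

```powershell
# Assumes the Az.Resources module; the display name is a placeholder.
$sp = New-AzADServicePrincipal -DisplayName "purview-sql-scan-sp"

# Application (client) ID, needed later when creating the Azure Purview credential.
# (Older Az versions expose this as ApplicationId instead of AppId.)
$sp.AppId
```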
The service principal needs permission to get metadata for the database, schemas
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-secret.png" alt-text="Screenshot that shows the key vault option to generate a secret for Service Principal":::
-1. Give the secret a **Name** of your choice.
+1. Give the secret a **Name** of your choice.
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-create-secret.png" alt-text="Screenshot that shows the key vault option to enter the secret values":::
-1. The secret's **Value** will be the Service Principal's **Secret Value**. If you have already created a secret for your service principal, you can find its value in **Client credentials** on your secret's overview page.
+1. The secret's **Value** will be the Service Principal's **Secret Value**. If you've already created a secret for your service principal, you can find its value in **Client credentials** on your secret's overview page.
If you need to create a secret, you can follow the steps in the [service principal guide](create-service-principal-azure.md#adding-a-secret-to-the-client-credentials).
The service principal needs permission to get metadata for the database, schemas
:::image type="content" source="media/register-scan-azure-sql-database/select-create.png" alt-text="Screenshot that shows the Key Vault Create a secret menu, with the Create button highlighted.":::
-1. If your key vault is not connected to Azure Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
+1. If your key vault isn't connected to Azure Purview yet, you'll need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
1. Then, [create a new credential](manage-credentials.md#create-a-new-credential).
The service principal needs permission to get metadata for the database, schemas
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-sp-cred.png" alt-text="Screenshot that shows the key vault option to create a secret for Service Principal":::
-### Firewall settings
-
-If your database server has a firewall enabled, you will need to update the firewall to allow access in one of two ways:
-
-1. Allow Azure connections through the firewall.
-1. Install a Self-Hosted Integration Runtime and give it access through the firewall.
-
-#### Allow Azure Connections
-
-Enabling Azure connections will allow Azure Purview to reach and connect the server without updating the firewall itself. You can follow the How-to guide for [Connections from inside Azure](../azure-sql/database/firewall-configure.md#connections-from-inside-azure).
-
-1. Navigate to your database account
-1. Select the server name in the **Overview** page
-1. Select **Security > Firewalls and virtual networks**
-1. Select **Yes** for **Allow Azure services and resources to access this server**
-
-#### Self-Hosted Integration Runtime
-
-A self-hosted integration runtime (SHIR) can be installed on a machine to connect with a resource in a private network.
-
-1. [Create and install a self-hosted integration runtime](./manage-integration-runtimes.md) on a personal machine, or a machine inside the same VNet as your database server.
-1. Check your database server firewall to confirm that the SHIR machine has access through the firewall. Add the IP of the machine if it does not already have access.
-1. If your Azure SQL Server is behind a private endpoint or in a VNet, you can use an [ingestion private endpoint](catalog-private-link-ingestion.md#deploy-self-hosted-integration-runtime-ir-and-scan-your-data-sources) to ensure end-to-end network isolation.
+ ### Creating the scan
A self-hosted integration runtime (SHIR) can be installed on a machine to connec
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-new-scan.png" alt-text="Screenshot that shows the screen to create a new scan":::
-#### If using SQL Authentication
+Select your method of authentication from the tabs below for scanning steps.
+
+# [SQL authentication](#tab/sql-authentication)
1. Provide a **Name** for the scan, select **Database selection method** as _Enter manually_, enter the **Database name** and the **Credential** created earlier, choose the appropriate collection for the scan and select **Test connection** to validate the connection. Once the connection is successful, select **Continue** :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-sql-auth.png" alt-text="Screenshot that shows the SQL Authentication option for scanning":::
-#### If using a system or user assigned managed identity
+# [Managed identity](#tab/managed-identity)
1. Provide a **Name** for the scan, select the SAMI or UAMI under **Credential**, choose the appropriate collection for the scan.
A self-hosted integration runtime (SHIR) can be installed on a machine to connec
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-test.png" alt-text="Screenshot that allows the managed identity option to run the scan":::
-#### If using Service Principal
+# [Service principal](#tab/service-principal)
1. Provide a **Name** for the scan, choose the appropriate collection for the scan, and select the **Credential** dropdown to select the credential created earlier.
A self-hosted integration runtime (SHIR) can be installed on a machine to connec
1. Select **Test connection**. On a successful connection, select **Continue**. ++ ### Scoping and running the scan 1. You can scope your scan to specific folders and subfolders by choosing the appropriate items in the list.
Scans can be managed or run again on completion
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Azure Purview and your data.
+Now that you've registered your source, follow the below guides to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
Previously updated : 11/02/2021 Last updated : 02/02/2022
This article outlines how to register a Power BI tenant, and how to authenticate
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**| ||||||||
-| [Yes](#register)| [Yes](#scan)| No | No | No | No| [Yes](how-to-lineage-powerbi.md)|
+| [Yes](#prerequisites)| [Yes](#prerequisites)| Yes | No | No | No| [Yes](how-to-lineage-powerbi.md)|
-> [!Note]
-> If the Azure Purview instance and the Power BI tenant are in the same Azure tenant, you can only use managed identity (MSI) authentication to set up a scan of a Power BI tenant.
+### Supported scenarios for Power BI scans
-### Known limitations
+|**Azure Purview public access allowed/denied** |**Power BI public access allowed /denied** | **Power BI tenant same/cross** | **Runtime option** |
+|||||
+|Allowed |Allowed |Same tenant |[Azure Runtime & Managed Identity](#authenticate-to-power-bi-tenant-managed-identity-only) |
+|Allowed |Allowed |Same tenant |[Self-hosted runtime & Delegated authentication](#scan-same-tenant-using-self-hosted-ir-and-delegated-authentication) |
+|Allowed |Denied |Same tenant |[Self-hosted runtime & Delegated authentication](#scan-same-tenant-using-self-hosted-ir-and-delegated-authentication) |
+|Denied |Allowed |Same tenant |[Self-hosted runtime & Delegated authentication](#scan-same-tenant-using-self-hosted-ir-and-delegated-authentication) |
+|Denied |Denied |Same tenant |[Self-hosted runtime & Delegated authentication](#scan-same-tenant-using-self-hosted-ir-and-delegated-authentication) |
+|Allowed |Allowed |Cross-tenant |[Azure Runtime & Delegated authentication](#cross-power-bi-tenant-registration-and-scan) |
+|Allowed |Allowed |Cross-tenant |[Self-hosted runtime & Delegated authentication](#cross-power-bi-tenant-registration-and-scan) |
-- For cross-tenant scenario, no UX experience currently available to register and scan cross Power BI tenant.-- By Editing the Power BI cross tenant registered with PowerShell using Azure Purview Studio will tamper the data source registration with inconsistent scan behavior.-- Review [Power BI Metadata scanning limitations](/power-bi/admin/service-admin-metadata-scanning).
+### Known limitations
+- If your Azure Purview account or Power BI tenant is protected behind a private endpoint, a self-hosted runtime is the only option for scanning.
+- Delegated authentication is the only supported authentication option if a self-hosted integration runtime is used during the scan.
+- For the cross-tenant scenario, delegated authentication is the only supported option for scanning.
+- You can create only one scan for a Power BI data source that is registered in your Azure Purview account.
+- If the Power BI dataset schema isn't shown after a scan, it's due to one of the current limitations of the [Power BI Metadata scanner](/power-bi/admin/service-admin-metadata-scanning).
## Prerequisites
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An active [Azure Purview resource](create-catalog-portal.md).
+
+- You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+
+- If self-hosted integration runtime is used:
-* An active [Azure Purview resource](create-catalog-portal.md).
+   - Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). The minimum required version is 5.14.8055.1. For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
+
+ - Ensure [JDK 8 or later](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html), is installed on the virtual machine where the self-hosted integration runtime is installed.
+
+## Same Power BI tenant registration and scan
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+### Authentication options
-## Register
+- Managed Identity
+- Delegated Authentication
-This section describes how to register a Power BI tenant in Azure Purview in both [same-tenant](#authentication-for-a-same-tenant-scenario) and [cross-tenant](#steps-to-register-cross-tenant) scenarios.
+### Authenticate to Power BI tenant-managed identity only
-### Authentication for a same-tenant scenario
+> [!Note]
+> Follow the steps in this section only if you're planning to use **Managed Identity** as the authentication option.
-For both same-tenant and cross-tenant scenarios, to set up authentication, create a security group and add the Purview-managed identity to it.
+In the Azure Active Directory tenant where the Power BI tenant is located (a PowerShell sketch follows these steps):
1. In the [Azure portal](https://portal.azure.com), search for **Azure Active Directory**.
-1. Create a new security group in your Azure Active Directory, by following [Create a basic group and add members using Azure Active Directory](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
+
+2. Create a new security group in your Azure Active Directory, by following [Create a basic group and add members using Azure Active Directory](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
> [!Tip] > You can skip this step if you already have a security group you want to use.
-1. Select **Security** as the **Group Type**.
+3. Select **Security** as the **Group Type**.
:::image type="content" source="./media/setup-power-bi-scan-PowerShell/security-group.png" alt-text="Screenshot of security group type.":::
-1. Add your Purview-managed identity to this security group. Select **Members**, then select **+ Add members**.
+4. Add your Azure Purview managed identity to this security group. Select **Members**, then select **+ Add members**.
:::image type="content" source="./media/setup-power-bi-scan-PowerShell/add-group-member.png" alt-text="Screenshot of how to add the catalog's managed instance to group.":::
-1. Search for your Purview-managed identity and select it.
+5. Search for your Azure Purview managed identity and select it.
:::image type="content" source="./media/setup-power-bi-scan-PowerShell/add-catalog-to-group-by-search.png" alt-text="Screenshot showing how to add catalog by searching for its name."::: You should see a success notification showing you that it was added.
- :::image type="content" source="./media/setup-power-bi-scan-PowerShell/success-add-catalog-msi.png" alt-text="Screenshot showing successful addition of catalog MSI.":::
+ :::image type="content" source="./media/setup-power-bi-scan-PowerShell/success-add-catalog-msi.png" alt-text="Screenshot showing successful addition of catalog managed identity.":::
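The group creation and membership steps above can also be scripted. The following is a minimal sketch using Az PowerShell; the group and Purview account names are placeholders, and the managed identity is looked up by the account's display name.

```powershell
# Assumes the Az.Resources module; names are placeholders.
$group = New-AzADGroup -DisplayName "purview-powerbi-scan" -MailNickname "purview-powerbi-scan"

# The Purview managed identity's service principal shares the account's name.
$purviewMsi = Get-AzADServicePrincipal -DisplayName "<purview-account-name>"

Add-AzADGroupMember -TargetGroupObjectId $group.Id -MemberObjectId $purviewMsi.Id
```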
-#### Associate the security group with the tenant
+### Associate the security group with Power BI tenant
1. Log into the [Power BI admin portal](https://app.powerbi.com/admin-portal/tenantSettings).
-1. Select the **Tenant settings** page.
+
+2. Select the **Tenant settings** page.
> [!Important] > You need to be a Power BI Admin to see the tenant settings page.
-1. Select **Admin API settings** > **Allow service principals to use read-only Power BI admin APIs (Preview)**.
-1. Select **Specific security groups**.
+3. Select **Admin API settings** > **Allow service principals to use read-only Power BI admin APIs (Preview)**.
+
+4. Select **Specific security groups**.
:::image type="content" source="./media/setup-power-bi-scan-PowerShell/allow-service-principals-power-bi-admin.png" alt-text="Image showing how to allow service principals to get read-only Power BI admin API permissions.":::
-1. Select **Admin API settings** > **Enhance admin APIs responses with detailed metadata** > Enable the toggle to allow Azure Purview Data Map automatically discover the detailed metadata of Power BI datasets as part of its scans.
+5. Select **Admin API settings** > **Enhance admin APIs responses with detailed metadata** > Enable the toggle to allow Azure Purview Data Map to automatically discover the detailed metadata of Power BI datasets as part of its scans.
> [!IMPORTANT] > After you update the Admin API settings on your power bi tenant, wait around 15 minutes before registering a scan and test connection.
For both same-tenant and cross-tenant scenarios, to set up authentication, creat
> [!Note] > You can remove the security group from your developer settings, but the metadata previously extracted won't be removed from the Azure Purview account. You can delete it separately, if you wish.
-### Steps to register in the same tenant
+### Register same Power BI tenant
-Now that you've given the Purview-Managed Identity permissions to connect to the Admin API of your Power BI tenant, you can set up your scan from the Azure Purview Studio.
+This section describes how to register a Power BI tenant in Azure Purview for same-tenant scenario.
1. Select the **Data Map** on the left navigation.
Now that you've given the Purview-Managed Identity permissions to connect to the
The name must be between 3-63 characters long and must contain only letters, numbers, underscores, and hyphens. Spaces aren't allowed.
- By default, the system will find the Power BI tenant that exists in the same Azure subscription.
+ By default, the system will find the Power BI tenant that exists in the same Azure Active Directory tenant.
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-datasource-registered.png" alt-text="Image showing the registered Power BI data source.":::
- > [!Note]
- > For Power BI, data source registration and scan is allowed for only one instance.
+### Scan same Power BI tenant
+
+#### Scan same tenant using Azure IR and Managed Identity
-### Steps to register cross-tenant
+This scenario is suitable if both Azure Purview and the Power BI tenant are configured to allow public access in the network settings.
-In a cross-tenant scenario, you can use PowerShell to register and scan your Power BI tenants. You can browse, and search assets of remote tenant using Azure Purview Studio through the UI experience.
+To create and run a new scan, do the following:
-Consider using this guide if the Azure AD tenant where Power BI tenant is located, is different than the Azure AD tenant where your Azure Purview account is being provisioned.
-Use the following steps to register and scan one or more Power BI tenants in Azure Purview in a cross-tenant scenario:
+1. In the Azure Purview Studio, navigate to the **Data map** in the left menu.
-1. Download the [Managed Scanning PowerShell Modules](https://github.com/Azure/Purview-Samples/blob/master/Cross-Tenant-Scan-PowerBI/ManagedScanningPowerShell.zip), and extract its contents to the location of your choice.
+1. Navigate to **Sources**.
-1. On your computer, enter **PowerShell** in the search box on the Windows taskbar. In the search list, select and hold (or right-click) **Windows PowerShell**, and then select **Run as administrator**.
+1. Select the registered Power BI source.
-1. Install and import module in your machine if it has not been installed yet.
+1. Select **+ New scan**.
- ```powershell
- Install-Module -name az
- Import-Module -name az
- Login-AzAccount
- ```
+2. Give your scan a name. Then select the option to include or exclude the personal workspaces.
-1. Sign into your Azure environment using the Azure AD Administrator credential where your Power BI tenant is located.
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-setup.png" alt-text="Image showing Power BI scan setup.":::
- ```powershell
- Login-AzAccount
- ```
+ > [!Note]
+   > Switching the configuration of a scan to include or exclude a personal workspace will trigger a full scan of the Power BI source.
-1. In the PowerShell window, enter the following command, replacing `<path-to-managed-scanning-powershell-modules>` with the folder path of the extracted modules such as `C:\Program Files\WindowsPowerShell\Modules\ManagedScanningPowerShell`
+3. Select **Test Connection** before continuing to next steps. If **Test Connection** failed, select **View Report** to see the detailed status and troubleshoot the problem.
+   1. Access - Failed status means the user authentication failed. Scans using managed identity will always pass because no user authentication is required.
+   2. Assets (+ lineage) - Failed status means the Azure Purview - Power BI authorization has failed. Make sure the Azure Purview managed identity is added to the security group associated in the Power BI admin portal.
+   3. Detailed metadata (Enhanced) - Failed status means the following setting is disabled in the Power BI admin portal - **Enhance admin APIs responses with detailed metadata**
- ```powershell
- dir -Path <path-to-managed-scanning-powershell-modules> -Recurse | Unblock-File
- ```
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-test-connection-status-report.png" alt-text="Screenshot of test connection status report page.":::
-1. Enter the following command to install the PowerShell modules.
+4. Set up a scan trigger. Your options are **Recurring**, and **Once**.
- ```powershell
- Import-Module 'C:\Program Files\WindowsPowerShell\Modules\ManagedScanningPowerShell\Microsoft.DataCatalog.Management.Commands.dll'
- ```
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/scan-trigger.png" alt-text="Screenshot of the Azure Purview scan scheduler.":::
-1. Use the same PowerShell session to set the following parameters. Update `purview_tenant_id` with Azure AD tenant ID where Azure Purview is deployed, `powerbi_tenant_id` with your Azure AD tenant where Power BI tenant is located, and `purview_account_name` is your existing Azure Purview account.
+5. On **Review new scan**, select **Save and run** to launch your scan.
- ```powershell
- $azuretenantId = '<purview_tenant_id>'
- $powerBITenantIdToScan = '<powerbi_tenant_id>'
- $purviewaccount = '<purview_account_name>'
- ```
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/save-run-power-bi-scan-managed-identity.png" alt-text="Screenshot of Save and run Power BI source using Managed Identity.":::
-1. Create a cross-tenant Service Principal.
+#### Scan same tenant using Self-hosted IR and Delegated authentication
- 1. Create an App Registration in your Azure Active Directory tenant where Power BI is located. Make sure you update `password` field with a strong password and update `app_display_name` with a non-existent application name in your Azure AD tenant where Power BI tenant is hosted.
+This scenario can be used when Azure Purview, the Power BI tenant, or both are configured to use private endpoints and deny public access. This option is also applicable if Azure Purview and the Power BI tenant are configured to allow public access.
- ```powershell
- $SecureStringPassword = ConvertTo-SecureString -String <'password'> -AsPlainText -Force
- $AppName = '<app_display_name>'
- New-AzADApplication -DisplayName $AppName -Password $SecureStringPassword
- ```
+> [!Note]
+> Additional configuration may be required for your Power BI tenant and Azure Purview account if you're planning to scan the Power BI tenant through a private network where the Azure Purview account, the Power BI tenant, or both are configured with a private endpoint and public access denied.
+> For more information related to Power BI network, see [How to configure private endpoints for accessing Power BI](/power-bi/admin/service-security-private-links.md).
+> For more information about Azure Purview network settings, see [Use private endpoints for your Azure Purview account](catalog-private-link.md).
- 1. From Azure Active Directory dashboard, select newly created application and then select **App registration**. Assign the application the following delegated permissions and grant admin consent for the tenant:
+To create and run a new scan, do the following:
- - Power BI Service Tenant.Read.All
- - Microsoft Graph openid
+1. Create a user account in the Azure Active Directory tenant and assign the user the **Power BI Administrator** Azure Active Directory role. Take note of the username, and sign in to change the password.
- :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-delegated-permissions.png" alt-text="Screenshot of delegated permissions for Power BI Service and Microsoft Graph.":::
+2. Navigate to your Azure key vault.
- 1. From Azure Active Directory dashboard, select newly created application and then select **Authentication**. Under **Supported account types** select **Accounts in any organizational directory (Any Azure AD directory - Multitenant)**.
+3. Select **Settings** > **Secrets** and select **+ Generate/Import**.
- :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-multitenant.png" alt-text="Screenshot of account type support multitenant.":::
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-key-vault.png" alt-text="Screenshot how to navigate to Azure Key Vault.":::
- 1. Under **Implicit grant and hybrid flows**, ensure to select **ID tokens (used for implicit and hybrid flows)**
-
- :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-id-token-hybrid-flows.png" alt-text="Screenshot of ID token hybrid flows.":::
+4. Enter a name for the secret and for **Value**, type the newly created password for the Azure AD user. Select **Create** to complete.
- 1. Construct tenant-specific sign-in URL for your service principal by running the following url in your web browser:
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-key-vault-secret.png" alt-text="Screenshot how to generate an Azure Key Vault secret.":::
- ```
- https://login.microsoftonline.com/<purview_tenant_id>/oauth2/v2.0/authorize?client_id=<client_id_to_delegate_the_pbi_admin>&scope=openid&response_type=id_token&response_mode=fragment&state=1234&nonce=67890
- ```
-
- Make sure you replace the parameters with correct information:
-
- - `<purview_tenant_id>` is the Azure Active Directory tenant ID (GUID) where Azure Purview account is provisioned.
- - `<client_id_to_delegate_the_pbi_admin>` is the application ID corresponding to your service principal
+5. If your key vault is not connected to Azure Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
- 1. Sign-in using any non-admin account. This is required to provision your service principal in the foreign tenant.
+6. Create an App Registration in your Azure Active Directory tenant. Take note of the Client ID (App ID).
- 1. When prompted, accept permission requested for _View your basic profile_ and _Maintain access to data you have given it access to_.
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-create-service-principle.png" alt-text="Screenshot how to create a Service principle.":::
+
+7. From the Azure Active Directory dashboard, select the newly created application and then select **App registration**. Assign the application the following delegated permissions, and grant admin consent for the tenant:
-1. Update `client_id_to_delegate_the_pbi_admin` with Application (client) ID of newly created application and run the following command in your PowerShell session:
+ - Power BI Service Tenant.Read.All
+ - Microsoft Graph openid
- ```powershell
- $ServicePrincipalId = '<client_id_to_delegate_the_pbi_admin>'
- ```
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-delegated-permissions.png" alt-text="Screenshot of delegated permissions for Power BI Service and Microsoft Graph.":::
-1. Create a user account in Azure Active Directory tenant where Power BI tenant is located and assign Azure AD role, **Power BI Administrator**. Update `pbi_admin_username` and `pbi_admin_password` with corresponding information and execute the following lines in the PowerShell terminal:
+8. In the Azure Purview Studio, navigate to the **Data map** in the left menu.
- ```powershell
- $UserName = '<pbi_admin_username>'
- $Password = '<pbi_admin_password>'
- ```
+9. Navigate to **Sources**.
- > [!Note]
- > If you create a user account in Azure Active Directory from the portal, the public client flow option is **No** by default. You need to toggle it to **Yes**:
- > <br>
- > :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-public-client-flows.png" alt-text="Screenshot of public client flows.":::
+10. Select the registered Power BI source.
+
+11. Select **+ New scan**.
+
+12. Give your scan a name. Then select the option to include or exclude the personal workspaces.
+
+ >[!Note]
+   > Switching the configuration of a scan to include or exclude a personal workspace will trigger a full scan of the Power BI source.
+
+13. Select your self-hosted integration runtime from the drop-down list.
+
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-shir.png" alt-text="Image showing Power BI scan setup using SHIR for same tenant.":::
+
+14. For the **Credential**, select **Delegated authentication** and click **+ New** to create a new credential.
+
+15. Create a new credential and provide required parameters:
-1. In Azure Purview Studio, assign _Data Source Admin_ to the Service Principal and the Power BI user at the root collection.
+ - **Name**: Provide a unique name for credential
+ - **Client ID**: Use Service Principal Client ID (App ID) you created earlier
+ - **User name**: Provide the username of Power BI Administrator you created earlier
+ - **Password**: Select the appropriate Key vault connection and the **Secret name** where the Power BI account password was saved earlier.
-1. To register the cross-tenant Power BI tenant as a new data source inside Azure Purview account, update `service_principal_key` and execute the following cmdlets in the PowerShell session:
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-delegated-authentication.png" alt-text="Image showing Power BI scan setup using Delegated authentication.":::
- ```powershell
- Set-AzDataCatalogSessionSettings -DataCatalogSession -TenantId $azuretenantId -ServicePrincipalAuthentication -ServicePrincipalApplicationId $ServicePrincipalId -ServicePrincipalKey '<service_principal_key>' -Environment Production -DataCatalogAccountName $purviewaccount
+16. Select **Test Connection** before continuing to next steps. If **Test Connection** failed, select **View Report** to see the detailed status and troubleshoot the problem
+   1. Access - Failed status means the user authentication failed. Scans using managed identity will always pass because no user authentication is required.
+   2. Assets (+ lineage) - Failed status means the Azure Purview - Power BI authorization has failed. Make sure the Azure Purview managed identity is added to the security group associated in the Power BI admin portal.
+   3. Detailed metadata (Enhanced) - Failed status means the following setting is disabled in the Power BI admin portal - **Enhance admin APIs responses with detailed metadata**
- Set-AzDataCatalogDataSource -Name 'pbidatasource' -AccountType PowerBI -Tenant $powerBITenantIdToScan -Verbose
- ```
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-test-connection-status-report.png" alt-text="Screenshot of test connection status report page.":::
-## Scan
+17. Set up a scan trigger. Your options are **Recurring**, and **Once**.
-Follow the steps below to scan a Power BI tenant to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md)
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/scan-trigger.png" alt-text="Screenshot of the Azure Purview scan scheduler.":::
-This guide covers both [same-tenant](#create-and-run-scan-for-same-tenant-power-bi) and [cross-tenant](#create-and-run-scan-for-cross-tenant-power-bi) scanning scenarios.
+18. On **Review new scan**, select **Save and run** to launch your scan.
-### Create and run scan for same-tenant Power BI
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/save-run-power-bi-scan.png" alt-text="Screenshot of Save and run Power BI source.":::
-To create and run a new scan, do the following:
+## Cross Power BI tenant registration and scan
-1. In the Azure Purview Studio, navigate to the **Data map** in the left menu.
+### Authentication options
+- Delegated Authentication
-1. Navigate to **Sources**.
+### Cross Power BI tenant registration
-1. Select the registered Power BI source.
+1. Select the **Data Map** on the left navigation.
-1. Select **+ New scan**.
+1. Then select **Register**.
-1. Give your scan a name. Then select the option to include or exclude the personal workspaces. Notice that the only authentication method supported is **Managed Identity**.
+ Select **Power BI** as your data source.
- :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-setup.png" alt-text="Image showing Power BI scan setup.":::
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/select-power-bi-data-source.png" alt-text="Image showing the list of data sources available to choose.":::
- > [!Note]
- > Switching the configuration of a scan to include or exclude a personal workspace will trigger a full scan of PowerBI source.
+1. Give your Power BI instance a friendly name. The name must be between 3 and 63 characters long, and can contain only letters, numbers, underscores, and hyphens. Spaces aren't allowed.
-1. Select **Test Connection** before continuing to next steps. If **Test Connection** failed, select **View Report** to see the detailed status and troubleshoot the problem
- 1. Access - Failed status means the user authentication failed. Scans using managed identity will always pass because no user authentication required.
- 1. Assets (+ lineage) - Failed status means the Azure Purview - Power BI authorization has failed. Make sure the Purview-managed identity is added to the security group associated in Power BI admin portal.
- 1. Detailed metadata (Enhanced) - Failed status means the Power BI admin portal is disabled for the following setting - **Enhance admin APIs responses with detailed metadata**
+1. Edit the **Tenant ID** field and replace it with the ID of the cross-tenant Power BI tenant you want to register and scan. By default, the field is populated with the Power BI tenant ID that exists in the same Azure Active Directory as Azure Purview.
- :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-test-connection-status-report.png" alt-text="Screenshot of test connection status report page.":::
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/register-cross-tenant.png" alt-text="Image showing the registration experience for cross tenant Power BI":::
-1. Set up a scan trigger. Your options are **Once**, **Every 7 days**, and **Every 30 days**.
+### Cross Power BI tenant scanning
- :::image type="content" source="media/setup-power-bi-scan-catalog-portal/scan-trigger.png" alt-text="Screenshot of the Azure Purview scan scheduler.":::
+#### Scan cross Power BI tenant using Delegated authentication
+
+Delegated authentication is the only supported option for cross-tenant scans. However, you can use either the Azure runtime or a self-hosted integration runtime to run the scan.
+
+To create and run a new scan using Azure runtime, perform the following steps:
+
+1. Create a user account in the Azure Active Directory tenant where the Power BI tenant is located, and assign the user the **Power BI Administrator** Azure Active Directory role. Take note of the username, and sign in to change the password. (A PowerShell sketch of these account and app registration steps appears after this procedure.)
+
+2. Navigate to your Azure key vault in the tenant where Azure Purview is created.
+
+3. Select **Settings** > **Secrets** and select **+ Generate/Import**.
+
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-key-vault.png" alt-text="Screenshot how to navigate to Azure Key Vault.":::
+
+4. Enter a name for the secret. For **Value**, type the newly created password for the Azure AD user. Select **Create** to complete.
+
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-key-vault-secret.png" alt-text="Screenshot how to generate an Azure Key Vault secret.":::
+
+5. If your key vault isn't connected to Azure Purview yet, you'll need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account).
+
+6. Create an app registration in the Azure Active Directory tenant where Power BI is located. Take note of the Client ID (App ID).
+
+   :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-create-service-principle.png" alt-text="Screenshot showing how to create a service principal.":::
+
+7. From the Azure Active Directory dashboard, select the newly created application, and then select **App registration**. Assign the application the following delegated permissions, and grant admin consent for the tenant:
+
+ - Power BI Service Tenant.Read.All
+ - Microsoft Graph openid
-1. On **Review new scan**, select **Save and Run** to launch your scan.
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-delegated-permissions.png" alt-text="Screenshot of delegated permissions for Power BI Service and Microsoft Graph.":::
- :::image type="content" source="media/setup-power-bi-scan-catalog-portal/save-run-power-bi-scan.png" alt-text="Screenshot of save and run Power BI source.":::
+8. From the Azure Active Directory dashboard, select the newly created application, and then select **Authentication**. Under **Supported account types**, select **Accounts in any organizational directory (Any Azure AD directory - Multitenant)**.
-### Create and run scan for cross-tenant Power BI
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-multitenant.png" alt-text="Screenshot of account type support multitenant.":::
+
+9. Under **Implicit grant and hybrid flows**, make sure **ID tokens (used for implicit and hybrid flows)** is selected.
+
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-id-token-hybrid-flows.png" alt-text="Screenshot of ID token hybrid flows.":::
+
+10. In the Azure Purview Studio, navigate to the **Data map** in the left menu.
+
+11. Navigate to **Sources**.
+
+12. Select the registered Power BI source from the cross tenant.
+
+13. Select **+ New scan**.
+
+14. Give your scan a name. Then select the option to include or exclude the personal workspaces.
+
+ > [!Note]
+   > Switching the configuration of a scan to include or exclude a personal workspace will trigger a full scan of the Power BI source.
+
+15. Select **Azure AutoResolveIntegrationRuntime** from the drop-down list.
+
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-cross-tenant.png" alt-text="Image showing Power BI scan setup using Azure IR for cross tenant.":::
-To create and run a new scan inside Azure Purview execute the following cmdlets in the PowerShell session:
+16. For the **Credential**, select **Delegated authentication**, and then select **+ New** to create a new credential.
- ```powershell
- Set-AzDataCatalogScan -DataSourceName 'pbidatasource' -Name 'pbiscan' -AuthorizationType PowerBIDelegated -ServicePrincipalId $ServicePrincipalId -UserName $UserName -Password $Password -IncludePersonalWorkspaces $true -Verbose
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-shir.png" alt-text="Image showing Power BI scan setup using SHIR.":::
+
+17. Create a new credential and provide the required parameters:
+
+   - **Name**: Provide a unique name for the credential.
+
+   - **Client ID**: Use the Service Principal Client ID (App ID) you created earlier.
+
+   - **User name**: Provide the username of the Power BI Administrator you created earlier.
+
+   - **Password**: Select the appropriate Key Vault connection and the **Secret name** where the Power BI account password was saved earlier.
+
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-delegated-authentication.png" alt-text="Image showing Power BI scan setup using Delegated authentication.":::
+
+18. Select **Test Connection** before continuing to the next steps. If the test fails, select **View Report** to see the detailed status and troubleshoot the problem:
+    1. Access - Failed status means the user authentication failed. Scans using managed identity always pass because no user authentication is required.
+    2. Assets (+ lineage) - Failed status means the Azure Purview - Power BI authorization has failed. Make sure the Azure Purview managed identity is added to the security group configured in the Power BI admin portal.
+    3. Detailed metadata (Enhanced) - Failed status means the following setting is disabled in the Power BI admin portal: **Enhance admin APIs responses with detailed metadata**.
+
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-cross-tenant-test.png" alt-text="Screenshot of test connection status.":::
+
+19. Set up a scan trigger. Your options are **Recurring** and **Once**.
+
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/scan-trigger.png" alt-text="Screenshot of the Azure Purview scan scheduler.":::
- Start-AzDataCatalogScan -DataSourceName 'pbidatasource' -Name 'pbiscan'
- ```
+20. On **Review new scan**, select **Save and run** to launch your scan.
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/save-run-power-bi-scan.png" alt-text="Screenshot of Save and run Power BI source.":::
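
If you'd rather script the prerequisite user account and app registration (steps 1 and 6 of this procedure) than create them in the portal, here's a rough PowerShell sketch. It assumes the Az.Resources module and a session signed in to the Azure AD tenant that hosts Power BI; the names and UPN are placeholders, parameter and property names can vary slightly between module versions, and the Power BI Administrator role assignment, API permissions, and admin consent still need to be granted in the portal or via Microsoft Graph.

```powershell
# Placeholder values - replace with names that are valid in the Azure AD tenant where Power BI lives.
$upn         = "pbi-scan-admin@contoso.onmicrosoft.com"
$displayName = "Purview Power BI scan account"
$appName     = "purview-powerbi-scan-app"

# Create the user account that will later be assigned the Power BI Administrator role.
$password = Read-Host -Prompt "Initial password for the scan account" -AsSecureString
$user = New-AzADUser -DisplayName $displayName -UserPrincipalName $upn -MailNickname "pbi-scan-admin" -Password $password

# Create the app registration whose client ID (app ID) the scan credential will use.
$app = New-AzADApplication -DisplayName $appName
Write-Output "Client ID (App ID): $($app.AppId)"
```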
## Next steps
purview Tutorial Purview Audit Logs Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-purview-audit-logs-diagnostics.md
Last updated 02/10/2022
# Azure Purview: Audit Logs, Diagnostics & Activity History
-This guide lists step-by-step configuration on how to enable and capture Azure Purview Audit & Diagnostics Logs via Azure Event Hubs.
+This guide describes the step-by-step configuration to enable and capture Azure Purview audit and diagnostics logs via Azure Event Hubs.
-## Customer Intent
+An Azure Purview administrator or Azure Purview data-source admin needs the ability to monitor audit and diagnostics logs captured from an [Azure Purview](https://azure.microsoft.com/services/purview/#get-started) service. Audit and diagnostics information consists of timestamped history of actions taken and changes made on the Azure Purview account by every user. Captured activity history includes actions in the [Azure Purview portal](https://ms.web.purview.azure.com) and outside the portal (such as calling [Azure Purview REST APIs](/rest/api/purview/) to perform write operations).
-As an Azure Purview administrator or Azure Purview data-source admin, I want the ability to capture, view and monitor audit and diagnostics logs captured from [Azure Purview](https://azure.microsoft.com/services/purview/#get-started) service. Audit and diagnostics information consists of timestamped activity history of actions taken and changes made on the Purview account by every user. Captured activity history includes actions on [Azure Purview portal](https://ms.web.purview.azure.com) as well as outside the portal (such as calling [Purview REST APIs](/rest/api/purview/) that perform write operations). To enable audit logging on Purview, let's go through a step-by-step guide on how to configure and capture streaming audit events from Azure Purview via Azure Diagnostics Event Hubs service.
+This guide takes you through the steps to enable audit logging on Azure Purview, and to configure and capture streaming audit events from Azure Purview via Azure Diagnostics and Event Hubs.
+## Azure Purview Audit History - Categorization of Events
-### Purview Audit History - Categorization of Events
+Some of the important categories of Azure Purview audit events that are currently available for capture and analysis are listed in the following table.
-- Some of the important categories of Azure Purview audit events that are currently available for capture and analysis are listed in the table. -- More types and categories of activity audit events are being added to Purview in the coming months.
+More types and categories of activity audit events are being added to Azure Purview in the coming months.
-| Category | Activity | Operation |
-| | |-- |
-| Management | Scan Rule Set | Create |
-| Management | Scan Rule Set | Update |
-| Management | Scan Rule Set | Delete |
-| Management | Classification Rule | Create |
-| Management | Classification Rule | Update |
-| Management | Classification Rule | Delete |
-| Management | Scan | Create |
-| Management | Scan | Update |
-| Management | Scan | Delete |
-| Management | Scan | Run |
-| Management | Scan | Cancel |
-| Management | Scan | Create |
-| Management | Scan | Schedule |
-| Management | Data Source | Register |
-| Management | Data Source | Update |
-| Management | Data Source | Delete |
+| Category | Activity | Operation |
+|--|--|--|
+| Management | Scan Rule Set | Create |
+| Management | Scan Rule Set | Update |
+| Management | Scan Rule Set | Delete |
+| Management | Classification Rule | Create |
+| Management | Classification Rule | Update |
+| Management | Classification Rule | Delete |
+| Management | Scan | Create |
+| Management | Scan | Update |
+| Management | Scan | Delete |
+| Management | Scan | Run |
+| Management | Scan | Cancel |
+| Management | Scan | Create |
+| Management | Scan | Schedule |
+| Management | Data Source | Register |
+| Management | Data Source | Update |
+| Management | Data Source | Delete |
-## Enable Azure Purview Audit & Diagnostics
+## Enable Azure Purview Audit & Diagnostics
### Configure Azure Event Hubs
+Create an [Azure Event Hubs Namespace using Azure ARM Template (GitHub)](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.eventhub/eventhubs-create-namespace-and-enable-capture). This automated Azure ARM Template deploys and creates your Event Hubs with the required configuration. For more detailed step-by-step explanations and manual setup, follow these guides: [Azure Event Hubs: Use Azure Resource Manager Template to enable Event Hubs capture](../event-hubs/event-hubs-resource-manager-namespace-event-hub-enable-capture.md) and [Azure Event Hubs: Enable capturing of events streaming manually using Azure portal](../event-hubs/event-hubs-capture-enable-through-portal.md).
-### Connect Purview Account to Diagnostics Event Hubs
+### Connect Azure Purview Account to Diagnostics Event Hubs
-- Now that Event Hubs is deployed and created, connect Azure Purview diagnostics audit logging to this event hub.
+Now that Event Hubs is deployed and created, connect Azure Purview diagnostics audit logging to it:
- - Go To your Purview Account home page (where the overview information is displayed, not the Purview Studio home page.) and follow instructions as detailed below.
+1. Go to your Azure Purview account home page (where the overview information is displayed, not the Azure Purview Studio home page), and follow the instructions detailed below.
- - Click "Monitoring" -> "Diagnostic Settings" in the left navigation menu.
+1. Select "Monitoring" -> "Diagnostic Settings" in the left navigation menu.
:::image type="content" source="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-e.png" alt-text="Click Azure Purview Diagnostic Settings" lightbox="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-e.png":::
- - Click "Add Diagnostic Settings" or "Edit Setting". Adding more than one row of diagnostic setting in the context of Purview isn't recommended. In other words, if you already have a diagnostic setting row added, don't click "Add Diagnostic"; click "Edit" instead.
+1. Select "Add Diagnostic Settings" or "Edit Setting". Adding more than one row of diagnostic setting in the context of Azure Purview isn't recommended. In other words, if you already have a diagnostic setting row added, don't select "Add Diagnostic"; select "Edit" instead.
:::image type="content" source="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-f.png" alt-text="Add or Edit Diagnostic Settings screen." lightbox="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-f.png":::
- - Ensure to select checkbox "audit" and "allLogs" to enable collection of Purview audit logs. Optionally, select "allMetrics" if you wish to capture DataMap Capacity Units and Data Map size metrics of the Purview account as well.
+1. Make sure to select the "audit" and "allLogs" checkboxes to enable collection of Azure Purview audit logs. Optionally, select "allMetrics" if you also want to capture Data Map Capacity Units and Data Map size metrics of the Azure Purview account.
:::image type="content" source="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-g.png" alt-text="Configure Azure Purview Diagnostic settings - select diagnostics types" lightbox="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-g.png":::
- - Diagnostics Configuration on the Azure Purview account is complete.
+1. Diagnostics configuration on the Azure Purview account is now complete. (A scripted alternative for this diagnostic setting is sketched at the end of this section.)
- - Next, go to [Azure portal](https://portal.azure.com) home page and search the name of the Event Hubs Namespace you created in *Step-1*.
+Now that Azure Purview diagnostics audit logging configuration is complete, configure the data capture and data retention settings for the Event Hubs:
- - Navigate to the Event Hubs Namespace. Select the event hub and click "Capture Data".
+1. Go to the [Azure portal](https://portal.azure.com) home page and search for the name of the Event Hubs namespace you created earlier.
- - Supply the name of the Event Hubs Namespace and the event hub where you would like the audit and diagnostics to be captured and streamed. Modify the "Time Window" and "Size Window" values for retention period of streaming events. Click Save.
+1. Navigate to the Event Hubs namespace. Select the Event Hubs instance, and then select "Capture Data".
- :::image type="content" source="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-h.png" alt-text="Capture Settings on Event Hubs Namespace and event hub." lightbox="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-h.png":::
+1. Supply the name of the Event Hubs namespace and the Event Hubs instance where you would like the audit and diagnostics data to be captured and streamed. Modify the "Time Window" and "Size Window" values to set the retention period for streaming events. Select "Save".
- - Optionally, go to "Properties" on the left navigation menu and change the "Message Retention" to any value between 1-7 days. Retention period value depends on the frequency of scheduled jobs/scripts you've created to continuously listen and capture the streaming events. If you schedule a capture once every week, take the slider to 7 days.
+ :::image type="content" source="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-h.png" alt-text="Capture Settings on Event Hubs Namespace and Event Hubs." lightbox="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-h.png":::
+
+1. Optionally, go to "Properties" on the left navigation menu and change the "Message Retention" to any value between one and seven days. The retention period value depends on the frequency of scheduled jobs/scripts you've created to continuously listen and capture the streaming events. If you schedule a capture once every week, take the slider to seven days.
:::image type="content" source="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-i.png" alt-text="Event Hubs properties - message retention period." lightbox="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-i.png":::
- - At this stage, the Event Hubs configuration will be complete. Purview will start streaming all its audit history and diagnostics data to this event hub. You can now proceed to read, extract and perform further analytics and operations on the captured diagnostics and audit events.
+1. At this stage, the Event Hubs configuration is complete. Azure Purview will start streaming all its audit history and diagnostics data to this Event Hubs instance. You can now proceed to read, extract, and perform further analytics and operations on the captured diagnostics and audit events.
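
If you prefer to script the diagnostic setting from the steps above instead of using the portal, here's a rough sketch. It assumes a recent Az.Monitor module (which provides `New-AzDiagnosticSetting` and its settings-object helpers); the resource IDs are placeholders, and parameter names can differ between module versions, so verify them against your installed module before relying on this.

```powershell
# Placeholder IDs - replace with your Azure Purview account and Event Hubs authorization rule.
$purviewResourceId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Purview/accounts/<purview-account>"
$eventHubRuleId    = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.EventHub/namespaces/<namespace>/authorizationRules/RootManageSharedAccessKey"

# Enable the "audit" and "allLogs" category groups, plus all metrics, as selected in the portal steps above.
$logs = @(
    New-AzDiagnosticSettingLogSettingsObject -Enabled $true -CategoryGroup "audit"
    New-AzDiagnosticSettingLogSettingsObject -Enabled $true -CategoryGroup "allLogs"
)
$metrics = @(New-AzDiagnosticSettingMetricSettingsObject -Enabled $true -Category "AllMetrics")

# Stream the selected logs and metrics to the event hub.
New-AzDiagnosticSetting -Name "purview-audit-to-eventhub" -ResourceId $purviewResourceId `
    -EventHubAuthorizationRuleId $eventHubRuleId -Log $logs -Metric $metrics
```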
### Reading captured "audit" events
- - Navigate to "Process Data" on the Event Hubs page to see a preview of the captured Purview audit logs and diagnostics.
+Analyzing and making sense of the captured Audit and Diagnostics log data from Azure Purview:
+1. Navigate to "Process Data" on the Event Hubs page to see a preview of the captured Azure Purview audit logs and diagnostics.
+
   :::image type="content" source="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-d.png" alt-text="Configure Event Hubs - Process Data." lightbox="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-d.png":::

   :::image type="content" source="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-c.png" alt-text="Navigating Azure Event Hubs." lightbox="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-c.png":::
- - Switch between "Table" and "Raw" view of the JSON output.
+1. Switch between "Table" and "Raw" view of the JSON output.
+
+ :::image type="content" source="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-a.png" alt-text="Explore Azure Purview Audit Events on Event Hubs." lightbox="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-a.png":::
- :::image type="content" source="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-a.png" alt-text="Explore Purview Audit Events on Event Hubs." lightbox="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-a.png":::
+1. Select "Download Sample Data" to download and analyze the results carefully.
- - Click "Download Sample Data" to download and analyze the results carefully.
+ :::image type="content" source="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-b.png" alt-text="Query and Process Azure Purview Audit data on Event Hubs." lightbox="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-b.png":::
- :::image type="content" source="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-b.png" alt-text="Query and Process Purview Audit data on Event Hubs." lightbox="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-b.png":::
+Now that you know how to gather this information, you can use automatic, scheduled scripts to extract, read, and perform further analytics on the Event Hubs audit and diagnostics data. You can even build your own utilities and custom code to extract business value from captured audit events.
- - Lastly, you can use automatic, periodically scheduled scripts to extract, read and perform further analytics and operations on the Event Hubs audit and diagnostics data. You can even build your own utilities and custom code to extract business value out of the captured audit events. What's more, you can even use these audit logs and transform them to Excel, any database, Dataverse or Synapse, for analytics and reporting using Power BI. While you are free to use any programming or scripting language of your choice to read the event hub, here's one ready-made [Python-based script](https://github.com/Azure/Azure-Purview-API-PowerShell/blob/main/purview_atlas_eventhub_sample.py). Python tutorial on how to [Capture Event Hubs data in Azure Storage and read it by using Python (azure-eventhub)](../event-hubs/event-hubs-capture-python.md)
+These audit logs can also be transformed and loaded into Excel, any database, Dataverse, or Azure Synapse Analytics for analytics and reporting with Power BI.
+While you're free to use any programming or scripting language of your choice to read the Event Hubs, here's one ready-made [Python-based script](https://github.com/Azure/Azure-Purview-API-PowerShell/blob/main/purview_atlas_eventhub_sample.py). For a Python tutorial, see [Capture Event Hubs data in Azure Storage and read it by using Python (azure-eventhub)](../event-hubs/event-hubs-capture-python.md).
## Next steps

Kickstart your Azure Purview journey in less than 5 minutes. Enable Diagnostic Audit Logging from the beginning of your journey!
-> [!div class="nextstepaction"]
-> [Azure Purview: automated New Account Setup](https://aka.ms/PurviewKickstart)
+> [!div class="nextstepaction"]
+> [Azure Purview: automated New Account Setup](https://aka.ms/PurviewKickstart)
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/feature-availability.md
Previously updated : 12/14/2021 Last updated : 12/30/2021 # Cloud feature availability for commercial and US Government customers
The following tables display the current Microsoft Sentinel feature availability
| - [Microsoft 365 Defender incident integration](../../sentinel/microsoft-365-defender-sentinel-integration.md#incident-integration) |Public Preview |Public Preview|
| - [Microsoft Teams integrations](../../sentinel/collaborate-in-microsoft-teams.md) |Public Preview |Not Available |
|- [Bring Your Own ML (BYO-ML)](../../sentinel/bring-your-own-ml.md) | Public Preview | Public Preview |
+|- [Search large datasets](../../sentinel/investigate-large-datasets.md) | Public Preview | Not Available |
+|- [Restore historical data](../../sentinel/investigate-large-datasets.md) | Public Preview | Not Available |
| **Notebooks** | | |
|- [Notebooks](../../sentinel/notebooks.md) | GA | GA |
| - [Notebook integration with Azure Synapse](../../sentinel/notebooks-with-synapse.md) | Public Preview | Not Available|
The following tables display the current Microsoft Sentinel feature availability
| - [Azure Active Directory](../../sentinel/connect-azure-active-directory.md) | GA | GA |
| - [Azure ADIP](../../sentinel/data-connectors-reference.md#azure-active-directory-identity-protection) | GA | GA |
| - [Azure DDoS Protection](../../sentinel/data-connectors-reference.md#azure-ddos-protection) | GA | GA |
+| - [Azure Purview](../../sentinel/data-connectors-reference.md#azure-purview) | Public Preview | Not Available |
+| - [Microsoft Defender for Cloud](../../sentinel/connect-azure-security-center.md) | GA | GA |
+| - [Microsoft Defender for IoT](../../sentinel/data-connectors-reference.md#microsoft-defender-for-iot) | Public Preview | Not Available |
+| - [Microsoft Insider Risk Management](/azure/sentinel/sentinel-solutions-catalog#domain-solutions) | Public Preview | Not Available |
| - [Azure Firewall ](../../sentinel/data-connectors-reference.md#azure-firewall) | GA | GA |
| - [Azure Information Protection](../../sentinel/data-connectors-reference.md#azure-information-protection-preview) | Public Preview | Not Available |
| - [Azure Key Vault ](../../sentinel/data-connectors-reference.md#azure-key-vault) | Public Preview | Not Available |
sentinel Billing Monitor Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing-monitor-costs.md
+
+ Title: Manage and monitor costs for Microsoft Sentinel
+description: Learn how to manage and monitor costs and billing for Microsoft Sentinel by using cost analysis in the Azure portal and other methods.
++++ Last updated : 02/22/2022++
+# Manage and monitor costs for Microsoft Sentinel
+
+After you've started using Microsoft Sentinel resources, use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to find areas where you might want to act.
+
+Costs for Microsoft Sentinel are only a portion of the monthly costs in your Azure bill. Although this article explains how to manage and monitor costs for Microsoft Sentinel, you're billed for all Azure services and resources your Azure subscription uses, including Partner services.
+
+## Prerequisites
+
+To view cost data and perform cost analysis in Cost Management, you must have a supported Azure account type, with at least read access.
+
+While cost analysis in Cost Management supports most Azure account types, not all are supported. To view the full list of supported account types, see [Understand Cost Management data](../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+
+For information about assigning access to Azure Cost Management data, see [Assign access to data](../cost-management/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
++
+## View costs by using cost analysis
+
+As you use Azure resources with Microsoft Sentinel, you incur costs. Azure resource usage unit costs vary by time intervals such as seconds, minutes, hours, and days, or by unit usage, like bytes and megabytes. As soon as Microsoft Sentinel use starts, it incurs costs, and you can see the costs in [cost analysis](../cost-management/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+
+When you use cost analysis, you view Microsoft Sentinel costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded.
+
+The [Azure Cost Management + Billing](../cost-management-billing/costs/quick-acm-cost-analysis.md) hub provides useful functionality. After you open **Cost Management + Billing** in the Azure portal, select **Cost Management** in the left navigation and then select the [scope](..//cost-management-billing/costs/understand-work-scopes.md) or set of resources to investigate, such as an Azure subscription or resource group.
+
+The **Cost Analysis** screen shows detailed views of your Azure usage and costs, with the option to apply various controls and filters.
+
+For example, to see charts of your daily costs for a certain time frame:
+
+1. Select the drop-down caret in the **View** field and select **Accumulated costs** or **Daily costs**.
+1. Select the drop-down caret in the date field and select a date range.
+1. Select the drop-down caret next to **Granularity** and select **Daily**.
+
+ The costs shown in the following image are for example purposes only. They're not intended to reflect actual costs.
+
+ :::image type="content" source="media/billing-monitor-costs/cost-management.png" alt-text="Screenshot of a cost management + billing cost analysis screen." lightbox="media/billing-monitor-costs/cost-management.png":::
+
+You could also apply further controls. For example, to view only the costs associated with Microsoft Sentinel, select **Add filter**, select **Service name**, and then select the service names **Sentinel**, **log analytics**, and **azure monitor**.
+
+Microsoft Sentinel data ingestion volumes appear under **Security Insights** in some portal Usage Charts.
+
+The Microsoft Sentinel pricing tiers don't include Log Analytics charges. To change your pricing tier commitment for Log Analytics, see [Changing pricing tier](../azure-monitor/logs/manage-cost-storage.md#changing-pricing-tier).
+
+For more information, see [Create budgets](#create-budgets) and [Reduce costs in Microsoft Sentinel](billing-reduce-costs.md).
+
+## Using Azure Prepayment with Microsoft Sentinel
+
+You can pay for Microsoft Sentinel charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay bills to third-party organizations for their products and services, or for products from the Azure Marketplace.
+
+## Run queries to understand your data ingestion
+
+Microsoft Sentinel uses an extensive query language to analyze, interact with, and derive insights from huge volumes of operational data in seconds. Here are some Kusto queries you can use to understand your data ingestion volume.
+
+Run the following query to show data ingestion volume by solution:
+
+```kusto
+Usage
+| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
+| where IsBillable == true
+| summarize BillableDataGB = sum(Quantity) / 1000. by bin(StartTime, 1d), Solution
+| extend Solution = iif(Solution == "SecurityInsights", "AzureSentinel", Solution)
+| render columnchart
+```
+
+Run the following query to show data ingestion volume by data type:
+
+```kusto
+Usage
+| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
+| where IsBillable == true
+| summarize BillableDataGB = sum(Quantity) / 1000. by bin(StartTime, 1d), DataType
+| render columnchart
+```
+
+Run the following query to show data ingestion volume by both solution and data type:
+
+```kusto
+Usage
+| where TimeGenerated > ago(32d)
+| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
+| where IsBillable == true
+| summarize BillableDataGB = sum(Quantity) by Solution, DataType
+| extend Solution = iif(Solution == "SecurityInsights", "AzureSentinel", Solution)
+| sort by Solution asc, DataType asc
+```
+
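If you want to pull these ingestion numbers programmatically instead of from the Log Analytics query pane, here's a minimal PowerShell sketch using the Az.OperationalInsights module. The resource group and workspace names are placeholders.

```powershell
# Placeholder names - replace with your own resource group and workspace.
$workspace = Get-AzOperationalInsightsWorkspace -ResourceGroupName "my-sentinel-rg" -Name "my-sentinel-workspace"

# Billable ingestion per solution over the last 31 days, in GB.
$query = @"
Usage
| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
| where IsBillable == true
| summarize BillableDataGB = sum(Quantity) / 1000. by Solution
| sort by BillableDataGB desc
"@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspace.CustomerId -Query $query
$result.Results | Format-Table Solution, BillableDataGB
```
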
+## Deploy a workbook to visualize data ingestion
+
+The **Workspace Usage Report workbook** provides your workspace's data consumption, cost, and usage statistics. The workbook gives the workspace's data ingestion status and amount of free and billable data. You can use the workbook logic to monitor data ingestion and costs, and to build custom views and rule-based alerts.
+
+This workbook also provides granular ingestion details. The workbook breaks down the data in your workspace by data table, and provides volumes per table and entry to help you better understand your ingestion patterns.
+
+To enable the Workspace Usage Report workbook:
+
+1. In the Microsoft Sentinel left navigation, select **Threat management** > **Workbooks**.
+1. Enter *workspace usage* in the Search bar, and then select **Workspace Usage Report**.
+1. Select **View template** to use the workbook as is, or select **Save** to create an editable copy of the workbook. If you save a copy, select **View saved workbook**.
+1. In the workbook, select the **Subscription** and **Workspace** you want to view, and then set the **TimeRange** to the time frame you want to see. You can set the **Show help** toggle to **Yes** to display in-place explanations in the workbook.
+
+## Export cost data
+
+You can also [export your cost data](../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. Exporting cost data is helpful when you or others need to do more data analysis for costs. For example, a finance team can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
+
+## Create budgets
+
+You can create [budgets](../cost-management/tutorial-acm-create-budgets.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy.
+
+You can create budgets with filters for specific resources or services in Azure if you want more granularity in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you more money. For more information about the filter options available when you create a budget, see [Group and filter options](../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
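
Budgets can also be created from a script. The following is a sketch using the Az.Billing module's `New-AzConsumptionBudget` cmdlet for a subscription-scoped budget; the amount, dates, threshold, and email address are placeholders, and the start date generally has to be the first day of a month.

```powershell
# Placeholder values - adjust the amount, period, threshold, and notification email for your environment.
New-AzConsumptionBudget -Name "sentinel-monthly-budget" `
    -Amount 1000 `
    -Category Cost `
    -TimeGrain Monthly `
    -StartDate "2022-03-01" `
    -EndDate "2023-03-01" `
    -ContactEmail "soc-admins@contoso.com" `
    -NotificationKey "over-90-percent" `
    -NotificationEnabled `
    -NotificationThreshold 90
```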
+
+## Use a playbook for cost management alerts
+
+To help you control your Microsoft Sentinel budget, you can create a cost management playbook. The playbook sends you an alert if your Microsoft Sentinel workspace exceeds a budget, which you define, within a given timeframe.
+
+The Microsoft Sentinel GitHub community provides the [`Send-IngestionCostAlert`](https://github.com/iwafula025/Azure-Sentinel/tree/master/Playbooks/Send-IngestionCostAlert) cost management playbook on GitHub. This playbook is activated by a recurrence trigger, and gives you a high level of flexibility. You can control execution frequency, ingestion volume, and the message to trigger, based on your requirements.
+
+## Define a data volume cap in Log Analytics
+
+In Log Analytics, you can enable a daily volume cap that limits the daily ingestion for your workspace. The daily cap can help you manage unexpected increases in data volume, stay within your limit, and limit unplanned charges.
+
+To define a daily volume cap, select **Usage and estimated costs** in the left navigation of your Log Analytics workspace, and then select **Daily cap**. Select **On**, enter a daily volume cap amount, and then select **OK**.
+
+![Screenshot showing the Usage and estimated costs screen and the Daily cap window.](media/billing-monitor-costs/daily-cap.png)
+
+The **Usage and estimated costs** screen also shows your ingested data volume trend in the past 31 days, and the total retained data volume.
+
+The daily cap doesn't limit collection of all data types. Security data is excluded from the cap. For more information about managing the daily cap in Log Analytics, see [Manage your maximum daily data volume](../azure-monitor/logs/manage-cost-storage.md#manage-your-maximum-daily-data-volume).
+
+## Next steps
+
+- [Reduce costs for Microsoft Sentinel](billing-reduce-costs.md)
+- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
+- For more tips on reducing Log Analytics data volume, see [Tips for reducing data volume](../azure-monitor/logs/manage-cost-storage.md#tips-for-reducing-data-volume).
sentinel Billing Reduce Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing-reduce-costs.md
+
+ Title: Reduce costs for Microsoft Sentinel
+description: Learn how to reduce costs for Microsoft Sentinel by using different methods in the Azure portal.
++++ Last updated : 02/22/2022++
+# Reduce costs for Microsoft Sentinel
+
+Costs for Microsoft Sentinel are only a portion of the monthly costs in your Azure bill. Although this article explains how to reduce costs for Microsoft Sentinel, you're billed for all Azure services and resources your Azure subscription uses, including Partner services.
+
+## Set or change pricing tier
+
+To optimize for highest savings, monitor your ingestion volume to ensure you have the Commitment Tier that aligns most closely with your ingestion volume patterns. You can increase or decrease your Commitment Tier to align with changing data volumes.
+
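One way to check which Commitment Tier fits is to look at your average daily billable ingestion. Here's a small PowerShell sketch (Az.OperationalInsights module; the resource group and workspace names are placeholders) that computes that average over the last 31 days, so you can compare it against the Commitment Tier sizes on the pricing page:

```powershell
# Placeholder names - replace with your own resource group and workspace.
$workspace = Get-AzOperationalInsightsWorkspace -ResourceGroupName "my-sentinel-rg" -Name "my-sentinel-workspace"

# Average billable GB ingested per day over the last 31 days.
$query = @"
Usage
| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
| where IsBillable == true
| summarize DailyGB = sum(Quantity) / 1000. by bin(StartTime, 1d)
| summarize AvgDailyGB = avg(DailyGB)
"@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspace.CustomerId -Query $query
$result.Results | Format-Table AvgDailyGB
```
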
+You can increase your Commitment Tier anytime, which restarts the 31-day commitment period. However, to move back to Pay-As-You-Go or to a lower Commitment Tier, you must wait until after the 31-day commitment period finishes. Billing for Commitment Tiers is on a daily basis.
+
+To see your current Microsoft Sentinel pricing tier, select **Settings** in the Microsoft Sentinel left navigation, and then select the **Pricing** tab. Your current pricing tier is marked **Current tier**.
+
+To change your pricing tier commitment, select one of the other tiers on the pricing page, and then select **Apply**. You must have **Contributor** or **Owner** role in Microsoft Sentinel to change the pricing tier.
++
+Microsoft Sentinel data ingestion volumes appear under **Security Insights** in some portal Usage Charts.
+
+The Microsoft Sentinel pricing tiers don't include Log Analytics charges. To change your pricing tier commitment for Log Analytics, see [Changing pricing tier](../azure-monitor/logs/manage-cost-storage.md#changing-pricing-tier).
+
+## Separate non-security data in a different workspace
+
+Microsoft Sentinel analyzes all the data ingested into Microsoft Sentinel-enabled Log Analytics workspaces. It's best to have a separate workspace for non-security operations data, to ensure it doesn't incur Microsoft Sentinel costs.
+
+When hunting or investigating threats in Microsoft Sentinel, you might need to access operational data stored in these standalone Azure Log Analytics workspaces. You can access this data by using cross-workspace querying in the log exploration experience and workbooks. However, you can't use cross-workspace analytics rules and hunting queries unless Microsoft Sentinel is enabled on all the workspaces.
+
+## Turn on basic logs data ingestion for data that's high-volume low security value (preview)
+
+Unlike analytics logs, [basic logs](../azure-monitor/logs/basic-logs-configure.md) are typically verbose. They contain a mix of high volume and low security value data that isn't frequently used or accessed on demand for ad-hoc querying, investigations, and search. Enable basic log data ingestion at a significantly reduced cost for eligible data tables. For more information, see [Microsoft Sentinel Pricing](https://azure.microsoft.com/pricing/details/microsoft-sentinel/).
+
+## Optimize Log Analytics costs with dedicated clusters
+
+If you ingest at least 500 GB into your Microsoft Sentinel workspace or workspaces in the same region, consider moving to a Log Analytics dedicated cluster to decrease costs. A Log Analytics dedicated cluster Commitment Tier aggregates data volume across workspaces that collectively ingest a total of 500 GB or more.
+
+Log Analytics dedicated clusters don't apply to Microsoft Sentinel Commitment Tiers. Microsoft Sentinel costs still apply per workspace in the dedicated cluster.
+
+You can add multiple Microsoft Sentinel workspaces to a Log Analytics dedicated cluster. There are a couple of advantages to using a Log Analytics dedicated cluster for Microsoft Sentinel:
+
+- Cross-workspace queries run faster if all the workspaces involved in the query are in the dedicated cluster. It's still best to have as few workspaces as possible in your environment, and a dedicated cluster still retains the [100 workspace limit](../azure-monitor/logs/cross-workspace-query.md) for inclusion in a single cross-workspace query.
+
+- All workspaces in the dedicated cluster can share the Log Analytics Commitment Tier set on the cluster. Not having to commit to separate Log Analytics Commitment Tiers for each workspace can allow for cost savings and efficiencies. By enabling a dedicated cluster, you commit to a minimum Log Analytics Commitment Tier of 500-GB ingestion per day.
+
+Here are some other considerations for moving to a dedicated cluster for cost optimization:
+
+- The maximum number of clusters per region and subscription is two.
+- All workspaces linked to a cluster must be in the same region.
+- The maximum number of workspaces linked to a cluster is 1,000.
+- You can unlink a linked workspace from your cluster. The number of link operations on a particular workspace is limited to two in a period of 30 days.
+- You can't move an existing workspace to a customer managed key (CMK) cluster. You must create the workspace in the cluster.
+- Moving a cluster to another resource group or subscription isn't currently supported.
+- A workspace link to a cluster fails if the workspace is linked to another cluster.
+
+For more information about dedicated clusters, see [Log Analytics dedicated clusters](../azure-monitor/logs/manage-cost-storage.md#log-analytics-dedicated-clusters).
+
+## Reduce long-term data retention costs with Azure Data Explorer or archived logs (preview)
+
+Microsoft Sentinel data retention is free for the first 90 days. To adjust the data retention period in Log Analytics, select **Usage and estimated costs** in the left navigation, then select **Data retention**, and then adjust the slider.
+
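The retention setting can also be changed outside the portal. Here's a minimal sketch using the Az.OperationalInsights module; the resource group and workspace names are placeholders, and retention beyond the free period is billed.

```powershell
# Placeholder names - replace with your own resource group and workspace.
# Sets the workspace retention period in days; days beyond the free period incur retention charges.
Set-AzOperationalInsightsWorkspace -ResourceGroupName "my-sentinel-rg" `
    -Name "my-sentinel-workspace" `
    -RetentionInDays 90
```
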
+Microsoft Sentinel security data might lose some of its value after a few months. Security operations center (SOC) users might not need to access older data as frequently as newer data, but still might need to access the data for sporadic investigations or audit purposes.
+
+To help you reduce Microsoft Sentinel data retention costs, Azure Monitor now offers archived logs. Archived logs store log data for long periods of time, up to seven years, at a reduced cost with limitations on its usage. Archived logs are in public preview. For more information, see [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-archive.md).
+
+Alternatively, you can use Azure Data Explorer for long-term data retention at lower cost. Azure Data Explorer provides the right balance of cost and usability for aged data that no longer needs Microsoft Sentinel security intelligence.
+
+With Azure Data Explorer, you can store data at a lower price, but still explore the data using the same Kusto Query Language (KQL) queries as in Microsoft Sentinel. You can also use the Azure Data Explorer proxy feature to do cross-platform queries. These queries aggregate and correlate data spread across Azure Data Explorer, Application Insights, Microsoft Sentinel, and Log Analytics.
+
+For more information, see [Integrate Azure Data Explorer for long-term log retention](store-logs-in-azure-data-explorer.md).
+
+## Use data collection rules for your Windows Security Events
+
+The [Windows Security Events connector](connect-windows-security-events.md?tabs=LAA) enables you to stream security events from any computer running Windows Server that's connected to your Microsoft Sentinel workspace, including physical, virtual, or on-premises servers, or in any cloud. This connector includes support for the Azure Monitor agent, which uses data collection rules to define the data to collect from each agent.
+
+Data collection rules enable you to manage collection settings at scale, while still allowing unique, scoped configurations for subsets of machines. For more information, see [Configure data collection for the Azure Monitor agent](../azure-monitor/agents/data-collection-rule-azure-monitor-agent.md).
+
+Besides the predefined sets of events that you can select to ingest, such as All events, Minimal, or Common, data collection rules enable you to build custom filters and select specific events to ingest. The Azure Monitor Agent uses these rules to filter the data at the source, and then ingest only the events you've selected, while leaving everything else behind. Selecting specific events to ingest can help you optimize your costs and save more.
+
+## Next steps
+
+- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
+- For more tips on reducing Log Analytics data volume, see [Tips for reducing data volume](../azure-monitor/logs/manage-cost-storage.md#tips-for-reducing-data-volume).
sentinel Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing.md
Title: Plan and manage costs for Microsoft Sentinel
-description: Learn how to plan, understand, and manage costs and billing for Microsoft Sentinel by using cost analysis in the Azure portal and other methods.
---
+ Title: Plan costs for Microsoft Sentinel
+description: Learn how to estimate your costs and billing for Microsoft Sentinel by using the pricing calculator and other methods.
+++ Previously updated : 11/09/2021 Last updated : 02/22/2022
-# Plan and manage costs for Microsoft Sentinel
+# Plan costs for Microsoft Sentinel
+Microsoft Sentinel provides intelligent security analytics across your enterprise. The data for this analysis is stored in an Azure Monitor Log Analytics workspace. Microsoft Sentinel is billed based on the volume of data for analysis in Microsoft Sentinel and storage in the Azure Monitor Log Analytics workspace. For more information, see the [Microsoft Sentinel Pricing Page](https://azure.microsoft.com/pricing/details/microsoft-sentinel/).
-This article describes how to plan for and manage costs for Microsoft Sentinel. First, you use the Azure pricing calculator to help plan for Microsoft Sentinel costs, before you add any resources for the service. Next, as you add Azure resources, review the estimated costs.
+Before you add any resources for Microsoft Sentinel, use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) to help estimate your costs.
-After you've started using Microsoft Sentinel resources, use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to identify areas where you might want to act. This article describes several ways to manage and optimize Microsoft Sentinel costs.
+Costs for Microsoft Sentinel are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan costs and understand the billing for Microsoft Sentinel, you're billed for all Azure services and resources your Azure subscription uses, including Partner services.
-Costs for Microsoft Sentinel are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for Microsoft Sentinel, you're billed for all Azure services and resources your Azure subscription uses, including Partner services.
+## Free trial
-## Prerequisites
+Try Microsoft Sentinel free for the first 31 days. Microsoft Sentinel can be enabled at no extra cost on an Azure Monitor Log Analytics workspace, subject to the limits stated below:
+
+- **New Log Analytics workspaces** can ingest up to 10 GB/day of log data for the first 31 days at no cost. New workspaces include workspaces that are less than three days old.
+
+ Both Log Analytics data ingestion and Microsoft Sentinel charges are waived during the 31-day trial period. This free trial is subject to a 20 workspace limit per Azure tenant.
-- To view cost data and perform cost analysis in Cost Management, you must have a supported Azure account type, with at least read access.
+- **Existing Log Analytics workspaces** can enable Microsoft Sentinel at no extra cost. Existing workspaces include any workspaces created more than three days ago.
- While cost analysis in Cost Management supports most Azure account types, not all are supported. To view the full list of supported account types, see [Understand Cost Management data](../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+ Only the Microsoft Sentinel charges are waived during the 31-day trial period.
- For information about assigning access to Azure Cost Management data, see [Assign access to data](../cost-management/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+Usage beyond these limits will be charged per the pricing listed on the [Microsoft Sentinel pricing](https://azure.microsoft.com/pricing/details/azure-sentinel) page. Charges related to extra capabilities for [automation](automation.md) and [bring your own machine learning](bring-your-own-ml.md) are still applicable during the free trial.
-- You must have details about your data sources. Microsoft Sentinel allows you to bring in data from one or more data sources. Some of these data sources are free, and others incur charges. For more information, see [Free data sources](#free-data-sources).
+During your free trial, find resources for cost management, training, and more on the **News & guides > Free trial** tab in Microsoft Sentinel. This tab also displays details about the dates of your free trial, and how many days you have left until it expires.
++
+## Identify data sources
+
+Identify the data sources you're ingesting or plan to ingest to your workspace in Microsoft Sentinel. Microsoft Sentinel allows you to bring in data from one or more data sources. Some of these data sources are free, and others incur charges. For more information, see [Free data sources](#free-data-sources).
## Estimate costs before using Microsoft Sentinel
If you're not yet using Microsoft Sentinel, you can use the [Microsoft Sentinel
For example, you can enter the GB of daily data you expect to ingest in Microsoft Sentinel, and the region for your workspace. The calculator provides the aggregate monthly cost across these components: -- Log Analytics data ingestion-- Microsoft Sentinel data analysis-- Log Analytics data retention-
-> [!NOTE]
-> The costs shown in this image are for example purposes only. They're not intended to reflect actual costs.
-
+- Azure Monitor data ingestion: Analytics logs and basic logs
+- Microsoft Sentinel data analytics: Analytics logs and basic logs
+- Data retention
+- Data archive (archived logs)
+- Basic logs queries
## Understand the full billing model for Microsoft Sentinel
Microsoft Sentinel runs on Azure infrastructure that accrues costs when you depl
### How you're charged for Microsoft Sentinel
-There are two ways to pay for the Microsoft Sentinel service: **Pay-As-You-Go** and **Commitment Tiers**.
+Microsoft Sentinel offers flexible pricing based on the types of logs ingested into a workspace. Analytics logs typically make up most of your high security value logs. Basic logs tend to be verbose with low security value.
+
+#### Analytics logs
+
+There are two ways to pay for the analytics logs: **Pay-As-You-Go** and **Commitment Tiers**.
- **Pay-As-You-Go** is the default model, based on the actual data volume stored and optionally for data retention beyond 90 days. Data volume is measured in GB (10^9 bytes).
There are two ways to pay for the Microsoft Sentinel service: **Pay-As-You-Go**
You can increase your commitment tier anytime, and decrease it every 31 days, to optimize costs as your data volume increases or decreases. To see your current Microsoft Sentinel pricing tier, select **Settings** in the Microsoft Sentinel left navigation, and then select the **Pricing** tab. Your current pricing tier is marked as **Current tier**.
- To set and change your Commitment Tier, see [Set or change pricing tier](#set-or-change-pricing-tier).
+ To set and change your Commitment Tier, see [Set or change pricing tier](billing-reduce-costs.md#set-or-change-pricing-tier).
+
+#### Basic logs (preview)
+
+Basic logs have a reduced price and are charged at a flat rate per GB. They have the following limitations:
+
+- Reduced querying capabilities
+- Eight-day retention
+- No support for scheduled alerts
+
+Basic logs are best suited for use in playbook automation, ad-hoc querying, investigations, and search. For more information, see [Configure Basic Logs in Azure Monitor](../azure-monitor/logs/basic-logs-configure.md).
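
For example, a table configured for basic logs can still be queried ad hoc with a reduced set of KQL operators. The following is a sketch only; `DeviceTrace_CL` is a hypothetical custom table used for illustration.

```kusto
// Sketch: ad-hoc query against a hypothetical custom table (DeviceTrace_CL) configured for basic logs.
// Basic logs support a limited set of KQL operators, such as where, extend, and project.
DeviceTrace_CL
| where TimeGenerated > ago(1d)
| where RawData has "error"
| project TimeGenerated, Computer, RawData
```
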
### Understand your Microsoft Sentinel bill
Billable meters are the individual components of your service that appear on you
To see your Azure bill, select **Cost Analysis** in the left navigation of **Cost Management + Billing**. On the **Cost analysis** screen, select the drop-down caret in the **View** field, and select **Invoice details**.
-> [!NOTE]
-> The costs shown in this image are for example purposes only. They're not intended to reflect actual costs.
+The costs shown in the following image are for example purposes only. They're not intended to reflect actual costs.
![Screenshot showing the Microsoft Sentinel section of a sample Azure bill.](media/billing/sample-bill.png)

Microsoft Sentinel and Log Analytics charges appear on your Azure bill as separate line items based on your selected pricing plan. If you exceed your workspace's Commitment Tier usage in a given month, the Azure bill shows one line item for the Commitment Tier with its associated fixed cost, and a separate line item for the ingestion beyond the Commitment Tier, billed at your same Commitment Tier rate.
-The following table shows how Microsoft Sentinel and Log Analytics costs appear in the **Service name** and **Meter** columns of your Azure invoice:
+The following tabs show how Microsoft Sentinel and Log Analytics costs appear in the **Service name** and **Meter** columns of your Azure bill depending on your pricing tier.
-| Cost | Service name | Meter |
+#### [Commitment tier](#tab/commitment-tier)
+
+If you're billed at the commitment tier rate, the following table shows how Microsoft Sentinel and Log Analytics costs appear in the **Service name** and **Meter** columns of your Azure bill.
+
+| Cost description | Service name | Meter |
|--|--|--|
| Microsoft Sentinel Commitment Tier | `sentinel` | **`n` gb commitment tier** |
| Log Analytics Commitment Tier | `azure monitor` | **`n` gb commitment tier** |
-| Microsoft Sentinel overage over the Commitment Tier, or Pay-As-You-Go| `sentinel` |**analysis**|
-| Log Analytics overage over the Commitment Tier, or Pay-As-You-Go| `log analytics` |**data ingestion**|
+| Microsoft Sentinel overage over the Commitment Tier| `sentinel` |**analysis**|
+| Log Analytics overage over the Commitment Tier| `log analytics` |**data ingestion**|
+| Basic logs data ingestion| `azure monitor` |**data ingestion - Basic Logs**|
+| Basic logs data analysis| `sentinel` |**Analysis - Basic Logs**|
+
+#### [Pay-As-You-Go](#tab/pay-as-you-go)
+
+If you're billed at the Pay-As-You-Go rate, the following table shows how Microsoft Sentinel and Log Analytics costs appear in the **Service name** and **Meter** columns of your Azure bill.
+
+| Cost description | Service name | Meter |
+|--|--|--|
+| Pay-As-You-Go| `sentinel` |**analysis**|
+| Pay-As-You-Go| `log analytics` |**data ingestion**|
+| Basic logs data ingestion| `azure monitor` |**data ingestion - Basic Logs**|
+| Basic logs data analysis| `sentinel` |**Analysis - Basic Logs**|
+
+#### [Free data meters](#tab/free-data-meters)
+
+The following table shows how Microsoft Sentinel and Log Analytics costs appear in the **Service name** and **Meter** columns of your Azure bill for free data services.
+
+| Cost description | Service name | Meter |
+|--|--|--|
+| Microsoft Sentinel Free Trial – Log Analytics data ingestion| `azure monitor` |**Data Ingestion – Free Benefit – Sentinel Trial**|
+| Microsoft Sentinel Free Trial – Sentinel Analysis| `sentinel` |**Free trial**|
+| M365 Defender Benefit – Data Ingestion| `azure monitor` |**Free Benefit - M365 Defender Data Ingestion**|
+| M365 Defender Benefit – Data Analysis| `sentinel` |**Free Benefit - M365 Defender Analysis**|
+
For more information on viewing and downloading your Azure bill, see [Azure cost and billing information](../cost-management-billing/understand/download-azure-daily-usage.md).
-### Costs for other services
+## Costs for other services
Microsoft Sentinel integrates with many other Azure services to provide enhanced capabilities. These services include Azure Logic Apps, Azure Notebooks, and bring your own machine learning (BYOML) models. Some of these services may have extra charges. Some of Microsoft Sentinel's data connectors and solutions use Azure Functions for data ingestion, which also has a separate associated cost.
For pricing details for these services, see:
Any other services you use could have associated costs.
-### Data retention costs
+## Data retention and archived logs costs
After you enable Microsoft Sentinel on a Log Analytics workspace, you can retain all data ingested into the workspace at no charge for the first 90 days. Retention beyond 90 days is charged per the standard [Log Analytics retention prices](https://azure.microsoft.com/pricing/details/monitor/).
-You can specify different retention settings for individual data types. For more information, see [Retention by data type](../azure-monitor/logs/manage-cost-storage.md#retention-by-data-type).
+You can specify different retention settings for individual data types. For more information, see [Retention by data type](../azure-monitor/logs/manage-cost-storage.md#retention-by-data-type). You can also enable long-term retention for your data and have access to historical logs by enabling archived logs. Data archive is a low-cost retention layer for archival storage. It's charged based on the volume of data stored and scanned. For more information, see [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-archive.md). Archived logs are in public preview.
+
+The 90-day retention period doesn't apply to basic logs. If you want to extend data retention for basic logs beyond eight days, you can store that data in archived logs for up to seven years.
-### Other CEF ingestion costs
+## Other CEF ingestion costs
CEF is a supported Syslog event format in Microsoft Sentinel. You can use CEF to bring in valuable security information from various sources to your Microsoft Sentinel workspace. CEF logs land in the CommonSecurityLog table in Microsoft Sentinel, which includes all the standard up-to-date CEF fields. Many devices and data sources allow for logging fields beyond the standard CEF schema. These extra fields land in the AdditionalExtensions field of the CommonSecurityLog table. These fields can have higher ingestion volumes than the standard CEF fields, because the event content within them is variable.
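
To gauge how much of your CEF ingestion is driven by these non-standard fields, you can run a query along these lines. This is a sketch only; it assumes the `AdditionalExtensions` column of `CommonSecurityLog` holds the extra fields for your connected appliances.

```kusto
// Sketch: estimate how much CEF event content is carried in AdditionalExtensions.
CommonSecurityLog
| where TimeGenerated > ago(7d)
| summarize Events = count(),
            EventsWithExtensions = countif(isnotempty(AdditionalExtensions)),
            ExtensionMB = sum(strlen(AdditionalExtensions)) / 1048576.0
            by DeviceVendor, DeviceProduct
| sort by ExtensionMB desc
```
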
-### Costs that might accrue after resource deletion
+## Costs that might accrue after resource deletion
Removing Microsoft Sentinel doesn't remove the Log Analytics workspace Microsoft Sentinel was deployed on, or any separate charges that workspace might be incurring.
-### Free trial
-
-Try Microsoft Sentinel free for the first 31 days. Microsoft Sentinel can be enabled at no extra cost on an Azure Monitor Log Analytics workspace, subject to the limits stated below:
--- **New Log Analytics workspaces** can ingest up to 10 GB/day of log data for the first 31-days at no cost. New workspaces include workspaces that are less than three days old.-
- Both Log Analytics data ingestion and Microsoft Sentinel charges are waived during the 31-day trial period. This free trial is subject to a 20 workspace limit per Azure tenant.
--- **Existing Log Analytics workspaces** can enable Microsoft Sentinel at no extra cost. Existing workspaces include any workspaces created more than three days ago.-
- Only the Microsoft Sentinel charges are waived during the 31-day trial period.
-
-Usage beyond these limits will be charged per the pricing listed on the [Microsoft Sentinel pricing](https://azure.microsoft.com/pricing/details/azure-sentinel) page. Charges related to extra capabilities for [automation](automation.md) and [bring your own machine learning](bring-your-own-ml.md) are still applicable during the free trial.
-
-> [!TIP]
-> During your free trial, find resources for cost management, training, and more on the **News & guides > Free trial** tab in Microsoft Sentinel.
->
-> This tab also displays details about the dates of your free trial, and how many days you have left until it expires.
->
-### Free data sources
+## Free data sources
The following data sources are free with Microsoft Sentinel:
The following data sources are free with Microsoft Sentinel:
- Security alerts, including alerts from Microsoft Defender for Cloud, Microsoft 365 Defender, Microsoft Defender for Office 365, Microsoft Defender for Identity, and Microsoft Defender for Endpoint.
- Microsoft Defender for Cloud and Microsoft Defender for Cloud Apps alerts.
-> [!NOTE]
-> Although alerts are free, the raw logs for some Microsoft 365 Defender, Defender for Cloud Apps, Azure Active Directory (Azure AD), and Azure Information Protection (AIP) data types are paid.
->
+Although alerts are free, the raw logs for some Microsoft 365 Defender, Defender for Cloud Apps, Azure Active Directory (Azure AD), and Azure Information Protection (AIP) data types are paid.
+
The following table lists the free data sources you can enable in Microsoft Sentinel. Some of the data connectors, such as Microsoft 365 Defender and Defender for Cloud Apps, include both free and paid data types.

| Microsoft Sentinel Data Connector | Data type | Free or paid |
The following table lists the free data sources you can enable in Microsoft Sent
For data connectors that include both free and paid data types, you can select which data types you want to enable.
-![Screenshot showing the Data connector page for Defender for Cloud Apps, with the free Security Alerts selected and the paid MCASShadowITReporting not selected.](media/billing/data-types.png)
For more information about free and paid data sources and connectors, see [Connect data sources](connect-data-sources.md).
-> [!NOTE]
-> Data connectors listed as Public Preview do not generate cost. Data connectors generate cost only once becoming Generally Available (GA).
->
--
-## Manage and monitor Microsoft Sentinel costs
-
-As you use Azure resources with Microsoft Sentinel, you incur costs. Azure resource usage unit costs vary by time intervals such as seconds, minutes, hours, and days, or by unit usage, like bytes and megabytes. As soon as Microsoft Sentinel use starts, it incurs costs, and you can see the costs in [cost analysis](../cost-management/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-
-When you use cost analysis, you view Microsoft Sentinel costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded.
-
-The [Azure Cost Management + Billing](../cost-management-billing/costs/quick-acm-cost-analysis.md) hub provides useful functionality. After you open **Cost Management + Billing** in the Azure portal, select **Cost Management** in the left navigation and then select the [scope](..//cost-management-billing/costs/understand-work-scopes.md) or set of resources to investigate, such as an Azure subscription or resource group.
-
-The **Cost Analysis** screen shows detailed views of your Azure usage and costs, with the option to apply various controls and filters.
-
-For example, to see charts of your daily costs for a certain time frame:
-
-1. Select the drop-down caret in the **View** field and select **Accumulated costs** or **Daily costs**.
-1. Select the drop-down caret in the date field and select a date range.
-1. Select the drop-down caret next to **Granularity** and select **Daily**.
-
-> [!NOTE]
-> Microsoft Sentinel data ingestion volumes appear under **Security Insights** in some portal Usage Charts.
-
-The Microsoft Sentinel pricing tiers don't include Log Analytics charges. To change your pricing tier commitment for Log Analytics, see [Changing pricing tier](../azure-monitor/logs/manage-cost-storage.md#changing-pricing-tier).
-
-For more information, see [Create budgets](#create-budgets) and [Other ways to manage and reduce Microsoft Sentinel costs](#other-ways-to-manage-and-reduce-microsoft-sentinel-costs).
-
-### Using Azure Prepayment with Microsoft Sentinel
-
-You can pay for Microsoft Sentinel charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay bills to third-party organizations for their products and services, or for products from the Azure Marketplace.
-
-### Run queries to understand your data ingestion
-
-Microsoft Sentinel uses an extensive query language to analyze, interact with, and derive insights from huge volumes of operational data in seconds. Here are some Kusto queries you can use to understand your data ingestion volume.
-
-Run the following query to show data ingestion volume by solution:
-
-```kusto
-Usage
-| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
-| where IsBillable == true
-| summarize BillableDataGB = sum(Quantity) / 1000. by bin(StartTime, 1d), Solution
-| extend Solution = iif(Solution == "SecurityInsights", "AzureSentinel", Solution)
-| render columnchart
-```
-
-Run the following query to show data ingestion volume by data type:
-
-```kusto
-Usage
-| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
-| where IsBillable == true
-| summarize BillableDataGB = sum(Quantity) / 1000. by bin(StartTime, 1d), DataType
-| render columnchart
-```
-
-Run the following query to show data ingestion volume by both solution and data type:
-
-```kusto
-Usage
-| where TimeGenerated > ago(32d)
-| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
-| where IsBillable == true
-| summarize BillableDataGB = sum(Quantity) by Solution, DataType
-| extend Solution = iif(Solution == "SecurityInsights", "AzureSentinel", Solution)
-| sort by Solution asc, DataType asc
-```
-
-### Deploy a workbook to visualize data ingestion
-
-The **Workspace Usage Report workbook** provides your workspace's data consumption, cost, and usage statistics. The workbook gives the workspace's data ingestion status and amount of free and billable data. You can use the workbook logic to monitor data ingestion and costs, and to build custom views and rule-based alerts.
-
-This workbook also provides granular ingestion details. The workbook breaks down the data in your workspace by data table, and provides volumes per table and entry to help you better understand your ingestion patterns.
-
-To enable the Workspace Usage Report workbook:
-
-1. In the Microsoft Sentinel left navigation, select **Threat management** > **Workbooks**.
-1. Enter *workspace usage* in the Search bar, and then select **Workspace Usage Report**.
-1. Select **View template** to use the workbook as is, or select **Save** to create an editable copy of the workbook. If you save a copy, select **View saved workbook**.
-1. In the workbook, select the **Subscription** and **Workspace** you want to view, and then set the **TimeRange** to the time frame you want to see. You can set the **Show help** toggle to **Yes** to display in-place explanations in the workbook.
-
-## Export cost data
-
-You can also [export your cost data](../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. Exporting cost data is helpful when you need or others to do more data analysis for costs. For example, a finance team can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
-
-## Create budgets
-
-You can create [budgets](../cost-management/tutorial-acm-create-budgets.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy.
-
-You can create budgets with filters for specific resources or services in Azure if you want more granularity in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you more money. For more information about the filter options available when you create a budget, see [Group and filter options](../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
-
-### Use a playbook for cost management alerts
-
-To help you control your Microsoft Sentinel budget, you can create a cost management playbook. The playbook sends you an alert if your Microsoft Sentinel workspace exceeds a budget, which you define, within a given timeframe.
-
-The Microsoft Sentinel GitHub community provides the [`Send-IngestionCostAlert`](https://github.com/iwafula025/Azure-Sentinel/tree/master/Playbooks/Send-IngestionCostAlert) cost management playbook on GitHub. This playbook is activated by a recurrence trigger, and gives you a high level of flexibility. You can control execution frequency, ingestion volume, and the message to trigger, based on your requirements.
-
-### Define a data volume cap in Log Analytics
-
-In Log Analytics, you can enable a daily volume cap that limits the daily ingestion for your workspace. The daily cap can help you manage unexpected increases in data volume, stay within your limit, and limit unplanned charges.
-
-To define a daily volume cap, select **Usage and estimated costs** in the left navigation of your Log Analytics workspace, and then select **Daily cap**. Select **On**, enter a daily volume cap amount, and then select **OK**.
-
-![Screenshot showing the Usage and estimated costs screen and the Daily cap window.](media/billing/daily-cap.png)
-
-The **Usage and estimated costs** screen also shows your ingested data volume trend in the past 31 days, and the total retained data volume.
-
-> [!IMPORTANT]
-> The daily cap doesn't limit collection of all data types. Security data is excluded from the cap. For more information about managing the daily cap in Log Analytics, see [Manage your maximum daily data volume](../azure-monitor/logs/manage-cost-storage.md#manage-your-maximum-daily-data-volume).
-
-## Other ways to manage and reduce Microsoft Sentinel costs
-
-To manage data ingestion and retention costs:
--- [Use Commitment Tier pricing to optimize costs](#set-or-change-pricing-tier) based on your data ingestion volume.-- [Separate non-security data in a different workspace](#separate-non-security-data-in-a-different-workspace).-- [Optimize Log Analytics costs with dedicated clusters](#optimize-log-analytics-costs-with-dedicated-clusters).-- [Reduce long-term data retention costs with Azure Data Explorer (ADX)](#reduce-long-term-data-retention-costs-with-adx).-- [Use Data Collection Rules for your Windows Security Events](#use-data-collection-rules-for-your-windows-security-events).-
-### Set or change pricing tier
-
-To optimize for highest savings, monitor your ingestion volume to ensure you have the Commitment Tier that aligns most closely with your ingestion volume patterns. You can increase or decrease your Commitment Tier to align with changing data volumes.
-
-You can increase your Commitment Tier anytime, which restarts the 31-day commitment period. However, to move back to Pay-As-You-Go or to a lower Commitment Tier, you must wait until after the 31-day commitment period finishes. Billing for Commitment Tiers is on a daily basis.
-
-To see your current Microsoft Sentinel pricing tier, select **Settings** in the Microsoft Sentinel left navigation, and then select the **Pricing** tab. Your current pricing tier is marked **Current tier**.
-
-To change your pricing tier commitment, select one of the other tiers on the pricing page, and then select **Apply**. You must have **Contributor** or **Owner** role in Microsoft Sentinel to change the pricing tier.
-
-![Screenshot showing the Pricing page in Microsoft Sentinel Settings, with Pay-As-You-Go indicated as the current pricing tier.](media/billing/pricing.png)
-
-> [!NOTE]
-> Microsoft Sentinel data ingestion volumes appear under **Security Insights** in some portal Usage Charts.
-
-The Microsoft Sentinel pricing tiers don't include Log Analytics charges. To change your pricing tier commitment for Log Analytics, see [Changing pricing tier](../azure-monitor/logs/manage-cost-storage.md#changing-pricing-tier).
-
-### Separate non-security data in a different workspace
-
-Microsoft Sentinel analyzes all the data ingested into Microsoft Sentinel-enabled Log Analytics workspaces. It's best to have a separate workspace for non-security operations data, to ensure it doesn't incur Microsoft Sentinel costs.
-
-When hunting or investigating threats in Microsoft Sentinel, you might need to access operational data stored in these standalone Azure Log Analytics workspaces. You can access this data by using cross-workspace querying in the log exploration experience and workbooks. However, you can't use cross-workspace analytics rules and hunting queries unless Microsoft Sentinel is enabled on all the workspaces.
-
-### Optimize Log Analytics costs with dedicated clusters
-
-If you ingest at least 500 GB into your Microsoft Sentinel workspace or workspaces in the same region, consider moving to a Log Analytics dedicated cluster to decrease costs. A Log Analytics dedicated cluster Commitment Tier aggregates data volume across workspaces that collectively ingest a total of 500 GB or more.
-
-Log Analytics dedicated clusters don't apply to Microsoft Sentinel Commitment Tiers. Microsoft Sentinel costs still apply per workspace in the dedicated cluster.
-
-You can add multiple Microsoft Sentinel workspaces to a Log Analytics dedicated cluster. There are a couple of advantages to using a Log Analytics dedicated cluster for Microsoft Sentinel:
--- Cross-workspace queries run faster if all the workspaces involved in the query are in the dedicated cluster. It's still best to have as few workspaces as possible in your environment, and a dedicated cluster still retains the [100 workspace limit](../azure-monitor/logs/cross-workspace-query.md) for inclusion in a single cross-workspace query.--- All workspaces in the dedicated cluster can share the Log Analytics Commitment Tier set on the cluster. Not having to commit to separate Log Analytics Commitment Tiers for each workspace can allow for cost savings and efficiencies. By enabling a dedicated cluster, you commit to a minimum Log Analytics Commitment Tier of 500 GB ingestion per day.-
-Here are some other considerations for moving to a dedicated cluster for cost optimization:
--- The maximum number of clusters per region and subscription is two.-- All workspaces linked to a cluster must be in the same region.-- The maximum of workspaces linked to a cluster is 1000.-- You can unlink a linked workspace from your cluster. The number of link operations on a particular workspace is limited to two in a period of 30 days.-- You can't move an existing workspace to a customer managed key (CMK) cluster. You must create the workspace in the cluster.-- Moving a cluster to another resource group or subscription isn't currently supported.-- A workspace link to a cluster fails if the workspace is linked to another cluster.-
-For more information about dedicated clusters, see [Log Analytics dedicated clusters](../azure-monitor/logs/manage-cost-storage.md#log-analytics-dedicated-clusters).
-
-### Reduce long-term data retention costs with ADX
-
-Microsoft Sentinel data retention is free for the first 90 days. To adjust the data retention time period in Log Analytics, select **Usage and estimated costs** in the left navigation, then select **Data retention**, and then adjust the slider.
-
-Microsoft Sentinel security data might lose some of its value after a few months. Security operations center (SOC) users might not need to access older data as frequently as newer data, but still might need to access the data for sporadic investigations or audit purposes. To reduce Microsoft Sentinel data retention costs, you can use Azure Data Explorer for long-term data retention at lower cost. ADX provides the right balance of cost and usability for aged data that no longer needs Microsoft Sentinel security intelligence.
-
-With ADX, you can store data at a lower price, but still explore the data using the same Kusto Query Language (KQL) queries as in Microsoft Sentinel. You can also use the ADX proxy feature to do cross-platform queries. These queries aggregate and correlate data spread across ADX, Application Insights, Microsoft Sentinel, and Log Analytics.
-
-For more information, see [Integrate Azure Data Explorer for long-term log retention](store-logs-in-azure-data-explorer.md).
-
-### Use data collection rules for your Windows Security Events
-
-The [Windows Security Events connector](connect-windows-security-events.md?tabs=LAA) enables you to stream security events from any computer running Windows Server that's connected to your Microsoft Sentinel workspace, including physical, virtual, or on-premises servers, or in any cloud. This connector includes support for the Azure Monitor agent, which uses data collection rules to define the data to collect from each agent.
-
-Data collection rules enable you to manage collection settings at scale, while still allowing unique, scoped configurations for subsets of machines. For more information, see [Configure data collection for the Azure Monitor agent](../azure-monitor/agents/data-collection-rule-azure-monitor-agent.md).
-
-Besides for the predefined sets of events that you can select to ingest, such as All events, Minimal, or Common, data collection rules enable you to build custom filters and select specific events to ingest. The Azure Monitor Agent uses these rules to filter the data at the source, and then ingest only the events you've selected, while leaving everything else behind. Selecting specific events to ingest can help you optimize your costs and save more.
-
-> [!NOTE]
-> The costs shown in this image are for example purposes only. They're not intended to reflect actual costs.
-
-![Screenshot showing a Cost Management + Billing Cost analysis screen.](media/billing/cost-management.png)
+Data connectors listed as public preview don't generate cost. Data connectors generate cost only after they become generally available (GA).
-You could also apply further controls. For example, to view only the costs associated with Microsoft Sentinel, select **Add filter**, select **Service name**, and then select the service names **Sentinel**, **log analytics**, and **azure monitor**.
## Next steps
+- [Monitor costs for Microsoft Sentinel](billing-monitor-costs.md)
+- [Reduce costs for Microsoft Sentinel](billing-reduce-costs.md)
- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
- Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
- Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
sentinel Connect Azure Windows Microsoft Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-azure-windows-microsoft-services.md
To ingest data into Microsoft Sentinel:
1. Select **Save** at the top of the screen.
+For more information, see also [Create diagnostic settings to send Azure Monitor platform logs and metrics to different destinations](/azure/azure-monitor/essentials/diagnostic-settings) in the Azure Monitor documentation.
+ # [Azure Policy](#tab/AP) ### Prerequisites
See below how to create data collection rules.
1. On the **Collect** tab, choose the events you would like to collect: select **All events** or **Custom** to specify other logs or to filter events using [XPath queries](../azure-monitor/agents/data-collection-rule-azure-monitor-agent.md#limit-data-collection-with-custom-xpath-queries) (see note below). Enter expressions in the box that evaluate to specific XML criteria for events to collect, then select **Add**. You can enter up to 20 expressions in a single box, and up to 100 boxes in a rule.
- Learn more about [data collection rules](../azure-monitor/agents/data-collection-rule-overview.md#create-a-dcr) from the Azure Monitor documentation.
+ Learn more about [data collection rules](../azure-monitor/essentials/data-collection-rule-overview.md) from the Azure Monitor documentation.
> [!NOTE] >
PUT https://management.azure.com/subscriptions/703362b3-f278-4e4b-9179-c76eaf41f
}
```
-See this [complete description of data collection rules](../azure-monitor/agents/data-collection-rule-overview.md) from the Azure Monitor documentation.
+See this [complete description of data collection rules](../azure-monitor/essentials/data-collection-rule-overview.md) from the Azure Monitor documentation.
# [Log Analytics Agent (Legacy)](#tab/LAA)
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
For more information, see the [Azure Information Protection documentation](/azur
| **Supported by** | Microsoft | | | |
+## Azure Purview
+
+| Connector attribute | Description |
+| | |
+| **Data ingestion method** | **Azure service-to-service integration: <br>[Diagnostic settings-based connections](connect-azure-windows-microsoft-services.md?tabs=AP#diagnostic-settings-based-connections)**<br><br>For more information, see [Tutorial: Integrate Microsoft Sentinel and Azure Purview](purview-solution.md). |
+| **Log Analytics table(s)** | PurviewDataSensitivityLogs |
+| **Supported by** | Microsoft |
+| | |
+
## Azure SQL Databases

| Connector attribute | Description |
sentinel Design Your Workspace Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/design-your-workspace-architecture.md
Before working through the decision tree, make sure you have the following infor
|**Regulatory requirements related to Azure data residency** | Microsoft Sentinel can run on workspaces in most, but not all regions [supported in GA for Log Analytics](https://azure.microsoft.com/global-infrastructure/services/?products=monitor). Newly supported Log Analytics regions may take some time to onboard the Microsoft Sentinel service. <br><br> Data generated by Microsoft Sentinel, such as incidents, bookmarks, and analytics rules, may contain some customer data sourced from the customer's Log Analytics workspaces.<br><br> For more information, see [Geographical availability and data residency](quickstart-onboard.md#geographical-availability-and-data-residency).|
|**Data sources** | Find out which [data sources](connect-data-sources.md) you need to connect, including built-in connectors to both Microsoft and non-Microsoft solutions. You can also use Common Event Format (CEF), Syslog or REST-API to connect your data sources with Microsoft Sentinel. <br><br>If you have Azure VMs in multiple Azure locations that you need to collect the logs from and the saving on data egress cost is important to you, you need to calculate the data egress cost using [Bandwidth pricing calculator](https://azure.microsoft.com/pricing/details/bandwidth/#overview) for each Azure location. |
|**User roles and data access levels/permissions** | Microsoft Sentinel uses [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md) to provide [built-in roles](../role-based-access-control/built-in-roles.md) that can be assigned to users, groups, and services in Azure. <br><br>All Microsoft Sentinel built-in roles grant read access to the data in your Microsoft Sentinel workspace. Therefore, you need to find out whether there is a need to control data access per data source or row-level as that will impact the workspace design decision. For more information, see [Custom roles and advanced Azure RBAC](roles.md#custom-roles-and-advanced-azure-rbac). |
-|**Daily ingestion rate** | The daily ingestion rate,usually in GB/day, is one of the key factors in cost management and planning considerations and workspace design for Microsoft Sentinel. <br><br>In most cloud and hybrid environments, networking devices, such as firewalls or proxies,and Windows and Linux servers produce the most ingested data. To obtain the most accurate results, Microsoft recommends an exhaustive inventory of data sources. <br><br>Alternatively, the Microsoft Sentinel [cost calculator](https://cloudpartners.transform.microsoft.com/download?assetname=assets%2FAzure_Sentinel_Calculator.xlsx&download=1) includes tables useful in estimating footprints of data sources. <br><br>**Important**: These estimates are a starting point, and log verbosity settings and workload will produce variances. We recommend that you monitor your system regularly to track any changes. Regular monitoring is recommended based on your scenario. <br><br>For more information, see [Manage usage and costs with Azure Monitor Logs](../azure-monitor/logs/manage-cost-storage.md). |
+|**Daily ingestion rate** | The daily ingestion rate, usually in GB/day, is one of the key factors in cost management and planning considerations and workspace design for Microsoft Sentinel. <br><br>In most cloud and hybrid environments, networking devices, such as firewalls or proxies, and Windows and Linux servers produce the most ingested data. To obtain the most accurate results, Microsoft recommends an exhaustive inventory of data sources. <br><br>Alternatively, the Microsoft Sentinel [cost calculator](https://cloudpartners.transform.microsoft.com/download?assetname=assets%2FAzure_Sentinel_Calculator.xlsx&download=1) includes tables useful in estimating footprints of data sources. <br><br>**Important**: These estimates are a starting point, and log verbosity settings and workload will produce variances. We recommend that you monitor your system regularly to track any changes. Regular monitoring is recommended based on your scenario. <br><br>For more information, see [Manage usage and costs with Azure Monitor Logs](../azure-monitor/logs/manage-cost-storage.md). |
| | | ## Decision tree
If you need to split your billing or charge-back, consider whether the usage rep
- **Yes**: Proceed with [step 6](#step-6-multiple-regions) for further evaluation.
- **No**: We do not recommend using the same workspace for the sake of cost efficiency. Proceed with [step 6](#step-6-multiple-regions) for further evaluation.
- In either case , for more information, see [note 10](#note10).
+ In either case, for more information, see [note 10](#note10).
**If you have *no* overlapping data**, consider whether the ingestion for *both* SOC and non-SOC data individually is less than 100 GB / day, but more than 100 GB / day when combined:
The following table compares workspace options with and without separate workspa
|Workspace architecture |Description |
|||
-|The SOC team has its own workspace, with Microsoft Sentinel enabled. <br><br>The Ops team has its own workspace, without Microsoft Sentinel enabled. | **SOC team**: <br>Microsoft Sentinel cost for 50GB/day is $6,500 per month.<br>First three months of retention are free. <br><br>**Ops team**:<br>- Cost of Log Analytics at 50GB/day is around $3,500 per month.<br>- First 31 days of retention are free.<br><br>The total cost for both equals $10,000 per month. |
+|The SOC team has its own workspace, with Microsoft Sentinel enabled. <br><br>The Ops team has its own workspace, without Microsoft Sentinel enabled. | **SOC team**: <br>Microsoft Sentinel cost for 50 GB/day is $6,500 per month.<br>First three months of retention are free. <br><br>**Ops team**:<br>- Cost of Log Analytics at 50 GB/day is around $3,500 per month.<br>- First 31 days of retention are free.<br><br>The total cost for both equals $10,000 per month. |
|Both SOC and Ops teams share the same workspace with Microsoft Sentinel enabled. |By combining both logs, ingestion will be 100 GB / day, qualifying for eligibility for Commitment Tier (50% for Sentinel and 15% for LA). <br><br>Cost of Microsoft Sentinel for 100 GB / day equals $9,000 per month. |
| | |

In this example, you'd have a cost savings of $1,000 per month by combining both workspaces, and the Ops team will also enjoy 3 months of free retention instead of only 31 days.
-This example is relevant only when both SOC and non-SOC data each have an ingestion size of >=50GB/day and <100GB/day.
+This example is relevant only when both SOC and non-SOC data each have an ingestion size of >=50 GB/day and <100 GB/day.
<a name="note10"></a>[Decision tree note #10](#decision-tree): We recommend using a separate workspace for non-SOC data so that non-SOC data isn't subjected to Microsoft Sentinel costs.
-However, this recommendation for separate workspaces for non-SOC data comes from a purely cost-based perspective, and there are other key design factors to examine when determining whether to use a single or multiple workspaces. To avoid double ingestion costs, consider collecting overlapped data on a single workspace only with table-level Azure RBAC .
+However, this recommendation for separate workspaces for non-SOC data comes from a purely cost-based perspective, and there are other key design factors to examine when determining whether to use a single or multiple workspaces. To avoid double ingestion costs, consider collecting overlapped data on a single workspace only with table-level Azure RBAC.
### Step 6: Multiple regions?
However, this recommendation for separate workspaces for non-SOC data comes from
- If the data egress cost is enough of a concern to make maintaining separate workspaces worthwhile, use a separate Microsoft Sentinel workspace for each region where you need to reduce the data egress cost.
- <a name="note5"></a>[Decision tree note #5](#decision-tree): We recommend that you have as few workspaces as possible. Use the [Azure pricing calculator](billing.md#estimate-costs-before-using-microsoft-sentinel) to estimate the cost and determine which regions you actually need, and combine workspaces for regions with low egress costs. Bandwidth costs may be only a small part of your Azure bill when compared with separate Microsoft Sentinel and Log Analytics ingestion costs.
+ <a name="note5"></a>[Decision tree note #5](#decision-tree): We recommend that you have as few workspaces as possible. Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=azure-sentinel) to estimate the cost and determine which regions you actually need, and combine workspaces for regions with low egress costs. Bandwidth costs may be only a small part of your Azure bill when compared with separate Microsoft Sentinel and Log Analytics ingestion costs.
For example, your cost might be estimated as follows:

- 1,000 VMs, each generating 1 GB / day;
- - Sending data from a US region to a EU region;
+ - Sending data from a US region to an EU region;
- Using a 2:1 compression rate in the agent

The calculation for this estimated cost would be: `1000 VMs * (1 GB/day ÷ 2) * 30 days/month * $0.05/GB = $750/month bandwidth cost`
However, this recommendation for separate workspaces for non-SOC data comes from
#### Considerations for resource-context or table-level RBAC
-When planning to use resource-context or table level RBAC, consider the following:
+When planning to use resource-context or table level RBAC, consider the following information:
- <a name="note7"></a>[Decision tree note #7](#decision-tree): To configure resource-context RBAC for non-Azure resources, you may want to associate a Resource ID to the data when sending to Microsoft Sentinel, so that the permission can be scoped using resource-context RBAC. For more information, see [Explicitly configure resource-context RBAC](resource-context-rbac.md#explicitly-configure-resource-context-rbac) and [Access modes by deployment](../azure-monitor/logs/design-logs-deployment.md).
sentinel Investigate Large Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/investigate-large-datasets.md
+
+ Title: Start an investigation by searching large datasets - Microsoft Sentinel
+description: Learn about search jobs and restoring archived data in Microsoft Sentinel.
++ Last updated : 01/21/2022+++
+# Start an investigation by searching for events in large datasets (preview)
+
+One of the primary activities of a security team is to search logs for specific events. For example, you might search logs for the activities of a specific user within a given time frame.
+
+In Microsoft Sentinel, you can search across long time periods in extremely large datasets by using a search job. While you can run a search job on any type of log, search jobs are ideally suited to searching archived logs. If you need to do a full investigation on archived data, you can restore that data into the hot cache to run high-performance queries and analytics.
+
+> [!IMPORTANT]
+> The search job and restore features are currently in **PREVIEW**. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+## Search large datasets
+
+Use a search job when you start an investigation to find specific events in logs within a given time frame. You can search all your logs to find events that match your criteria and filter through the results.
+
+Search in Microsoft Sentinel is built on top of search jobs. Search jobs are asynchronous queries that fetch records. The results are returned to a search table that's created in your Log Analytics workspace after you start the search job. The search job uses parallel processing to run the search across long time spans, in extremely large datasets. So search jobs don't impact the workspace's performance or availability.
+
+Search results remain in a search results table that has a *_SRCH suffix.
+
+The following image shows example search criteria for a search job.
++
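+Once a search job completes, you can query its results table like any other table. The following is a sketch only; it assumes a search job was run over the `SecurityEvent` table, producing a hypothetical results table named `SecurityEvent_SRCH`.
+
+```kusto
+// Sketch: summarize the results of a hypothetical search job over SecurityEvent.
+SecurityEvent_SRCH
+| summarize Events = count() by Computer, EventID
+| sort by Events desc
+```
+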
+### Supported log types
+
+Use search to find events in any of the following log types:
+
+- [Analytics logs](../azure-monitor/logs/data-platform-logs.md)
+- [Basic logs (preview)](../azure-monitor/logs/basic-logs-configure.md)
+
+You can also search analytics or basic log data stored in [archived logs (preview)](../azure-monitor/logs/data-retention-archive.md).
+
+### Limitations of a search job
+
+Before you start a search job, be aware of the following limitations:
+
+- Optimized to query one table at a time.
+- Search date range is up to one year.
+- Supports long running searches up to a 24-hour time-out.
+- Results are limited to one million records in the record set.
+- Concurrent execution is limited to five search jobs per workspace.
+- Limited to 100 search results tables per workspace.
+- Limited to 100 search job executions per day per workspace.
+
+To learn more, see [Search job in Azure Monitor](../azure-monitor/logs/search-jobs.md) in the Azure Monitor documentation.
+
+## Restore historical data from archived logs
+
+When you need to do a full investigation on data stored in archived logs, restore a table from the **Search** page in Microsoft Sentinel. Specify a target table and time range for the data you want to restore. Within a few minutes, the log data is restored and available within the Log Analytics workspace. Then you can use the data in high-performance queries that support full KQL.
+
+A restored log table is available in a new table that has a *_RST suffix. The restored data is available as long as the underlying source data is available. But you can delete restored tables at any time without deleting the underlying source data. To save costs, we recommend you delete the restored table when you no longer need it.
+
+The following image shows the restore option on a saved search.
++
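+After the restore completes, the restored table supports full KQL. The following sketch assumes an archived `SecurityEvent` table was restored into a hypothetical `SecurityEvent_RST` table.
+
+```kusto
+// Sketch: run a full-KQL query over a hypothetical restored table (SecurityEvent_RST).
+SecurityEvent_RST
+| where EventID == 4625
+| summarize FailedLogons = count() by Account, Computer
+| top 10 by FailedLogons
+```
+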
+### Limitations of log restore
+
+Before you start to restore an archived log table, be aware of the following limitations:
++
+- Restore data for a minimum of two days.
+- Restore data more than 14 days old.
+- Restore up to 60 TB.
+- Restore is limited to one active restore per table.
+- Restore up to four archived tables per workspace per week.
+- Limited to two concurrent restore jobs per workspace.
+
+To learn more, see [Restore logs in Azure Monitor](../azure-monitor/logs/restore.md).
+
+## Bookmark search results or restored data rows
+
+Similar to the [threat hunting dashboard](hunting.md#use-the-hunting-dashboard), bookmark rows that contain information you find interesting so you can attach them to an incident or refer to them later. For more information, see [Create bookmarks](hunting.md#create-bookmarks).
+
+## Next steps
+
+- [Search across long time spans in large datasets (preview)](search-jobs.md)
+- [Restore archived logs from search (preview)](restore.md)
sentinel Normalization About Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-about-schemas.md
The currently supported list of vendors and products used in the [EventVendor](#
| Corelight | Zeek | | GCP | Cloud DNS | | Infoblox | NIOS |
-| Microsoft | - AAD<br> - Azure Defender for IoT<br> - Azure Firewall<br> - Azure File Storage<br> - DNS Server<br> - M365 Defender for Endpoint<br> - NSGFlow <br> - Security Events<br> - Sharepoint 365<br>- Sysmon<br> - Sysmon for Linux<br> - VMConnection<br> - Windows Firewall<br> - WireData <br>
+| Microsoft | - AAD<br> - Azure Firewall<br> - Azure File Storage<br> - DNS Server<br> - Microsoft 365 Defender for Endpoint<br> - Microsoft Defender for IoT<br> - NSGFlow <br> - Security Events<br> - Sharepoint 365<br>- Sysmon<br> - Sysmon for Linux<br> - VMConnection<br> - Windows Firewall<br> - WireData <br>
| Okta | Okta | | Palo Alto | - PanOS<br> - CDL<br> | | Zscaler | - ZIA DNS<br> - ZIA Firewall<br> - ZIA Proxy |
sentinel Purview Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/purview-solution.md
+
+ Title: Integrate Microsoft Sentinel and Azure Purview | Microsoft Docs
+description: This tutorial describes how to use the Microsoft Sentinel data connector and solution for Azure Purview to enable data sensitivity insights, create rules that monitor when classifications are detected, and get an overview of the data found by Azure Purview and where sensitive data resides in your organization.
++ Last updated : 02/08/2022+++
+# Tutorial: Integrate Microsoft Sentinel and Azure Purview (Public Preview)
+
+> [!IMPORTANT]
+>
+> The *Azure Purview* solution is in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+[Azure Purview](/azure/purview/) provides organizations with visibility into where sensitive information is stored, helping prioritize at-risk data for protection.
+
+Integrate Azure Purview with Microsoft Sentinel to help narrow down the high volume of incidents and threats surfaced in Microsoft Sentinel, and understand the most critical areas to start.
+
+Start by ingesting your Azure Purview logs into Microsoft Sentinel through a data connector. Then use a Microsoft Sentinel workbook to view data such as assets scanned, classifications found, and labels applied by Azure Purview. Use analytics rules to create alerts for changes within data sensitivity.
+
+Customize the Azure Purview workbook and analytics rules to best suit the needs of your organization, and combine Azure Purview logs with data ingested from other sources to create enriched insights within Microsoft Sentinel.
+
+In this tutorial, you:
+
+> [!div class="checklist"]
+>
+> * Install the Microsoft Sentinel solution for Azure Purview
+> * Enable your Azure Purview data connector
+> * Learn about the workbook and analytics rules deployed to your Microsoft Sentinel workspace with the Azure Purview solution
+
+## Prerequisites
+
+Before you start, make sure you have both a [Microsoft Sentinel workspace](quickstart-onboard.md) and [Azure Purview](/azure/purview/create-catalog-portal) onboarded, and that your user has the following roles:
+
+- **An Azure Purview account [Owner](/azure/role-based-access-control/built-in-roles) or [Contributor](/azure/role-based-access-control/built-in-roles) role**, to set up diagnostic settings and configure the data connector.
+
+- **A [Microsoft Sentinel Contributor](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) role**, with write permissions to enable data connector, view the workbook, and create analytic rules.
+
+## Install the Azure Purview solution
+
+The **Azure Purview** solution is a set of bundled content, including a data connector, workbook, and analytics rules configured specifically for Azure Purview data.
+
+> [!TIP]
+> Microsoft Sentinel [solutions](sentinel-solutions.md) can help you onboard Microsoft Sentinel security content for a specific data connector using a single process.
+
+**To install the solution**
+
+1. In Microsoft Sentinel, under **Content management**, select **Content hub** and then locate the **Azure Purview** solution.
+
+1. At the bottom right, select **View details**, and then **Create**. Select the subscription, resource group, and workspace where you want to install the solution, and then review the data connector and related security content that will be deployed.
+
+ When you're done, select **Review + Create** to install the solution.
+
+For more information, see [About Microsoft Sentinel content and solutions](sentinel-solutions.md) and [Centrally discover and deploy out-of-the-box content and solutions](sentinel-solutions-deploy.md).
++
+## Start ingesting Azure Purview data in Microsoft Sentinel
+
+Configure diagnostic settings to have Azure Purview data sensitivity logs flow into Microsoft Sentinel, and then run an Azure Purview scan to start ingesting your data.
+
+Diagnostics settings send log events only after a full scan is run, or when a change is detected during an incremental scan. It typically takes about 10-15 minutes for the logs to start appearing in Microsoft Sentinel.
+
+> [!TIP]
+> Instructions for enabling your data connector are also available in Microsoft Sentinel, on the **Azure Purview** data connector page.
+>
+
+**To enable data sensitivity logs to flow into Microsoft Sentinel**:
+
+1. Navigate to your Azure Purview account in the Azure portal and select **Diagnostic settings**.
+
+ :::image type="content" source="media/purview-solution/diagnostics-settings.png" alt-text="Screenshot of an Azure Purview account Diagnostics settings page.":::
+
+1. Select **+ Add diagnostic setting** and configure the new setting to send logs from Azure Purview to Microsoft Sentinel:
+
+ - Enter a meaningful name for your setting.
+ - Under **Logs**, select **DataSensitivityLogEvent**.
+ - Under **Destination details**, select **Send to Log Analytics workspace**, and select the subscription and workspace details used for Microsoft Sentinel.
+
+1. Select **Save**.
+
+For more information, see [Connect Microsoft Sentinel to Azure, Windows, Microsoft, and Amazon services](connect-azure-windows-microsoft-services.md#diagnostic-settings-based-connections).
+
+**To run an Azure Purview scan and view data in Microsoft Sentinel**:
+
+1. In Azure Purview, run a full scan of your resources. For more information, see [Manage data sources in Azure Purview](/azure/purview/manage-data-sources).
+
+1. After your Azure Purview scans have completed, go back to the Azure Purview data connector in Microsoft Sentinel and confirm that data has been received, for example with a query like the one shown after this procedure.
+
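+For example, you can confirm that data is arriving with a query along these lines (a sketch; `PurviewDataSensitivityLogs` is the table populated by this connector):
+
+```kusto
+// Sketch: confirm that Azure Purview data sensitivity logs are arriving in the workspace.
+PurviewDataSensitivityLogs
+| where TimeGenerated > ago(1d)
+| summarize Findings = count() by Classification = tostring(Classification), SourceType
+| sort by Findings desc
+```
+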
+## View recent data discovered by Azure Purview
+
+The Azure Purview solution provides two analytics rule templates out-of-the-box that you can enable, including a generic rule and a customized rule.
+
+- The generic version, *Sensitive Data Discovered in the Last 24 Hours*, monitors for the detection of any classifications found across your data estate during an Azure Purview scan.
+- The customized version, *Sensitive Data Discovered in the Last 24 Hours - Customized*, monitors and generates alerts each time the specified classification, such as Social Security Number, has been detected.
+
+Use this procedure to customize the Azure Purview analytics rules' queries to detect assets with specific classification, sensitivity label, source region, and more. Combine the data generated with other data in Microsoft Sentinel to enrich your detections and alerts.
+
+> [!NOTE]
+> Microsoft Sentinel analytics rules are KQL queries that trigger alerts when suspicious activity has been detected. Customize and group your rules together to create incidents for your SOC team to investigate.
+>
+
+### Modify the Azure Purview analytics rule templates
+
+1. In Microsoft Sentinel, under **Configuration** select **Analytics** > **Active rules**, and search for a rule named **Sensitive Data Discovered in the Last 24 Hours - Customized**.
+
+ By default, analytics rules created by Microsoft Sentinel solutions are set to disabled. Make sure to enable the rule for your workspace before continuing:
+
+ 1. Select the rule, and then at the bottom right, select **Edit**.
+
+ 1. In the analytics rule wizard, at the bottom of the **General** tab, toggle the **Status** to **Enabled**.
+
+1. On the **Set rule logic** tab, adjust the **Rule query** to query for the data fields and classifications you want to generate alerts for. For more information on what can be included in your query, see:
+
+ - Supported data fields are the columns of the [PurviewDataSensitivityLogs](/azure/azure-monitor/reference/tables/purviewdatasensitivitylogs) table
+ - [Supported classifications](/azure/purview/supported-classifications)
+
+ Formatted queries have the following syntax: `| where {data-field} contains {specified-string}`.
+
+ For example:
+
+ ```Kusto
+ PurviewDataSensitivityLogs
+ | where Classification contains "Social Security Number"
+ | where SourceRegion contains "westeurope"
+ | where SourceType contains "Amazon"
+ | where TimeGenerated > ago(24h)
+ ```
+
+1. Under **Query scheduling**, define settings so that the rules show data discovered in the last 24 hours. We also recommend that you set **Event grouping** to group all events into a single alert.
+
+ :::image type="content" source="media/purview-solution/analytics-rule-wizard.png" alt-text="Screenshot of the analytics rule wizard defined to show data detected in the last 24 hours.":::
+
+1. If needed, customize the **Incident settings** and **Automated response** tabs. For example, in the **Incidents settings** tab, verify that **Create incidents from alerts triggered by this analytics rule** is selected.
+
+1. On the **Review and update** tab, select **Save**.
+
+For more information, see [Create custom analytics rules to detect threats](detect-threats-custom.md).
+
+### View Azure Purview data in Microsoft Sentinel workbooks
+
+In Microsoft Sentinel, under **Threat management**, select **Workbooks** > **My workbooks**, and locate the **Azure Purview** workbook deployed with the **Azure Purview** solution. Open the workbook and customize any parameters as needed.
++
+The Azure Purview workbook displays the following tabs:
+
+- **Overview**: Displays the regions and resources types where the data is located.
+- **Classifications**: Displays assets that contain specified classifications, like Credit Card Numbers.
+- **Sensitivity labels**: Displays the assets that have confidential labels, and the assets that currently have no labels.
+
+To drill down in the Azure Purview workbook:
+
+- Select a specific data source to jump to that resource in Azure.
+- Select an asset path link to show more details, with all the data fields shared in the ingested logs.
+- Select a row in the **Data Source**, **Classification**, or **Sensitivity Label** tables to filter the Asset Level data as configured.
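+
+To explore the same data directly in **Logs**, you can run queries similar to the ones behind these tabs. The following is a minimal sketch that lists assets detected in the last day that have no sensitivity label. The `SensitivityLabel` column name is an assumption here; check the `PurviewDataSensitivityLogs` schema in your workspace before relying on it:
+
+```kusto
+// Sketch only: assets found in the last 24 hours with no sensitivity label.
+// SensitivityLabel is assumed to exist; verify it against your table schema.
+PurviewDataSensitivityLogs
+| where TimeGenerated > ago(24h)
+| where isempty(SensitivityLabel)
+| project TimeGenerated, Classification, SourceRegion, SourceType
+```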
+
+### Investigate incidents triggered by Azure Purview events
+
+When investigating incidents triggered by the Azure Purview analytics rules, find detailed information on the assets and classifications found in the incident's **Events**.
+
+For example:
++
+## Next steps
+
+For more information, see:
+
+- [Visualize collected data](get-visibility.md)
+- [Create custom analytics rules to detect threats](detect-threats-custom.md)
+- [Investigate incidents with Microsoft Sentinel](investigate-cases.md)
+- [About Microsoft Sentinel content and solutions](sentinel-solutions.md)
+- [Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions (Public preview)](sentinel-solutions-deploy.md)
sentinel Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/restore.md
+
+ Title: Restore archived logs from search - Microsoft Sentinel
+description: Learn how to restore archived logs from search job results.
++ Last updated : 01/20/2022+++
+# Restore archived logs from search (preview)
+
+Restore data from an archived log to use in high-performance queries and analytics.
+
+Before you restore data in an archived log, see [Start an investigation by searching large datasets (preview)](investigate-large-datasets.md) and [Restore in Azure Monitor](../azure-monitor/logs/restore.md).
+
+> [!IMPORTANT]
+> The search job and restore features are currently in **PREVIEW**. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+## Restore archived log data
+
+To restore archived log data in Microsoft Sentinel, specify the table and time range for the data you want to restore. Within a few minutes, the log data is available in the Log Analytics workspace. Then you can use the data in high-performance queries that support full KQL.
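+
+For example, after the restore completes you can query the restored table like any other Log Analytics table. The following is a minimal sketch that assumes a restored table named `SecurityEvent_RST`; substitute the table name shown on the **Restoration** tab for your own restore job:
+
+```kusto
+// Sketch only: SecurityEvent_RST is a placeholder name for a restored table.
+// Use the table name that the Restoration tab shows for your restore job.
+SecurityEvent_RST
+| where TimeGenerated between (datetime(2022-01-01) .. datetime(2022-02-01))
+| summarize EventCount = count() by Computer
+| top 10 by EventCount
+```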
+
+You can restore archived data directly from the **Search (preview)** page or from a saved search.
+
+1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.
+1. Under **General**, select **Search (preview)**.
+1. Restore log data in one of two ways:
+ - At the top of the **Search** page, select **Restore**.
+ :::image type="content" source="media/restore/search-page-restore.png" alt-text="Screenshot of restore button at the top of the search page.":::
+ - Select the **Saved Searches** tab and **Restore** on the appropriate search.
+ :::image type="content" source="media/restore/search-results-restore.png" alt-text="Screenshot of the restore link on a saved search.":::
+
+1. Select the table you want to restore.
+1. Select the time range of the data that you want to restore.
+1. Select **Restore**.
+
+ :::image type="content" source="media/restore/restoration-page.png" alt-text="Screenshot of the restoration page with table and time range selected.":::
+
+1. Wait for the log data to be restored. View the status of your restoration job by selecting the **Restoration** tab.
+
+## View restored log data
+
+View the status and results of the log data restore by going to the **Restoration** tab. You can view the restored data when the status of the restore job shows **Data Available**.
+
+1. In your Microsoft Sentinel workspace, select **Search** > **Restoration**.
+
+ :::image type="content" source="media/restore/restoration-tab.png" alt-text="Screenshot of the restoration tab on the search page.":::
+
+1. When your restore job is complete, select the table name.
+
+ :::image type="content" source="media/restore/data-available-select-table.png" alt-text="Screenshot that shows rows with completed restore jobs and a table selected.":::
+
+1. Review the results.
+
+ :::image type="content" source="media/restore/restored-data-logs-view.png" alt-text="Screenshot that shows the logs query pane with the restored table results.":::
+
+ The Logs query pane shows the name of the table that contains the restored data. The **Time range** is set to a custom time range that uses the start and end times of the restored data.
+
+## Delete restored data tables
+
+To save costs, we recommend you delete the restored table when you no longer need it. When you delete a restored table, Azure doesn't delete the underlying source data.
++
+1. In your Microsoft Sentinel workspace, select **Search** > **Restoration**.
+1. Identify the table you want to delete.
+1. Select **Delete** for that table row.
+
+ :::image type="content" source="media/restore/delete-restored-table.png" alt-text="Screenshot of restoration tab that shows the delete button on each row.":::
+
+## Next steps
+
+- [Hunt with bookmarks](bookmarks.md)
sentinel Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/roles.md
Users with particular job requirements may need to be assigned additional roles
- **Creating and deleting workbooks**
- For a user to create and delete a Microsoft Sentinel workbook, the user will also need to be assigned with the Azure Monitor role of [Monitoring Contributor](../role-based-access-control/built-in-roles.md#monitoring-contributor). This role is not necessary for *using* workbooks, but only for creating and deleting.
+ To create and delete a Microsoft Sentinel workbook, a user needs either the Microsoft Sentinel Contributor role, or a lesser Microsoft Sentinel role together with the Azure Monitor role of [Workbook Contributor](../role-based-access-control/built-in-roles.md#workbook-contributor). The Workbook Contributor role isn't necessary for *using* workbooks, only for creating and deleting them.
### Other roles you might see assigned
For example, a user who is assigned the **Microsoft Sentinel Reader** role, but
The following table summarizes the Microsoft Sentinel roles and their allowed actions in Microsoft Sentinel.
-| Role | Create and run playbooks| Create and edit analytic rules and other Microsoft Sentinel resources [*](#workbooks) | Manage incidents (dismiss, assign, etc.) | View data, incidents, workbooks, and other Microsoft Sentinel resources |
+| Role | Create and run playbooks| Create and edit analytics rules, workbooks, and other Microsoft Sentinel resources | Manage incidents (dismiss, assign, etc.) | View data, incidents, workbooks, and other Microsoft Sentinel resources |
||||||
-| Microsoft Sentinel Reader | -- | -- | -- | &#10003; |
-| Microsoft Sentinel Responder | -- | -- | &#10003; | &#10003; |
+| Microsoft Sentinel Reader | -- | --[*](#workbooks) | -- | &#10003; |
+| Microsoft Sentinel Responder | -- | --[*](#workbooks) | &#10003; | &#10003; |
| Microsoft Sentinel Contributor | -- | &#10003; | &#10003; | &#10003; |
| Microsoft Sentinel Contributor + Logic App Contributor | &#10003; | &#10003; | &#10003; | &#10003; |
| | | | | |
-<a name=workbooks></a>* Creating and deleting workbooks requires the additional [Monitoring Contributor](../role-based-access-control/built-in-roles.md#monitoring-contributor) role. For more information, see [Additional roles and permissions](#additional-roles-and-permissions).
+<a name=workbooks></a>* Users with these roles can create and delete workbooks with the additional [Workbook Contributor](../role-based-access-control/built-in-roles.md#workbook-contributor) role. For more information, see [Additional roles and permissions](#additional-roles-and-permissions).
+
+Consult the [Role recommendations](#role-recommendations) section for best practices on which roles to assign to which users in your SOC.
+ ## Custom roles and advanced Azure RBAC - **Custom roles**. In addition to, or instead of, using Azure built-in roles, you can create Azure custom roles for Microsoft Sentinel. Azure custom roles for Microsoft Sentinel are created the same way you create other [Azure custom roles](../role-based-access-control/custom-roles-rest.md#create-a-custom-role), based on [specific permissions to Microsoft Sentinel](../role-based-access-control/resource-provider-operations.md#microsoftsecurityinsights) and to [Azure Log Analytics resources](../role-based-access-control/resource-provider-operations.md#microsoftoperationalinsights).
sentinel Search Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/search-jobs.md
+
+ Title: Search across long time spans in large datasets - Microsoft Sentinel
+description: Learn how to use search jobs to search extremely large datasets.
++ Last updated : 01/14/2022+++
+# Search across long time spans in large datasets (preview)
+
+Use a search job when you start an investigation to find specific events in logs within a given time frame. You can search all your logs, filter through them, and look for events that match your criteria.
+
+Before you start a search job, see [Start an investigation by searching large datasets (preview)](investigate-large-datasets.md) and [Search jobs in Azure Monitor](../azure-monitor/logs/search-jobs.md).
+
+> [!IMPORTANT]
+> The search job feature is currently in **PREVIEW**. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+## Start a search job
+
+Go to **Search** in Microsoft Sentinel to enter your search criteria.
+
+1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.
+1. Under **General**, select **Search (preview)**.
+1. In the **Search** box, enter the search term.
+1. Select the appropriate **Time range**.
+1. Select the **Table** that you want to search.
+1. When you're ready to start the search job, select **Search**.
+
+ :::image type="content" source="media/search-jobs/search-job-criteria.png" alt-text="Screenshot of search page with search criteria of administrator, timerange last 90 days, and table selected.":::
+
+ When the search job starts, a notification and the job status show on the search page.
+
+1. Wait for your search job to complete. Depending on your dataset and search criteria, the search job may take a few minutes or up to 24 hours to complete. If your search job takes longer than 24 hours, it will time out. If that happens, refine your search criteria and try again.
+
+## View search job results
+
+View the status and results of your search job by going to the **Saved Searches** tab.
+
+1. In your Microsoft Sentinel workspace, select **Search** > **Saved Searches**.
+
+ :::image type="content" source="media/search-jobs/saved-searches-tab.png" alt-text="Screenshot that shows saved searches tab on the search page.":::
+
+1. On the search card, select **View search results**.
+
+ :::image type="content" source="media/search-jobs/view-search-results.png" alt-text="Screenshot that shows the link to view search results at the bottom of the search job card.":::
+
+1. By default, you see all the results that match your original search criteria.
+
+ :::image type="content" source="media/search-jobs/search-job-results.png" alt-text="Screenshot that shows the logs page with search job results.":::
+
+ In the search query, notice the time columns referenced.
+
+ - `TimeGenerated` is the date and time the data was ingested into the search table.
+ - `_OriginalTimeGenerated` is the date and time the record was created.
+1. To refine the list of results returned from the search table, edit the KQL query, as shown in the sketch after these steps.
+
+1. As you're reviewing your search job results, bookmark rows that contain information you find interesting so you can attach them to an incident or refer to them later.
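+
+The following is a minimal sketch of such a refinement. It assumes a search results table named `Syslog_SRCH` (search job results are written to a table with an `_SRCH` suffix) and the `administrator` search term used earlier; adjust the table and column names to match your own results:
+
+```kusto
+// Sketch only: Syslog_SRCH is a placeholder for your search results table,
+// and SyslogMessage/Computer are columns from the source Syslog table.
+Syslog_SRCH
+| where _OriginalTimeGenerated > ago(90d)
+| where SyslogMessage contains "administrator"
+| project _OriginalTimeGenerated, Computer, SyslogMessage
+```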
++
+## Next steps
+
+To learn more, see the following topics.
+
+- [Hunt with bookmarks](bookmarks.md)
+- [Restore archived logs](restore.md)
+- [Configure data retention and archive policies in Azure Monitor Logs (Preview)](../azure-monitor/logs/data-retention-archive.md)
sentinel Sentinel Solutions Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions-catalog.md
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by |
|||||
|**Azure Firewall Solution for Sentinel**| [Data connector](data-connectors-reference.md#azure-firewall), workbook, analytics rules, playbooks, hunting queries, custom Logic App connector |Security - Network Security, Networking | Community|
+| **Azure Purview** | [Data connector](data-connectors-reference.md#azure-purview), workbook, analytics rules <br><br>For more information, see [Tutorial: Integrate Microsoft Sentinel and Azure Purview](purview-solution.md). | Compliance, Security - Cloud Security, Security - Information Protection | Microsoft |
|**Microsoft Sentinel for SQL PaaS** | [Data connector](data-connectors-reference.md#azure-sql-databases), workbook, analytics rules, playbooks, hunting queries | Application | Community |
|**Microsoft Sentinel Training Lab** |Workbook, analytics rules, playbooks, hunting queries | Training and tutorials |Microsoft |
|**Azure SQL** | [Data connector](data-connectors-reference.md#azure-sql-databases), workbook, analytics, playbooks, hunting queries | Application |Microsoft |
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
If you're looking for items older than six months, you'll find them in the [Arch
## February 2022
+- [View Azure Purview data in Microsoft Sentinel (Public preview)](#view-azure-purview-data-in-microsoft-sentinel-public-preview)
- [Manually run playbooks based on the incident trigger (Public preview)](#manually-run-playbooks-based-on-the-incident-trigger-public-preview)
+- [Search across long time spans in large datasets (Public preview)](#search-across-long-time-spans-in-large-datasets-public-preview)
+- [Restore archived logs from search (Public preview)](#restore-archived-logs-from-search-public-preview)
+
+### View Azure Purview data in Microsoft Sentinel (Public preview)
+
+Microsoft Sentinel now integrates directly with Azure Purview by providing an out-of-the-box solution.
+
+The Azure Purview solution includes the Azure Purview data connector, related analytics rule templates, and a workbook that you can use to visualize sensitivity data detected by Azure Purview, together with other data ingested in Microsoft Sentinel.
++
+For more information, see [Tutorial: Integrate Microsoft Sentinel and Azure Purview](purview-solution.md).
### Manually run playbooks based on the incident trigger (Public preview)
For these and other reasons, Microsoft Sentinel now allows you to [**run playboo
Learn more about [running incident-trigger playbooks manually](tutorial-respond-threats-playbook.md#run-a-playbook-manually-on-an-incident).
+### Search across long time spans in large datasets (Public preview)
+
+Use a search job when you start an investigation to find specific events in logs within a given time frame. You can search all your logs, filter through them, and look for events that match your criteria.
+
+Search jobs are asynchronous queries that fetch records. The results are returned to a search table that's created in your Log Analytics workspace after you start the search job. The search job uses parallel processing to run the search across long time spans, in extremely large datasets. So search jobs don't impact the workspace's performance or availability.
+
+Use search to find events in any of the following log types:
+
+- [Analytics logs](../azure-monitor/logs/data-platform-logs.md)
+- [Basic logs (preview)](../azure-monitor/logs/basic-logs-configure.md)
+
+You can also search analytics or basic log data stored in [archived logs (preview)](../azure-monitor/logs/data-retention-archive.md).
+
+For more information, see:
+
+- [Start an investigation by searching large datasets (preview)](investigate-large-datasets.md)
+- [Search across long time spans in large datasets (preview)](search-jobs.md)
+
+For information about billing for basic logs or log data stored in archived logs, see [Plan costs for Microsoft Sentinel](billing.md#understand-the-full-billing-model-for-microsoft-sentinel).
+
+### Restore archived logs from search (Public preview)
+
+When you need to do a full investigation on data stored in archived logs, restore a table from the **Search** page in Microsoft Sentinel. Specify a target table and time range for the data you want to restore. Within a few minutes, the log data is restored and available within the Log Analytics workspace. Then you can use the data in high-performance queries that support full KQL.
+
+For more information, see:
+
+- [Start an investigation by searching large datasets (preview)](investigate-large-datasets.md)
+- [Restore archived logs from search (preview)](restore.md)
+ ## January 2022 - [Support for MITRE ATT&CK techniques (Public preview)](#support-for-mitre-attck-techniques-public-preview)
service-fabric Service Fabric Powershell Add Application Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-add-application-certificate.md
Title: Add application cert to a cluster in Powershell
+ Title: Add application cert to a cluster in PowerShell
description: Azure PowerShell Script Sample - Add an application certificate to a Service Fabric cluster. documentationcenter:
This script uses the following commands: Each command in the table links to comm
For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/).
-Additional Azure Powershell samples for Azure Service Fabric can be found in the [Azure PowerShell samples](../service-fabric-powershell-samples.md).
+Additional Azure PowerShell samples for Azure Service Fabric can be found in the [Azure PowerShell samples](../service-fabric-powershell-samples.md).
service-fabric Service Fabric Powershell Add Nsg Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-add-nsg-rule.md
 Title: Add a network security group rule in Powershell
+ Title: Add a network security group rule in PowerShell
description: Azure PowerShell Script Sample - Adds a network security group to allow inbound traffic on a specific port. documentationcenter:
service-fabric Service Fabric Powershell Change Rdp Port Range https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-change-rdp-port-range.md
This script uses the following commands. Each command in the table links to comm
For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/).
-Additional Azure Powershell samples for Azure Service Fabric can be found in the [Azure PowerShell samples](../service-fabric-powershell-samples.md).
+Additional Azure PowerShell samples for Azure Service Fabric can be found in the [Azure PowerShell samples](../service-fabric-powershell-samples.md).
service-fabric Service Fabric Powershell Change Rdp User And Pw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-change-rdp-user-and-pw.md
A single node type with five nodes, for example, has a duration of 45 to 60 minu
For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/).
-Additional Azure Powershell samples for Azure Service Fabric can be found in the [Azure PowerShell samples](../service-fabric-powershell-samples.md).
+Additional Azure PowerShell samples for Azure Service Fabric can be found in the [Azure PowerShell samples](../service-fabric-powershell-samples.md).
service-fabric Service Fabric Powershell Create Secure Cluster Cert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-create-secure-cluster-cert.md
Title: Create a Service Fabric cluster in Powershell
+ Title: Create a Service Fabric cluster in PowerShell
description: Azure PowerShell Script Sample - Create a Service Fabric cluster secured with an X.509 certificate. documentationcenter:
This script uses the following commands. Each command in the table links to comm
For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/).
-Additional Azure Powershell samples for Azure Service Fabric can be found in the [Azure PowerShell samples](../service-fabric-powershell-samples.md).
+Additional Azure PowerShell samples for Azure Service Fabric can be found in the [Azure PowerShell samples](../service-fabric-powershell-samples.md).
service-fabric Service Fabric Powershell Deploy Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-deploy-application.md
Title: Deploy application to a cluster in Powershell
+ Title: Deploy application to a cluster in PowerShell
description: Azure PowerShell Script Sample - Deploy an application to a Service Fabric cluster. documentationcenter:
This script uses the following commands. Each command in the table links to comm
For more information on the Service Fabric PowerShell module, see [Azure PowerShell documentation](/powershell/azure/service-fabric/overview).
-Additional Powershell samples for Azure Service Fabric can be found in the [Azure PowerShell samples](../service-fabric-powershell-samples.md).
+Additional PowerShell samples for Azure Service Fabric can be found in the [Azure PowerShell samples](../service-fabric-powershell-samples.md).
service-fabric Service Fabric Powershell Open Port In Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-open-port-in-load-balancer.md
Title: Open application port in load balancer in Powershell
+ Title: Open application port in load balancer in PowerShell
description: Azure PowerShell Script Sample - Open a port in the Azure load balancer for a Service Fabric application. documentationcenter:
This script uses the following commands. Each command in the table links to comm
For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/).
-Additional Powershell samples for Azure Service Fabric can be found in the [Azure PowerShell samples](../service-fabric-powershell-samples.md).
+Additional PowerShell samples for Azure Service Fabric can be found in the [Azure PowerShell samples](../service-fabric-powershell-samples.md).
service-fabric Service Fabric Powershell Remove Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-remove-application.md
Title: Remove application from a cluster in Powershell
+ Title: Remove application from a cluster in PowerShell
description: Azure PowerShell Script Sample - Remove an application from a Service Fabric cluster. documentationcenter:
-# Remove an application from a Service Fabric cluster using Powershell
+# Remove an application from a Service Fabric cluster using PowerShell
This sample script deletes a running Service Fabric application instance and unregisters an application type and version from the cluster. Deleting the application instance also deletes all the running service instances associated with that application. Customize the parameters as needed.
This script uses the following commands. Each command in the table links to comm
For more information on the Service Fabric PowerShell module, see [Azure PowerShell documentation](/powershell/azure/service-fabric/overview).
-Additional Powershell samples for Azure Service Fabric can be found in the [Azure PowerShell samples](../service-fabric-powershell-samples.md).
+Additional PowerShell samples for Azure Service Fabric can be found in the [Azure PowerShell samples](../service-fabric-powershell-samples.md).
service-fabric Service Fabric Powershell Upgrade Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-upgrade-application.md
Title: Upgrade a Service Fabric application in Powershell
-description: Azure PowerShell Script Sample - Upgrade and monitor an Azure Service Fabric application using Powershell.
+ Title: Upgrade a Service Fabric application in PowerShell
+description: Azure PowerShell Script Sample - Upgrade and monitor an Azure Service Fabric application using PowerShell.
documentationcenter:
This script uses the following commands. Each command in the table links to comm
For more information on the Service Fabric PowerShell module, see [Azure PowerShell documentation](/powershell/azure/service-fabric/overview).
-Additional Powershell samples for Azure Service Fabric can be found in the [Azure PowerShell samples](../service-fabric-powershell-samples.md).
+Additional PowerShell samples for Azure Service Fabric can be found in the [Azure PowerShell samples](../service-fabric-powershell-samples.md).
service-fabric Service Fabric Application Upgrade Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-upgrade-advanced.md
During rollback, the value of *UpgradeReplicaSetCheckTimeout* and the mode can s
## Next steps [Upgrading your Application Using Visual Studio](service-fabric-application-upgrade-tutorial.md) walks you through an application upgrade using Visual Studio.
-[Upgrading your Application Using Powershell](service-fabric-application-upgrade-tutorial-powershell.md) walks you through an application upgrade using PowerShell.
+[Upgrading your Application Using PowerShell](service-fabric-application-upgrade-tutorial-powershell.md) walks you through an application upgrade using PowerShell.
Control how your application upgrades by using [Upgrade Parameters](service-fabric-application-upgrade-parameters.md).
service-fabric Service Fabric Application Upgrade Data Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-upgrade-data-serialization.md
Data Contract is the recommended solution for ensuring that your data is compati
## Next steps [Upgrading your Application Using Visual Studio](service-fabric-application-upgrade-tutorial.md) walks you through an application upgrade using Visual Studio.
-[Upgrading your Application Using Powershell](service-fabric-application-upgrade-tutorial-powershell.md) walks you through an application upgrade using PowerShell.
+[Upgrading your Application Using PowerShell](service-fabric-application-upgrade-tutorial-powershell.md) walks you through an application upgrade using PowerShell.
Control how your application upgrades by using [Upgrade Parameters](service-fabric-application-upgrade-parameters.md).
service-fabric Service Fabric Application Upgrade Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-upgrade-parameters.md
warning-as-error | Allowed values are **True** and **False**. Default value is *
## Next steps [Upgrading your Application Using Visual Studio](service-fabric-application-upgrade-tutorial.md) walks you through an application upgrade using Visual Studio.
-[Upgrading your Application Using Powershell](service-fabric-application-upgrade-tutorial-powershell.md) walks you through an application upgrade using PowerShell.
+[Upgrading your Application Using PowerShell](service-fabric-application-upgrade-tutorial-powershell.md) walks you through an application upgrade using PowerShell.
[Upgrading your Application using Service Fabric CLI on Linux](service-fabric-application-lifecycle-sfctl.md#upgrade-application) walks you through an application upgrade using Service Fabric CLI.
service-fabric Service Fabric Application Upgrade Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-upgrade-troubleshooting.md
The upgrade time for an upgrade domain is limited by *UpgradeDomainTimeout*. If
[Upgrading your Application Using Visual Studio](service-fabric-application-upgrade-tutorial.md) walks you through an application upgrade using Visual Studio.
-[Upgrading your Application Using Powershell](service-fabric-application-upgrade-tutorial-powershell.md) walks you through an application upgrade using PowerShell.
+[Upgrading your Application Using PowerShell](service-fabric-application-upgrade-tutorial-powershell.md) walks you through an application upgrade using PowerShell.
Control how your application upgrades by using [Upgrade Parameters](service-fabric-application-upgrade-parameters.md).
service-fabric Service Fabric Cluster Resource Manager Application Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-resource-manager-application-groups.md
In the left example, the application doesn't have a maximum number of nodes de
The parameter that controls this behavior is called MaximumNodes. This parameter can be set during application creation, or updated for an application instance that was already running.
-Powershell
+PowerShell
``` posh New-ServiceFabricApplication -ApplicationName fabric:/AppName -ApplicationTypeName AppType1 -ApplicationTypeVersion 1.0.0.0 -MaximumNodes 3
For each application metric, there are two values that can be set:
- **Maximum Node Capacity** - This setting specifies the maximum total load for the application on a single node. If load goes over this capacity, the Cluster Resource Manager moves replicas to other nodes so that the load decreases.
-Powershell:
+PowerShell:
``` posh New-ServiceFabricApplication -ApplicationName fabric:/AppName -ApplicationTypeName AppType1 -ApplicationTypeVersion 1.0.0.0 -Metrics @("MetricName:Metric1,MaximumNodeCapacity:100,MaximumApplicationCapacity:1000")
In the example on the right, let's say that Application1 was created with the fo
- An application Metric defined with - NodeReservationCapacity of 20
-Powershell
+PowerShell
``` posh New-ServiceFabricApplication -ApplicationName fabric:/AppName -ApplicationTypeName AppType1 -ApplicationTypeVersion 1.0.0.0 -MinimumNodes 2 -Metrics @("MetricName:Metric1,NodeReservationCapacity:20")
Service Fabric reserves capacity on two nodes for Application1, and doesn't allo
## Obtaining the application load information For each application that has an Application Capacity defined for one or more metrics you can obtain the information about the aggregate load reported by replicas of its services.
-Powershell:
+PowerShell:
``` posh Get-ServiceFabricApplicationLoadInformation -ApplicationName fabric:/MyApplication1
service-fabric Service Fabric Cluster Resource Manager Autoscaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-resource-manager-autoscaling.md
serviceDescription.ScalingPolicies.Add(policy);
serviceDescription.ServicePackageActivationMode = ServicePackageActivationMode.ExclusiveProcess await fabricClient.ServiceManager.CreateServiceAsync(serviceDescription); ```
-### Using Powershell
+### Using PowerShell
```posh $mechanism = New-Object -TypeName System.Fabric.Description.PartitionInstanceCountScaleMechanism $mechanism.MinInstanceCount = 1
serviceUpdate.ScalingPolicies = new List<ScalingPolicyDescription>;
serviceUpdate.ScalingPolicies.Add(policy); await fabricClient.ServiceManager.UpdateServiceAsync(new Uri("fabric:/AppName/ServiceName"), serviceUpdate); ```
-### Using Powershell
+### Using PowerShell
```posh $mechanism = New-Object -TypeName System.Fabric.Description.AddRemoveIncrementalNamedPartitionScalingMechanism $mechanism.MinPartitionCount = 1
service-health Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Service Health description: Sample Azure Resource Graph queries for Azure Service Health showing use of resource types and tables to access Azure Service Health related resources and properties. Previously updated : 01/20/2022 Last updated : 02/16/2022
site-recovery Site Recovery Deployment Planner History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-deployment-planner-history.md
This article provides history of all versions of Azure Site Recovery Deployment
**Fixes:** -- For VMware virtual machines and physical machines, recommendation is updated to be based on replication to Managed Disks.
+- For VMware virtual machines, recommendation is updated to be based on replication to Managed Disks.
- Added support for Windows 10 (x64), Windows 8.1 (x64), Windows 8 (x64), Windows 7 (x64) SP1 or later ## Version 2.4
spring-cloud Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/faq.md
Azure Spring Cloud intelligently schedules your applications on the underlying K
### In which regions is Azure Spring Cloud available?
-East US, East US 2, Central US, South Central US, North Central US, West US, West US 2, West Europe, North Europe, UK South, Southeast Asia, Australia East, Canada Central, UAE North, Central India, Korea Central, East Asia, Japan East, South Africa North, Brazil South, France Central, and China East 2(Mooncake). [Learn More](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud)
+East US, East US 2, Central US, South Central US, North Central US, West US, West US 2, West Europe, North Europe, UK South, Southeast Asia, Australia East, Canada Central, UAE North, Central India, Korea Central, East Asia, Japan East, South Africa North, Brazil South, France Central, China East 2 (Mooncake), and China North 2 (Mooncake). [Learn More](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud)
### Is any customer data stored outside of the specified region?
We will enhance this part and avoid this error from users' applications in sho
## Next steps
-If you have further questions, see the [Azure Spring Cloud troubleshooting guide](./troubleshoot.md).
+If you have further questions, see the [Azure Spring Cloud troubleshooting guide](./troubleshoot.md).
spring-cloud How To Capture Dumps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-capture-dumps.md
az spring-cloud app deployment start-jfr \
The default value for `duration` is 60 seconds.
+## Generate a dump using the Azure portal
+
+Use the following steps to generate a heap or thread dump of your app in Azure Spring Cloud.
+
+1. In the Azure portal, navigate to your target app, then select **Troubleshooting**.
+2. In the **Troubleshooting** pane, select the app instance and the type of dump you'd like to collect.
+3. In the **File path** field, specify the mount path of your persistent storage.
+4. Select **Collect**.
+ ## Get your diagnostic files Navigate to the target file path in your persistent storage and find your dump/JFR. From there, you can download them to your local machine. The name of the generated file will be similar to *`<app-instance>_heapdump_<time-stamp>.hprof`* for the heap dump, *`<app-instance>_threaddump_<time-stamp>.txt`* for the thread dump, and *`<app-instance>_JFR_<time-stamp>.jfr`* for the JFR file. ## Next steps -- [Use the diagnostic settings of JVM options for advanced troubleshooting in Azure Spring Cloud](how-to-dump-jvm-options.md)
+* [Use the diagnostic settings of JVM options for advanced troubleshooting in Azure Spring Cloud](how-to-dump-jvm-options.md)
spring-cloud How To Custom Persistent Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-custom-persistent-storage.md
description: How to bring your own storage as persistent storages in Azure Sprin
Previously updated : 10/28/2021 Last updated : 2/18/2022
With Bring Your Own Storage, these artifacts are uploaded into a storage account
* An existing Azure Storage Account and a pre-created Azure File Share. If you need to create a storage account and file share in Azure, see [Create an Azure file share](../storage/files/storage-how-to-create-file-share.md). * The [Azure Spring Cloud extension](/cli/azure/azure-cli-extensions-overview) for the Azure CLI
+> [!IMPORTANT]
+> If you deployed your Azure Spring Cloud in your own virtual network and you want the storage account to be accessed only from the virtual network, consult the following guidance:
+> - [Use private endpoints for Azure Storage](../storage/common/storage-private-endpoints.md)
+> - [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md), especially the [Grant access from a virtual network using service endpoint](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network) section
+ ## Mount your own extra persistent storage to applications > [!NOTE]
The following are frequently asked questions (FAQ) about using your own persiste
*The `mountOptions` property is optional. The default values for above mount options are: ["uid=0", "gid=0", "file_mode=0777", "dir_mode=0777"]*
+* I'm using the service endpoint to configure the storage account to allow access only from my own virtual network. Why did I receive *Permission Denied* while trying to mount custom persistent storage to my applications?
+
+ *A service endpoint provides network access on a subnet level only. Be sure you've added both subnets used by the Azure Spring Cloud instance to the scope of the service endpoint.*
+ ## Next steps * [How to use Logback to write logs to custom persistent storage](how-to-write-log-to-custom-persistent-storage.md).
spring-cloud Quickstart Deploy Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-deploy-apps.md
Compiling the project takes 5-10 minutes. Once completed, you should have indivi
1. Create the 2 core Spring applications for PetClinic: API gateway and customers-service. ```azurecli
- az spring-cloud app create --name api-gateway --instance-count 1 --memory 2 --assign-endpoint
- az spring-cloud app create --name customers-service --instance-count 1 --memory 2
+ az spring-cloud app create --name api-gateway --instance-count 1 --memory 2Gi --assign-endpoint
+ az spring-cloud app create --name customers-service --instance-count 1 --memory 2Gi
``` 1. Deploy the JAR files built in the previous step.
Access the app gateway and customers service from browser with the **Public Url*
To get the PetClinic app functioning with all features like Admin Server, Visits and Veterinarians, you can deploy the other apps with following commands: ```azurecli
-az spring-cloud app create --name admin-server --instance-count 1 --memory 2 --assign-endpoint
-az spring-cloud app create --name vets-service --instance-count 1 --memory 2
-az spring-cloud app create --name visits-service --instance-count 1 --memory 2
+az spring-cloud app create --name admin-server --instance-count 1 --memory 2Gi --assign-endpoint
+az spring-cloud app create --name vets-service --instance-count 1 --memory 2Gi
+az spring-cloud app create --name visits-service --instance-count 1 --memory 2Gi
az spring-cloud app deploy --name admin-server --jar-path spring-petclinic-admin-server/target/spring-petclinic-admin-server-2.5.1.jar --jvm-options="-Xms2048m -Xmx2048m" az spring-cloud app deploy --name vets-service --jar-path spring-petclinic-vets-service/target/spring-petclinic-vets-service-2.5.1.jar --jvm-options="-Xms2048m -Xmx2048m" az spring-cloud app deploy --name visits-service --jar-path spring-petclinic-visits-service/target/spring-petclinic-visits-service-2.5.1.jar --jvm-options="-Xms2048m -Xmx2048m"
static-web-apps Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/configuration.md
The following example configuration blocks anonymous access and redirects all un
```json {
- "routes": {
- "route": "/*",
- "allowedRoles": ["authenticated"]
- },
+ "routes": [
+ {
+ "route": "/*",
+ "allowedRoles": ["authenticated"]
+ }
+ ],
"responseOverrides": { "401": { "statusCode": 302,
static-web-apps Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/getting-started.md
If you don't already have the [Azure Static Web Apps extension for Visual Studio
# [Blazor](#tab/blazor)
- :::image type="content" source="media/getting-started/extension-presets-blazor.png" alt-text="Application presets: Blazor":::
+ :::image type="content" source="media/getting-started/extension-presets-blazor.png" alt-text="A screenshot showing the application presets for Blazor":::
Enter **Client** as the location for the application files, since this is the root folder of the Blazor project.
If you're not going to continue to use this application, you can delete the Azur
In the Visual Studio Code Explorer window, return to the _Static Web Apps_ section and right-click on **my-first-static-web-app** and select **Delete**. ## Next steps
storage Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Storage description: Sample Azure Resource Graph queries for Azure Storage showing use of resource types and tables to access Azure Storage related resources and properties. Previously updated : 01/20/2022 Last updated : 02/16/2022
synapse-analytics Migrate To Synapse Analytics Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/migrate-to-synapse-analytics-guide.md
Rather than Azure Synapse Analytics, consider other options for operational (OLT
- Row-by-row processing needs. - Incompatible formats (for example, JSON and XML).
-## Azure Synapse Pathway
-
-One of the critical blockers customers face is translating their database objects when they migrate from one system to another. [Azure Synapse Pathway](/sql/tools/synapse-pathway/azure-synapse-pathway-overview) helps you upgrade to a modern data warehouse platform by automating the object translation of your existing data warehouse. It's a free, intuitive, and easy-to-use tool that automates the code translation to enable a quicker migration to Azure Synapse Analytics.
-
-## Prerequisites
-
-# [Migrate from SQL Server](#tab/migratefromSQLServer)
-
-To migrate your SQL Server data warehouse to Azure Synapse Analytics, make sure you've met the following prerequisites:
--- Have a data warehouse or analytics workload.-- Download the latest version of [Azure Synapse Pathway](https://www.microsoft.com/en-us/download/details.aspx?id=103061) to migrate SQL Server objects to Azure Synapse objects.-- Have a [dedicated SQL pool](../get-started-create-workspace.md) in an Azure Synapse workspace.-
-# [Migrate from Netezza](#tab/migratefromNetezza)
-
-To migrate your Netezza data warehouse to Azure Synapse Analytics, make sure you've met the following prerequisites:
--- Download the latest version of [Azure Synapse Pathway](https://www.microsoft.com/en-us/download/details.aspx?id=103061) to migrate SQL Server objects to Azure Synapse objects.-- Have a [dedicated SQL pool](../get-started-create-workspace.md) in an Azure Synapse workspace.-
-For more information, see [Azure Synapse Analytics solutions and migration for Netezza](/azure/cloud-adoption-framework/migrate/azure-best-practices/analytics/analytics-solutions-netezza).
-
-# [Migrate from Snowflake](#tab/migratefromSnowflake)
-
-To migrate your Snowflake data warehouse to Azure Synapse Analytics, make sure you've met the following prerequisites:
--- Download the latest version of [Azure Synapse Pathway](https://www.microsoft.com/en-us/download/details.aspx?id=103061) to migrate Snowflake objects to Azure Synapse objects.-- Have a [dedicated SQL pool](../get-started-create-workspace.md) in an Azure Synapse workspace.-
-# [Migrate from Oracle](#tab/migratefromOracle)
-
-To migrate your Oracle data warehouse to Azure Synapse Analytics, make sure you've met the following prerequisites:
--- Have a data warehouse or analytics workload.-- Download SQL Server Migration Assistant for Oracle to convert Oracle objects to SQL Server. For more information, see [Migrating Oracle Databases to SQL Server (OracleToSQL)](/sql/ssma/oracle/migrating-oracle-databases-to-sql-server-oracletosql).-- Download the latest version of [Azure Synapse Pathway](https://www.microsoft.com/download/details.aspx?id=103061) to migrate SQL Server objects to Azure Synapse objects.-- Have a [dedicated SQL pool](../get-started-create-workspace.md) in an Azure Synapse workspace.-
-For more information, see [Azure Synapse Analytics solutions and migration for an Oracle data warehouse](/azure/cloud-adoption-framework/migrate/azure-best-practices/analytics/analytics-solutions-exadata).
-- ## Pre-migration
synapse-analytics Overview What Is https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/overview-what-is.md
**Apache Spark for Azure Synapse** deeply and seamlessly integrates Apache Spark--the most popular open source big data engine used for data preparation, data engineering, ETL, and machine learning.
-* ML models with SparkML algorithms and AzureML integration for Apache Spark 2.4 with built-in support for Linux Foundation Delta Lake.
+* ML models with SparkML algorithms and AzureML integration for Apache Spark 3.1 with built-in support for Linux Foundation Delta Lake.
* Simplified resource model that frees you from having to worry about managing clusters. * Fast Spark start-up and aggressive autoscaling. * Built-in support for .NET for Spark allowing you to reuse your C# expertise and existing .NET code within a Spark application.
synapse-analytics Create Use External Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/create-use-external-tables.md
The following table lists the data formats supported:
|Data format (Native external tables) |Serverless SQL pool |Dedicated SQL pool |
||||
-|Paraquet | Yes (GA) | Yes (public preview) |
+|Parquet | Yes (GA) | Yes (public preview) |
|CSV | Yes | No (Alternatively, use [Hadoop external tables](develop-tables-external-tables.md?tabs=hadoop)) |
|delta | Yes | No |
|Spark | Yes | No |
traffic-manager Traffic Manager Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-monitoring.md
To configure endpoint monitoring, you must specify the following settings on you
## How endpoint monitoring works
-When the monitoring protocol is set as HTTP or HTTPS, the Traffic Manager probing agent makes a GET request to the endpoint using the protocol, port, and relative path given. An endpoint is considered healthy if probing agent receives a 200-OK response, or any of the responses configured in the **Expected status code \*ranges**. If the response is a different value or no response get received within the timeout period, the Traffic Manager probing agent reattempts according to the Tolerated Number of Failures setting. No reattempts are done if this setting is 0. The endpoint is marked unhealthy if the number of consecutive failures is higher than the Tolerated Number of Failures setting.
+When the monitoring protocol is set as HTTP or HTTPS, the Traffic Manager probing agent makes a GET request to the endpoint using the protocol, port, and relative path given. An endpoint is considered healthy if the probing agent receives a 200-OK response, or any of the responses configured in the **Expected status code ranges**. If the response is a different value or no response is received within the timeout period, the Traffic Manager probing agent reattempts according to the Tolerated Number of Failures setting. No reattempts are done if this setting is 0. The endpoint is marked unhealthy if the number of consecutive failures is higher than the Tolerated Number of Failures setting.
When the monitoring protocol is TCP, the Traffic Manager probing agent creates a TCP connection request using the port specified. If the endpoint responds to the connection request and the connection is established, that health check is marked as a success. The Traffic Manager probing agent then resets the TCP connection. In cases where the response is a different value or no response is received within the timeout period, the Traffic Manager probing agent reattempts according to the Tolerated Number of Failures setting. No reattempts are made if this setting is 0. If the number of consecutive failures is higher than the Tolerated Number of Failures setting, then that endpoint is marked unhealthy.
virtual-machines Dsc Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/dsc-template.md
This article describes the Azure Resource Manager template for the [Desired Stat
as a String) and **RegistrationKey** (provided as a [PSCredential](/dotnet/api/system.management.automation.pscredential) to onboard with Azure Automation. For details about obtaining those values, see
-[Use DSC metaconfiguration to register hybrid machines](/automation/automation-dsc-onboarding.md#Use-DSC-metaconfiguration-to-register-hybrid-machines).
+[Use DSC metaconfiguration to register hybrid machines](../../automation/automation-dsc-onboarding.md#use-dsc-metaconfiguration-to-register-hybrid-machines).
> [!NOTE] > You might encounter slightly different schema examples. The change in schema occurred in the October 2016 release. For details, see [Update from a previous format](#update-from-a-previous-format).
virtual-machines Monitor Vm Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/monitor-vm-reference.md
For more information, see a list of [platform metrics that are supported in Azur
## Metric dimensions
-For more information about metric dimensions, see [Multi-dimensional metrics](/azure/azure-monitor/platform/data-platform-metrics#multi-dimensional-metrics).
+For more information about metric dimensions, see [Multi-dimensional metrics](/azure/azure-monitor/essentials/data-platform-metrics#multi-dimensional-metrics).
Azure virtual machines and virtual machine scale sets have the following dimensions that are associated with their metrics.
virtual-machines Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Virtual Machines description: Sample Azure Resource Graph queries for Azure Virtual Machines showing use of resource types and tables to access Azure Virtual Machines related resources and properties. Previously updated : 01/20/2022 Last updated : 02/16/2022
virtual-machines Automation Bom Get Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-bom-get-files.md
This guide is for configurations that use either the SAP Application (DB) or HAN
- An SAP account with permissions to download the SAP software and access the Maintenance Planner. - An installation of the [SAP download manager](https://support.sap.com/en/my-support/software-downloads.html) on your computer. - Information about your SAP system:
- - SAP account username and password
+ - SAP account username and password. The SAP account cannot be linked to a SAP Universal ID.
- The SAP system product to deploy (such as **S/4HANA**) - The SAP System Identifier (SAP SID) - Any language pack requirements
virtual-network Virtual Network Test Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-test-latency.md
Run the following commands:
sudo apt-get install -y autotools-dev sudo apt-get install -y automake sudo apt-get install -y autoconf
+ sudo apt-get install -y libtool
``` #### For all distros
virtual-wan Sd Wan Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/sd-wan-connectivity-architecture.md
Previously updated : 10/07/2020 Last updated : 02/23/2022 # SD-WAN connectivity architecture with Azure Virtual WAN
-Azure Virtual WAN is a networking service that brings together many cloud connectivity and security services with a single operational interface. These services include branch (via Site-to-site VPN), remote user (Point-to-site VPN), private (ExpressRoute) connectivity, intra-cloud transitive connectivity for Vnets, VPN and ExpressRoute interconnectivity, routing, Azure Firewall, and encryption for private connectivity.
+Azure Virtual WAN is a networking service that brings together many cloud connectivity and security services with a single operational interface. These services include branch (via Site-to-site VPN), remote user (Point-to-site VPN), private (ExpressRoute) connectivity, intra-cloud transitive connectivity for VNets, VPN and ExpressRoute interconnectivity, routing, Azure Firewall, and encryption for private connectivity.
Although Azure Virtual WAN is a cloud-based SD-WAN that provides a rich suite of Azure first-party connectivity, routing, and security services, Azure Virtual WAN also is designed to enable seamless interconnection with premises-based SD-WAN and SASE technologies and services. Many such services are offered by our [Virtual WAN](virtual-wan-locations-partners.md) ecosystem and Azure Networking Managed Services partners [(MSPs)](../networking/networking-partners-msp.md). Enterprises that are transforming their private WAN to SD-WAN have options when interconnecting their private SD-WAN with Azure Virtual WAN. Enterprises can choose from these options:
-* Direct Interconnect Model
-* Direct Interconnect Model with NVA-in-VWAN-hub
-* Indirect Interconnect Model
-* Managed Hybrid WAN Model using their favorite managed service provider [MSP](../networking/networking-partners-msp.md)
+* Direct Interconnect model
+* Direct Interconnect model with NVA-in-VWAN-hub
+* Indirect Interconnect model
+* Managed Hybrid WAN model using their favorite managed service provider [MSP](../networking/networking-partners-msp.md)
In all of these cases, the interconnection of Virtual WAN with SD-WAN is similar from the connectivity side, but may vary on the orchestration and operational side. ## <a name="direct"></a>Direct Interconnect model In this architecture model, the SD-WAN branch customer-premises equipment (CPE) is directly connected to Virtual WAN hubs via IPsec connections. The branch CPE may also be connected to other branches via the private SD-WAN, or use Virtual WAN for branch to branch connectivity. Branches that need to access their workloads in Azure will be able to directly and securely access Azure via the IPsec tunnel(s) that are terminated in the Virtual WAN hub(s). SD-WAN CPE partners can enable automation in order to automate the normally tedious and error-prone IPsec connectivity from their respective CPE devices. Automation allows the SD-WAN controller to talk to Azure via the Virtual WAN API to configure the Virtual WAN sites, and push necessary IPsec tunnel configuration to the branch CPEs. See [Automation guidelines](virtual-wan-configure-automation-providers.md) for the description of Virtual WAN interconnection automation by various SD-WAN partners.
-The SD-WAN CPE continues to be the place where traffic optimization and path selection is implemented and enforced.
+The SD-WAN CPE continues to be the place where traffic optimization and path selection is implemented and enforced.
In this model, some vendor proprietary traffic optimization based on real-time traffic characteristics may not be supported because the connectivity to Virtual WAN is over IPsec and the IPsec VPN is terminated on the Virtual WAN VPN gateway. For example, dynamic path selection at the branch CPE is feasible due to the branch device exchanging various network packet information with another SD-WAN node, hence identifying the best link to use for various prioritized traffic dynamically at the branch. This feature may be useful in areas where last mile optimization (branch to the closest Microsoft POP) is required. With Virtual WAN, users can get Azure Path Selection, which is policy-based path selection across multiple ISP links from the branch CPE to Virtual WAN VPN gateways. Virtual WAN allows for the setup of multiple links (paths) from the same SD-WAN branch CPE; each link represents a dual tunnel connection from a unique public IP of the SD-WAN CPE to two different instances of Azure Virtual WAN VPN gateway. SD-WAN vendors can implement the most optimal path to Azure, based on traffic policies set by their policy engine on the CPE links. On the Azure end, all connections coming in are treated equally.
-## <a name="direct-nva"></a>Direct Interconnect Model with NVA-in-VWAN-hub
+## <a name="direct-nva"></a>Direct Interconnect model with NVA-in-VWAN-hub
This architecture model supports the deployment of a third-party [Network Virtual Appliance (NVA) directly into the virtual hub](./about-nva-hub.md). Customers can connect their branch CPE to an NVA of the same brand in the virtual hub, and take advantage of proprietary end-to-end SD-WAN capabilities when connecting to Azure workloads.
-Several Virtual WAN Partners have worked to provide an experience that configures the NVA automatically as part of the deployment process. Once the NVA has been provisioned into the virtual hub, any additional configuration that may be required for the NVA must be done via the NVA partners portal or management application. Direct access to the NVA is not available. The NVAs that are available to be deployed directly into the Azure Virtual WAN hub are engineered specifically to be used in the virtual hub. For partners that support NVA in VWAN hub and their deployment guides, please see the [Virtual WAN Partners](virtual-wan-locations-partners.md#partners-with-integrated-virtual-hub-offerings) article.
+Several Virtual WAN Partners have worked to provide an experience that configures the NVA automatically as part of the deployment process. Once the NVA has been provisioned into the virtual hub, any additional configuration that may be required for the NVA must be done via the NVA partners portal or management application. Direct access to the NVA isn't available. The NVAs that are available to be deployed directly into the Azure Virtual WAN hub are engineered specifically to be used in the virtual hub. For partners that support NVA in VWAN hub and their deployment guides, please see the [Virtual WAN Partners](virtual-wan-locations-partners.md#partners-with-integrated-virtual-hub-offerings) article.
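After a partner-driven deployment completes, you can still verify from your own subscription that the NVA resource exists in the hub, even though functional configuration stays in the partner's management portal. A minimal sketch, with placeholder resource names:

```azurecli
# List NVAs deployed into virtual hubs in a resource group (placeholder names).
az network virtual-appliance list --resource-group contoso-rg --output table

# Inspect a specific NVA, including the hub it's attached to and its version.
az network virtual-appliance show \
  --resource-group contoso-rg \
  --name contoso-hub-sdwan-nva
```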
The SD-WAN CPE continues to be the place where traffic optimization and path selection are implemented and enforced. In this model, vendor-proprietary traffic optimization based on real-time traffic characteristics is supported, because the connectivity to Virtual WAN is via the SD-WAN NVA in the hub.

## <a name="indirect"></a>Indirect Interconnect model

In this architecture model, SD-WAN branch CPEs are indirectly connected to Virtual WAN hubs. As the figure shows, an SD-WAN virtual CPE is deployed in an enterprise VNet. This virtual CPE is, in turn, connected to the Virtual WAN hub(s) using IPsec. The virtual CPE serves as an SD-WAN gateway into Azure. Branches that need to access their workloads in Azure can access them via the v-CPE gateway.
Since the connectivity to Azure is via the v-CPE gateway (NVA), all traffic to a
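In this model, the IPsec tunnel endpoint is the virtual CPE's public IP inside Azure rather than a branch device, but the Virtual WAN objects look the same as in the direct model. A minimal sketch with placeholder names and addresses:

```azurecli
# Register the in-VNet virtual CPE as a VPN site; its Azure public IP
# is the tunnel endpoint (placeholder values).
az network vpn-site create \
  --resource-group contoso-rg \
  --name vcpe-gateway-site \
  --virtual-wan contoso-vwan \
  --ip-address <vcpe-public-ip> \
  --address-prefixes 10.50.0.0/16

# The IPsec connection to the hub's VPN gateway is created the same way as
# in the direct model sketch (az network vpn-gateway connection create).
```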
## <a name="hybrid"></a>Managed Hybrid WAN model In this architecture model, enterprises can leverage a managed SD-WAN service offered by a Managed Service Provider (MSP) partner. This model is similar to the direct or indirect models described above. However, in this model, the SD-WAN design, orchestration, and operations are delivered by the SD-WAN Provider. [Azure Networking MSP partners](../networking/networking-partners-msp.md) can use [Azure Lighthouse](https://azure.microsoft.com/services/azure-lighthouse/) to implement the SD-WAN and Virtual WAN service in the enterprise customerΓÇÖs Azure subscription, as well as operate the end-to-end hybrid WAN on behalf of the customer. These MSPs may also be able to implement Azure ExpressRoute into the Virtual WAN and operate it as an end-to-end managed service.
-## Additional Information
+## Additional information
* [Azure Virtual WAN FAQ](virtual-wan-faq.md)
* [Solving Remote Connectivity](work-remotely-support.md)