Updates from: 06/01/2022 01:13:00
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md). +
+## May 2022
+
+### Updated articles
+
+- [Set redirect URLs to b2clogin.com for Azure Active Directory B2C](b2clogin.md)
+- [Enable custom domains for Azure Active Directory B2C](custom-domain.md)
+- [Configure xID with Azure Active Directory B2C for passwordless authentication](partner-xid.md)
+- [UserJourneys](userjourneys.md)
+- [Secure your API used by an API connector in Azure AD B2C](secure-rest-api.md)
+ ## April 2022
+
+ ### New articles
active-directory Tutorial Enable Cloud Sync Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-cloud-sync-sspr-writeback.md
Previously updated : 10/25/2021 Last updated : 05/31/2022
With password writeback enabled in Azure AD Connect cloud sync, now verify, and
To verify and enable password writeback in SSPR, complete the following steps:
-1. Sign into the Azure portal using a global administrator account.
+1. Sign into the Azure portal using a [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) account.
1. Navigate to Azure Active Directory, select **Password reset**, then choose **On-premises integration**.
1. Verify the Azure AD Connect cloud sync agent setup is complete.
1. Set **Write back passwords to your on-premises directory?** to **Yes**.
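These portal steps assume writeback was already turned on at the cloud sync agent itself. That agent-side step uses the same cmdlet the disable instructions later in this article call with the opposite flag; a minimal sketch, assuming `-Enable $true` is the enabling counterpart:

```powershell
# Run on the Azure AD Connect cloud sync server.
Import-Module 'C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\Microsoft.CloudSync.Powershell.dll'
# Prompts for Hybrid Identity Administrator credentials, then enables writeback.
Set-AADCloudSyncPasswordWritebackConfiguration -Enable $true -Credential $(Get-Credential)
```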
To verify and enable password writeback in SSPR, complete the following steps:
If you no longer want to use the SSPR password writeback functionality you have configured as part of this document, complete the following steps:
-1. Sign into the Azure portal using a global administrator account.
+1. Sign into the Azure portal using a [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) account.
1. Search for and select Azure Active Directory, select **Password reset**, then choose **On-premises integration**.
1. Set **Write back passwords to your on-premises directory?** to **No**.
1. Set **Allow users to unlock accounts without resetting their password?** to **No**.
-From your Azure AD Connect cloud sync server, run `Set-AADCloudSyncPasswordWritebackConfiguration` using global administrator credentials to disable password writeback with Azure AD Connect cloud sync.
+From your Azure AD Connect cloud sync server, run `Set-AADCloudSyncPasswordWritebackConfiguration` using Hybrid Identity Administrator credentials to disable password writeback with Azure AD Connect cloud sync.
```powershell
Import-Module 'C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\Microsoft.CloudSync.Powershell.dll'
# Prompts for Hybrid Identity Administrator credentials, then disables writeback.
Set-AADCloudSyncPasswordWritebackConfiguration -Enable $false -Credential $(Get-Credential)
```
active-directory Tutorial Enable Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-sspr-writeback.md
Previously updated : 11/11/2021 Last updated : 05/31/2022
To complete this tutorial, you need the following resources and privileges:
* A working Azure AD tenant with at least an Azure AD Premium P1 or trial license enabled.
  * If needed, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
  * For more information, see [Licensing requirements for Azure AD SSPR](concept-sspr-licensing.md).
-* An account with *global administrator* privileges.
+* An account with [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) privileges.
* Azure AD configured for self-service password reset.
  * If needed, [complete the previous tutorial to enable Azure AD SSPR](tutorial-enable-sspr.md).
* An existing on-premises AD DS environment configured with a current version of Azure AD Connect.
With password writeback enabled in Azure AD Connect, now configure Azure AD SSPR
To enable password writeback in SSPR, complete the following steps:
-1. Sign in to the [Azure portal](https://portal.azure.com) using a global administrator account.
+1. Sign in to the [Azure portal](https://portal.azure.com) using a Hybrid Identity Administrator account.
1. Search for and select **Azure Active Directory**, select **Password reset**, then choose **On-premises integration**.
1. Set the option for **Write back passwords to your on-premises directory?** to *Yes*.
1. Set the option for **Allow users to unlock accounts without resetting their password?** to *Yes*.
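To confirm the on-premises side after these steps, you can query the sync engine on the Azure AD Connect server. A short check, assuming the ADSync module's `Get-ADSyncAADCompanyFeature` cmdlet, whose output includes a `PasswordWriteBack` flag:

```powershell
# Run on the Azure AD Connect server; the ADSync module ships with Azure AD Connect.
Import-Module ADSync
# PasswordWriteBack should read True once writeback is enabled end to end.
Get-ADSyncAADCompanyFeature
```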
active-directory All Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/all-reports.md
+
+ Title: View a list and description of all system reports available in Permissions Management
+description: View a list and description of all system reports available in Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View a list and description of system reports
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some of the information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+Permissions Management has various types of system reports that capture specific sets of data. These reports allow management, auditors, and administrators to:
+
+- Make timely decisions.
+- Analyze trends and system/user performance.
+- Identify trends in data and high-risk areas so that management can address issues more quickly and improve their efficiency.
+
+This article provides you with a list and description of the system reports available in Permissions Management. Depending on the report, you can download it in comma-separated values (**CSV**) format, portable document format (**PDF**), or Microsoft Excel Open XML Spreadsheet (**XLSX**) format.
+
+## Download a system report
+
+1. In the Permissions Management home page, select the **Reports** tab, and then select the **Systems Reports** subtab.
+1. In the **Report Name** column, find the report you want, and then select the down arrow to the right of the report name to download the report.
+
+ Or, from the ellipses **(...)** menu, select **Download**.
+
+ The following message displays: **Successfully Started To Generate On Demand Report.**
++
+## Summary of available system reports
+
+| Report name | Type of the report | File format | Description | Availability | Collated report? |
+|-|-|-|-|-|-|
+| Access Key Entitlements and Usage Report | Summary </p>Detailed | CSV | This report displays: </p> - Access key age, last rotation date, and last usage date availability in the summary report. Use this report to decide when to rotate access keys. </p> - Granted task and Permissions creep index (PCI) score. This report provides supporting information when you want to take action on the keys. | AWS</p>Azure</p>GCP | Yes |
+| All Permissions for Identity | Detailed | CSV | This report lists all the assigned permissions for the selected identities. | Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) | N/A |
+| Group Entitlements and Usage | Summary | CSV | This report tracks all group level entitlements and the permission assignment, PCI. The number of members is also listed as part of this report. | AWS, Azure, or GCP | Yes |
+| Identity Permissions | Summary | CSV | This report tracks any, or specific, task usage per **User**, **Group**, **Role**, or **App**. | AWS, Azure, or GCP | No |
+| NIST 800-53 | Detailed </p>Summary </p>Dashboard | CSV </p>PDF | **Dashboard**: This report helps track the overall progress of the NIST 800-53 benchmark. It lists the percentage passing, overall pass or fail of test control along with the breakup of L1/L2 per Auth system. </p>**Summary**: For each authorized system, this report lists the test control pass or fail per authorized system and the number of resources evaluated for each test control. </p>**Detailed**: This report helps auditors and administrators to track the resource level pass or fail per test control. | AWS, Azure, or GCP | Yes |
+| PCI DSS | Detailed </p>Summary </p>Dashboard | CSV | **Dashboard**: This report helps track the overall progress of the PCI-DSS benchmark. It lists the percentage passing, overall pass or fail of test control along with the breakup of L1/L2 per Auth system. </p>**Summary**: For each authorized system, this report lists the test control pass or fail per authorized system and the number of resources evaluated for each test control. </p>**Detailed**: This report helps auditors and administrators to track the resource level pass or fail per test control. | AWS, Azure, or GCP | Yes |
+| PCI History | Summary | CSV | This report helps track **Monthly PCI History** for each authorized system. It can be used to plot the trend of the PCI. | AWS, Azure, or GCP | Yes |
+| Permissions Analytics Report (PAR) | Summary | PDF | This report helps monitor the **Identity Privilege** related activity across the authorized systems. It captures any Identity permission change. </p>This report has the following main sections: **User Summary**, **Group Summary**, **Role Summary**, and **Delete Task Summary**. </p>The **User Summary** lists the current granted permissions along with high-risk permissions and resources accessed in 1-day, 7-day, or 30-day durations. There are subsections for newly added or deleted users, users with PCI change, and high-risk active/inactive users. </p>The **Group Summary** lists the administrator level groups with the current granted permissions along with high-risk permissions and resources accessed in 1-day, 7-day, or 30-day durations. There are subsections for newly added or deleted groups, groups with PCI change, and high-risk active/inactive groups. </p>The **Role Summary** and the **Group Summary** list similar details. </p>The **Delete Task** summary section lists the number of times the **Delete Task** has been executed in the given period. | AWS, Azure, or GCP | No |
+| Permissions Analytics Report (PAR) | Detailed | CSV | This report lists the different key findings in the selected authorized systems. The key findings include **Super identities**, **Inactive identities**, **Over-provisioned active identities**, **Storage bucket hygiene**, **Access key age (AWS)**, and so on. </p>This report helps administrators to visualize the findings across the organization and make decisions. | AWS, Azure, or GCP | Yes |
+| Role/Policy Details | Summary | CSV | This report captures **Assigned/Unassigned** and **Custom/system policy with used/unused condition** for specific or all AWS accounts. </p>Similar data can be captured for Azure and GCP for assigned and unassigned roles. | AWS, Azure, or GCP | No |
+| User Entitlements and Usage | Detailed </p>Summary | CSV | This report provides a summary and details of **User entitlements and usage**. </p>Data displayed on the **Usage Analytics** screen is downloaded as part of the **Summary** report. </p>Detailed permissions usage per user is listed in the **Detailed** report. | AWS, Azure, or GCP | Yes |
++
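+Since most of these reports download as CSV, you can post-process them with standard tooling. A minimal sketch, assuming a hypothetical downloaded file name and illustrative column headers (the real headers may differ):
+
+```powershell
+# Hypothetical file name; use the name of the report you downloaded.
+$report = Import-Csv -Path '.\AccessKeyEntitlementsAndUsage.csv'
+# Flag keys that haven't been rotated in 90+ days (column names are illustrative).
+$report | Where-Object { [int]$_.'Access Key Age (Days)' -ge 90 } |
+    Select-Object 'Identity', 'Access Key Age (Days)'
+```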
+## Next steps
+
+- For information on how to view system reports in the **Reports** dashboard, see [View system reports in the Reports dashboard](product-reports.md).
+- For information about how to create, view, and share a system report, see [Create, view, and share a system report](report-view-system-report.md).
+- For information about how to create and view a custom report, see [Generate and view a custom report](report-create-custom-report.md).
+- For information about how to create and view the Permissions analytics report, see [Generate and download the Permissions analytics report](product-permissions-analytics-reports.md).
active-directory Cloudknox Onboard Enable Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-enable-tenant.md
- Title: Enable CloudKnox Permissions Management in your organization
-description: How to enable CloudKnox Permissions Management in your organization.
------- Previously updated : 04/20/2022---
-# Enable CloudKnox in your organization
-
-> [!IMPORTANT]
-> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
--
-> [!NOTE]
-> The CloudKnox Permissions Management (CloudKnox) PREVIEW is currently not available for tenants hosted in the European Union (EU).
---
-This article describes how to enable CloudKnox Permissions Management (CloudKnox) in your organization. Once you've enabled CloudKnox, you can connect it to your Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) platforms.
-
-> [!NOTE]
-> To complete this task, you must have *global administrator* permissions as a user in that tenant. You can't enable CloudKnox as a user from another tenant who has signed in via B2B or via Azure Lighthouse.
-
-## Prerequisites
-
-To enable CloudKnox in your organization:
-- You must have an Azure AD tenant. If you don't already have one, [create a free account](https://azure.microsoft.com/free/).
-- You must be eligible for or have an active assignment to the global administrator role as a user in that tenant.
-> [!NOTE]
-> During public preview, CloudKnox doesn't perform a license check.
-
-## View a training video on enabling CloudKnox
-- To view a video on how to enable CloudKnox in your Azure AD tenant, select [Enable CloudKnox in your Azure AD tenant](https://www.youtube.com/watch?v=-fkfeZyevoo).
-- To view a video on how to configure and onboard AWS accounts in CloudKnox, select [Configure and onboard AWS accounts](https://www.youtube.com/watch?v=R6K21wiWYmE).
-- To view a video on how to configure and onboard GCP accounts in CloudKnox, select [Configure and onboard GCP accounts](https://www.youtube.com/watch?app=desktop&v=W3epcOaec28).
-## How to enable CloudKnox on your Azure AD tenant
-
-1. In your browser:
- 1. Go to [Azure services](https://portal.azure.com) and use your credentials to sign in to [Azure Active Directory](https://ms.portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview).
- 1. If you aren't already authenticated, sign in as a global administrator user.
- 1. If needed, activate the global administrator role in your Azure AD tenant.
- 1. In the Azure AD portal, select **Features highlights**, and then select **CloudKnox Permissions Management**.
-
- 1. If you're prompted to select a sign in account, sign in as a global administrator for a specified tenant.
-
- The **Welcome to CloudKnox Permissions Management** screen appears, displaying information on how to enable CloudKnox on your tenant.
-
-1. To provide access to the CloudKnox application, create a service principal.
-
- An Azure service principal is a security identity used by user-created apps, services, and automation tools to access specific Azure resources.
-
- > [!NOTE]
- > To complete this step, you must have Azure CLI or Azure PowerShell on your system, or an Azure subscription where you can run Cloud Shell.
-
- - To create a service principal that points to the CloudKnox application via Cloud Shell:
-
- 1. Copy the script on the **Welcome** screen:
-
- `az ad sp create --id b46c3ac5-9da6-418f-a849-0a07a10b3c6c`
-
- 1. If you have an Azure subscription, return to the Azure AD portal and select **Cloud Shell** on the navigation bar.
- If you don't have an Azure subscription, open a command prompt on a Windows Server.
- 1. If you have an Azure subscription, paste the script into Cloud Shell and press **Enter**.
-
- - For information on how to create a service principal through the Azure portal, see [Create an Azure service principal with the Azure CLI](/cli/azure/create-an-azure-service-principal-azure-cli).
-
- - For information on the **az** command and how to sign in with the no subscriptions flag, see [az login](/cli/azure/reference-index?view=azure-cli-latest#az-login&preserve-view=true).
-
- - For information on how to create a service principal via Azure PowerShell, see [Create an Azure service principal with Azure PowerShell](/powershell/azure/create-azure-service-principal-azureps?view=azps-7.1.0&preserve-view=true).
-
- 1. After the script runs successfully, the service principal attributes for CloudKnox display. Confirm the attributes.
-
- The **Cloud Infrastructure Entitlement Management** application displays in the Azure AD portal under **Enterprise applications**.
-
-1. Return to the **Welcome to CloudKnox** screen and select **Enable CloudKnox Permissions Management**.
-
- You have now completed enabling CloudKnox on your tenant. CloudKnox launches with the **Data Collectors** dashboard.
-
-## Configure data collection settings
-
-Use the **Data Collectors** dashboard in CloudKnox to configure data collection settings for your authorization system.
-
-1. If the **Data Collectors** dashboard isn't displayed when CloudKnox launches:
-
- - In the CloudKnox home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
-
-1. Select the authorization system you want: **AWS**, **Azure**, or **GCP**.
-
-1. For information on how to onboard an AWS account, Azure subscription, or GCP project into CloudKnox, select one of the following articles and follow the instructions:
-
- - [Onboard an AWS account](cloudknox-onboard-aws.md)
- - [Onboard an Azure subscription](cloudknox-onboard-azure.md)
- - [Onboard a GCP project](cloudknox-onboard-gcp.md)
-
-## Next steps
-- For an overview of CloudKnox, see [What's CloudKnox Permissions Management?](cloudknox-overview.md)
-- For a list of frequently asked questions (FAQs) about CloudKnox, see [FAQs](cloudknox-faqs.md).
-- For information on how to start viewing information about your authorization system in CloudKnox, see [View key statistics and data about your authorization system](cloudknox-ui-dashboard.md).
active-directory Cloudknox Product Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-reports.md
- Title: View system reports in the Reports dashboard in CloudKnox Permissions Management
-description: How to view system reports in the Reports dashboard in CloudKnox Permissions Management.
------- Previously updated : 02/23/2022---
-# View system reports in the Reports dashboard
-
-> [!IMPORTANT]
-> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
-
-CloudKnox Permissions Management (CloudKnox) has various types of system reports available that capture specific sets of data. These reports allow management to:
-- Make timely decisions.
-- Analyze trends and system/user performance.
-- Identify trends in data and high-risk areas so that management can address issues more quickly and improve their efficiency.
-## Explore the Reports dashboard
-
-The **Reports** dashboard provides a table of information with both system reports and custom reports. The **Reports** dashboard defaults to the **System Reports** tab, which has the following details:
-- **Report Name**: The name of the report.
-- **Category**: The type of report. For example, **Permission**.
-- **Authorization Systems**: Displays which authorizations the custom report applies to.
-- **Format**: Displays the output format the report can be generated in. For example, comma-separated values (CSV) format, portable document format (PDF), or Microsoft Excel Open XML Spreadsheet (XLSX) format.
- - To download a report, select the down arrow to the right of the report name, or from the ellipses **(...)** menu, select **Download**.
-
- The following message displays across the top of the screen in green if the download is successful: **Successfully Started To Generate On Demand Report**.
-
-## Available system reports
-
-CloudKnox offers the following reports for management associated with the authorization systems noted in parentheses:
--- **Access Key Entitlements And Usage**:
- - **Summary of report**: Provides information about access key, for example, permissions, usage, and rotation date.
- - **Applies to**: Amazon Web Services (AWS) and Microsoft Azure
- - **Report output type**: CSV
- - **Ability to collate report**: Yes
- - **Type of report**: **Summary** or **Detailed**
- - **Use cases**:
 - - The access key age, last rotation date, and last usage date are available in the summary report to help with key rotation.
- - The granted task and Permissions creep index (PCI) score to take action on the keys.
--- **User Entitlements And Usage**:
- - **Summary of report**: Provides information about the identities' permissions, for example, entitlement, usage, and PCI.
- - **Applies to**: AWS, Azure, and Google Cloud Platform (GCP)
- - **Report output type**: CSV
- - **Ability to collate report**: Yes
- - **Type of report**: **Summary** or **Detailed**
- - **Use cases**:
- - The data displayed on the **Usage Analytics** screen is downloaded as part of the **Summary** report. The user's detailed permissions usage is listed in the **Detailed** report.
--- **Group Entitlements And Usage**:
- - **Summary of report**: Provides information about the group's permissions, for example, entitlement, usage, and PCI.
- - **Applies to**: AWS, Azure, and GCP
- - **Report output type**: CSV
- - **Ability to collate report**: Yes
- - **Type of report**: **Summary**
- - **Use cases**:
- - All group level entitlements and permission assignments, PCIs, and the number of members are listed as part of this report.
--- **Identity Permissions**:
- - **Summary of report**: Report on identities that have specific permissions, for example, identities that have permission to delete any S3 buckets.
- - **Applies to**: AWS, Azure, and GCP
- - **Report output type**: CSV
- - **Ability to collate report**: No
- - **Type of report**: **Summary**
- - **Use cases**:
- - Any task usage or specific task usage via User/Group/Role/App can be tracked with this report.
--- **Identity privilege activity report**
- - **Summary of report**: Provides information about permission changes that have occurred in the selected duration.
- - **Applies to**: AWS, Azure, and GCP
- - **Report output type**: PDF
- - **Ability to collate report**: No
- - **Type of report**: **Summary**
- - **Use cases**:
- - Any identity permission change can be captured using this report.
- - The **Identity Privilege Activity** report has the following main sections: **User Summary**, **Group Summary**, **Role Summary**, and **Delete Task Summary**.
- - The **User** summary lists the current granted permissions and high-risk permissions and resources accessed in 1 day, 7 days, or 30 days. There are subsections for newly added or deleted users, users with PCI change, and High-risk active/inactive users.
- - The **Group** summary lists the administrator level groups with the current granted permissions and high-risk permissions and resources accessed in 1 day, 7 days, or 30 days. There are subsections for newly added or deleted groups, groups with PCI change, and High-risk active/inactive groups.
- - The **Role summary** lists similar details as **Group Summary**.
- - The **Delete Task summary** section lists the number of times the **Delete task** has been executed in the given time period.
--- **Permissions Analytics Report**
- - **Summary of report**: Provides information about the violation of key security best practices.
- - **Applies to**: AWS, Azure, and GCP
- - **Report output type**: CSV
- - **Ability to collate report**: Yes
- - **Type of report**: **Detailed**
- - **Use cases**:
- - This report lists the different key findings in the selected auth systems. The key findings include super identities, inactive identities, over provisioned active identities, storage bucket hygiene, and access key age (for AWS only). The report helps administrators to visualize the findings across the organization.
-
- For more information about this report, see [Permissions analytics report](cloudknox-product-permissions-analytics-reports.md).
--- **Role/Policy Details**
- - **Summary of report**: Provides information about roles and policies.
- - **Applies to**: AWS, Azure, GCP
- - **Report output type**: CSV
- - **Ability to collate report**: No
- - **Type of report**: **Summary**
- - **Use cases**:
- - Assigned/Unassigned, custom/system policy, and the used/unused condition is captured in this report for any specific, or all, AWS accounts. Similar data can be captured for Azure/GCP for the assigned/unassigned roles.
--- **PCI History**
- - **Summary of report**: Provides a report of privilege creep index (PCI) history.
- - **Applies to**: AWS, Azure, GCP
- - **Report output type**: CSV
- - **Ability to collate report**: Yes
- - **Type of report**: **Summary**
- - **Use cases**:
- - This report plots the trend of the PCI by displaying the monthly PCI history for each authorization system.
--- **All Permissions for Identity**
- - **Summary of report**: Provides results of all permissions for identities.
- - **Applies to**: AWS, Azure, GCP
- - **Report output type**: CSV
- - **Ability to collate report**: Yes
- - **Type of report**: **Detailed**
- - **Use cases**:
- - This report lists all the assigned permissions for the selected identities.
----
-## Next steps
-- For a detailed overview of available system reports, see [View a list and description of system reports](cloudknox-all-reports.md).
-- For information about how to create, view, and share a system report, see [Create, view, and share a custom report](cloudknox-report-view-system-report.md).
-- For information about how to create and view a custom report, see [Generate and view a custom report](cloudknox-report-create-custom-report.md).
-- For information about how to create and view the Permissions analytics report, see [Generate and download the Permissions analytics report](cloudknox-product-permissions-analytics-reports.md).
active-directory Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/faqs.md
+
+ Title: Frequently asked questions (FAQs) about CloudKnox Permissions Management
+description: Frequently asked questions (FAQs) about CloudKnox Permissions Management.
+++++++ Last updated : 04/20/2022+++
+# Frequently asked questions (FAQs)
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+> [!NOTE]
+> The CloudKnox Permissions Management (CloudKnox) PREVIEW is currently not available for tenants hosted in the European Union (EU).
++
+This article answers frequently asked questions (FAQs) about CloudKnox Permissions Management (CloudKnox).
+
+## What's CloudKnox Permissions Management?
+
+CloudKnox is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into the permissions assigned to all identities (for example, over-privileged workload and user identities), actions, and resources across multi-cloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). CloudKnox detects, automatically right-sizes, and continuously monitors unused and excessive permissions. It deepens the Zero Trust security strategy by augmenting the least privilege access principle.
++
+## What are the prerequisites to use CloudKnox?
+
+CloudKnox supports data collection from AWS, GCP, and/or Microsoft Azure. For data collection and analysis, customers are required to have an Azure Active Directory (Azure AD) account to use CloudKnox.
+
+## Can a customer use CloudKnox if they have other identities with access to their IaaS platform that aren't yet in Azure AD (for example, if part of their business has Okta or AWS Identity & Access Management (IAM))?
+
+Yes, a customer can detect, mitigate, and monitor the risk of 'backdoor' accounts that are local to AWS IAM or GCP, or that come from other identity providers such as Okta.
+
+## Where can customers access CloudKnox?
+
+Customers can access the CloudKnox interface with a link from the Azure AD extension in the Azure portal.
+
+## Can non-cloud customers use CloudKnox on-premises?
+
+No, CloudKnox is a hosted cloud offering.
+
+## Can non-Azure customers use CloudKnox?
+
+Yes, non-Azure customers can use our solution. CloudKnox is a multi-cloud solution so even customers who have no subscription to Azure can benefit from it.
+
+## Is CloudKnox available for tenants hosted in the European Union (EU)?
+
+No, the CloudKnox Permissions Management (CloudKnox) PREVIEW is currently not available for tenants hosted in the European Union (EU).
+
+## If I'm already using Azure AD Privileged Identity Management (PIM) for Azure, what value does CloudKnox provide?
+
+CloudKnox complements Azure AD PIM. Azure AD PIM provides just-in-time access for admin roles in Azure (as well as Microsoft Online Services and apps that use groups), while CloudKnox allows multi-cloud discovery, remediation, and monitoring of privileged access across Azure, AWS, and GCP.
+
+## What languages does CloudKnox support?
+
+CloudKnox currently supports English.
+
+## What public cloud infrastructures are supported by CloudKnox?
+
+CloudKnox currently supports the three major public clouds: Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.
+
+## Does CloudKnox support hybrid environments?
+
+CloudKnox currently doesn't support hybrid environments.
+
+## What types of identities are supported by CloudKnox?
+
+CloudKnox supports user identities (for example, employees, customers, external partners) and workload identities (for example, virtual machines, containers, web apps, serverless functions).
+
+<!-- ## Is CloudKnox General Data Protection Regulation (GDPR) compliant?
+
+CloudKnox is currently not GDPR compliant. -->
+
+## Is CloudKnox available in Government Cloud?
+
+No, CloudKnox is currently not available in Government clouds.
+
+## Is CloudKnox available for sovereign clouds?
+
+No, CloudKnox is currently not available in sovereign clouds.
+
+## How does CloudKnox collect insights about permissions usage?
+
+CloudKnox has a data collector that collects access permissions assigned to various identities, activity logs, and resource metadata. This data provides full visibility into the permissions granted to all identities to access resources, along with details on how those granted permissions are used.
+
+## How does CloudKnox evaluate cloud permissions risk?
+
+CloudKnox offers granular visibility into all identities and their permissions granted versus used, across cloud infrastructures, to uncover any action performed by any identity on any resource. This isn't limited to user identities; it also covers workload identities such as virtual machines, access keys, containers, and scripts. The dashboard gives an overview of the permission profile to locate the riskiest identities and resources.
+
+## What is the Permissions Creep Index?
+
+The Permissions Creep Index (PCI) is a quantitative measure of risk associated with an identity or role determined by comparing permissions granted versus permissions exercised. It allows users to instantly evaluate the level of risk associated with the number of unused or over-provisioned permissions across identities and resources. It measures how much damage identities can cause based on the permissions they have.
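+The exact scoring model isn't published. As a toy illustration of the granted-versus-exercised comparison only (not the product's formula), a creep-style score could be computed like this:
+
+```powershell
+# Toy example: score the share of granted permissions that were never exercised.
+$granted = 250   # permissions assigned to an identity
+$used    = 20    # permissions actually exercised
+$score   = [math]::Round((($granted - $used) / $granted) * 100)
+"Unused-permission score: $score out of 100"   # higher suggests more creep risk
+```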
+
+## How can customers use CloudKnox to delete unused or excessive permissions?
+
+CloudKnox allows users to right-size excessive permissions and automate least privilege policy enforcement with just a few clicks. The solution continuously analyzes historical permission usage data for each identity and gives customers the ability to right-size permissions of that identity to only the permissions that are being used for day-to-day operations. All unused and other risky permissions can be automatically removed.
+
+## How can customers grant permissions on-demand with CloudKnox?
+
+For any break-glass or one-off scenarios where an identity needs to perform a specific set of actions on a set of specific resources, the identity can request those permissions on-demand for a limited period with a self-service workflow. Customers can either use the built-in workflow engine or their IT service management (ITSM) tool. The user experience is the same for any identity type, identity source (local, enterprise directory, or federated) and cloud.
+
+## What is the difference between permissions on-demand and just-in-time access?
+
+Just-in-time (JIT) access is a method used to enforce the principle of least privilege to ensure identities are given the minimum level of permissions to perform the task at hand. Permissions on-demand are a type of JIT access that allows the temporary elevation of permissions, enabling identities to access resources on a by-request, timed basis.
+
+## How can customers monitor permissions usage with CloudKnox?
+
+Customers only need to track the evolution of their Permissions Creep Index to monitor permissions usage. They can do this in the **Analytics** tab of their CloudKnox dashboard, where they can see how the PCI of each identity or resource evolves over time.
+
+## Can customers generate permissions usage reports?
+
+Yes, CloudKnox has various types of system reports available that capture specific data sets. These reports allow customers to:
+- Make timely decisions.
+- Analyze usage trends and system/user performance.
+- Identify high-risk areas.
+
+For information about permissions usage reports, see [Generate and download the Permissions analytics report](product-permissions-analytics-reports.md).
+
+## Does CloudKnox integrate with third-party IT service management (ITSM) tools?
+
+CloudKnox integrates with ServiceNow.
++
+## How is CloudKnox being deployed?
+
+Customers with the Global Administrator role must first onboard CloudKnox on their Azure AD tenant, and then onboard their AWS accounts, GCP projects, and Azure subscriptions. More details about onboarding can be found in our product documentation.
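+Onboarding includes creating a service principal for the CloudKnox application; the onboarding article earlier in this digest does this with `az ad sp create --id b46c3ac5-9da6-418f-a849-0a07a10b3c6c`. A sketch of the same step with Azure PowerShell, assuming the Az.Resources module and that same application ID:
+
+```powershell
+# Requires the Az module and a signed-in Global Administrator session.
+Connect-AzAccount
+# Application ID taken from the onboarding article earlier in this digest.
+New-AzADServicePrincipal -ApplicationId 'b46c3ac5-9da6-418f-a849-0a07a10b3c6c'
+```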
+
+## How long does it take to deploy CloudKnox?
+
+It depends on each customer and how many AWS accounts, GCP projects, and Azure subscriptions they have.
+
+## Once CloudKnox is deployed, how fast can I get permissions insights?
+
+Once fully onboarded with data collection set up, customers can access permissions usage insights within hours. Our machine-learning engine refreshes the Permissions Creep Index every hour so that customers can start their risk assessment right away.
+
+## Is CloudKnox collecting and storing sensitive personal data?
+
+No, CloudKnox doesn't have access to sensitive personal data.
+
+## Where can I find more information about CloudKnox?
+
+You can read our blog and visit our web page. You can also get in touch with your Microsoft point of contact to schedule a demo.
+
+## Resources
+
+- [Public Preview announcement blog](https://www.aka.ms/CloudKnox-Public-Preview-Blog)
+- [CloudKnox Permissions Management web page](https://microsoft.com/security/business/identity-access-management/permissions-management)
+++
+## Next steps
+
+- For an overview of CloudKnox, see [What's CloudKnox Permissions Management?](overview.md).
+- For information on how to onboard CloudKnox in your organization, see [Enable CloudKnox in your organization](onboard-enable-tenant.md).
active-directory How To Add Remove Role Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-add-remove-role-task.md
+
+ Title: Add and remove roles and tasks for groups, users, and service accounts for Microsoft Azure and Google Cloud Platform (GCP) identities in the Remediation dashboard in Permissions Management
+description: How to attach and detach permissions for groups, users, and service accounts for Microsoft Azure and Google Cloud Platform (GCP) identities in the Remediation dashboard in Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Add and remove roles and tasks for Microsoft Azure and Google Cloud Platform (GCP) identities
++
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management (Entra) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how you can add and remove roles and tasks for Microsoft Azure and Google Cloud Platform (GCP) identities using the **Remediation** dashboard.
+
+> [!NOTE]
+> To view the **Remediation** tab, you must have **Viewer**, **Controller**, or **Administrator** permissions. To make changes on this tab, you must have **Controller** or **Administrator** permissions. If you don't have these permissions, contact your system administrator.
+
+## View permissions
+
+1. On the Entra home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Authorization System Type** dropdown, select **Azure** or **GCP**.
+1. From the **Authorization System** dropdown, select the accounts you want to access.
+1. From the **Search For** dropdown, select **Group**, **User**, or **APP**.
+1. To search for more parameters, you can make a selection from the **User States**, **Permission Creep Index**, and **Task Usage** dropdowns.
+1. Select **Apply**.
+ Entra displays a list of groups, users, and service accounts that match your criteria.
+1. In **Enter a username**, enter or select a user.
+1. In **Enter a Group Name**, enter or select a group, then select **Apply**.
+1. Make a selection from the results list.
+
+ The table displays the **Username**, **Domain/Account**, **Source**, **Resource**, and **Current Role**.
++
+## Add a role
+
+1. On the Entra home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Authorization System Type** dropdown, select **Azure** or **GCP**.
+1. From the **Authorization System** dropdown, select the accounts you want to access.
+1. From the **Search For** dropdown, select **Group**, **User**, or **APP/Service Account**, and then select **Apply**.
+1. Make a selection from the results list.
+
+1. To attach a role, select **Add role**.
+1. In the **Add Role** page, from the **Available Roles** list, select the plus sign **(+)** to move the role to the **Selected Roles** list.
+1. When you have finished adding roles, select **Submit**.
+1. When the following message displays: **Are you sure you want to change permission?**, select:
+ - **Generate Script** to generate a script where you can manually add/remove the permissions you selected.
+ - **Execute** to change the permission.
+ - **Close** to cancel the action.
+
+## Remove a role
+
+1. On the Entra home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Authorization System Type** dropdown, select **Azure** or **GCP**.
+1. From the **Authorization System** dropdown, select the accounts you want to access.
+1. From the **Search For** dropdown, select **Group**, **User**, or **APP/Service Account**, and then select **Apply**.
+1. Make a selection from the results list.
+
+1. To remove a role, select **Remove Role**.
+1. In the **Remove Role** page, from the **Available Roles** list, select the plus sign **(+)** to move the role to the **Selected Roles** list.
+1. When you have finished selecting roles, select **Submit**.
+1. When the following message displays: **Are you sure you want to change permission?**, select:
+ - **Generate Script** to generate a script where you can manually add/remove the permissions you selected.
+ - **Execute** to change the permission.
+ - **Close** to cancel the action.
+
+## Add a task
+
+1. On the Permissions Management home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Authorization System Type** dropdown, select **Azure** or **GCP**.
+1. From the **Authorization System** dropdown, select the accounts you want to access.
+1. From the **Search For** dropdown, select **Group**, **User**, or **APP/Service Account**, and then select **Apply**.
+1. Make a selection from the results list.
+
+1. To attach a task, select **Add Tasks**.
+1. In the **Add Tasks** page, from the **Available Tasks** list, select the plus sign **(+)** to move the task to the **Selected Tasks** list.
+1. When you have finished adding tasks, select **Submit**.
+1. When the following message displays: **Are you sure you want to change permission?**, select:
+ - **Generate Script** to generate a script where you can manually add/remove the permissions you selected.
+ - **Execute** to change the permission.
+ - **Close** to cancel the action.
+
+## Remove a task
+
+1. On the Entra home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Authorization System Type** dropdown, select **Azure** or **GCP**.
+1. From the **Authorization System** dropdown, select the accounts you want to access.
+1. From the **Search For** dropdown, select **Group**, **User**, or **APP/Service Account**, and then select **Apply**.
+1. Make a selection from the results list.
+
+1. To remove a task, select **Remove Tasks**.
+1. In the **Remove Tasks** page, from the **Available Tasks** list, select the plus sign **(+)** to move the task to the **Selected Tasks** list.
+1. When you have finished selecting tasks, select **Submit**.
+1. When the following message displays: **Are you sure you want to change permission?**, select:
+ - **Generate Script** to generate a script where you can manually add/remove the permissions you selected.
+ - **Execute** to change the permission.
+ - **Close** to cancel the action.
+
+## Next steps
++
+- For information on how to view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](ui-remediation.md).
+- For information on how to create a role/policy, see [Create a role/policy](how-to-create-role-policy.md).
+- For information on how to clone a role/policy, see [Clone a role/policy](how-to-clone-role-policy.md).
+- For information on how to delete a role/policy, see [Delete a role/policy](how-to-delete-role-policy.md).
+- For information on how to modify a role/policy, see [Modify a role/policy](how-to-modify-role-policy.md).
+- To view information about roles/policies, see [View information about roles/policies](how-to-view-role-policy.md).
+- For information on how to attach and detach permissions for Amazon Web Services (AWS) identities, see [Attach and detach policies for AWS identities](how-to-attach-detach-permissions.md).
+- For information on how to revoke high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](how-to-revoke-task-readonly-status.md).
+- For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](how-to-create-approve-privilege-request.md).
active-directory How To Attach Detach Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-attach-detach-permissions.md
+
+ Title: Attach and detach permissions for users, roles, and groups for Amazon Web Services (AWS) identities in the Remediation dashboard in Permissions Management
+description: How to attach and detach permissions for users, roles, and groups for Amazon Web Services (AWS) identities in the Remediation dashboard in Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Attach and detach policies for Amazon Web Services (AWS) identities
++
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how you can attach and detach permissions for users, roles, and groups for Amazon Web Services (AWS) identities using the **Remediation** dashboard.
+
+> [!NOTE]
+> To view the **Remediation** tab, you must have **Viewer**, **Controller**, or **Administrator** permissions. To make changes on this tab, you must have **Controller** or **Administrator** permissions. If you don't have these permissions, contact your system administrator.
+
+## View permissions
+
+1. On the Permissions Management home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Authorization System Type** dropdown, select **AWS**.
+1. From the **Authorization System** dropdown, select the accounts you want to access.
+1. From the **Search For** dropdown, select **Group**, **User**, or **Role**.
+1. To search for more parameters, you can make a selection from the **User States**, **Permission Creep Index**, and **Task Usage** dropdowns.
+1. Select **Apply**.
+ Permissions Management displays a list of users, roles, or groups that match your criteria.
+1. In **Enter a username**, enter or select a user.
+1. In **Enter a group name**, enter or select a group, then select **Apply**.
+1. Make a selection from the results list.
+
+ The table displays the related **Username**, **Domain/Account**, **Source**, and **Policy Name**.
++
+## Attach policies
+
+1. On the Permissions Management home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Authorization System Type** dropdown, select **AWS**.
+1. In **Enter a username**, enter or select a user.
+1. In **Enter a Group Name**, enter or select a group, then select **Apply**.
+1. Make a selection from the results list.
+1. To attach a policy, select **Attach Policies**.
+1. In the **Attach Policies** page, from the **Available policies** list, select the plus sign **(+)** to move the policy to the **Selected policies** list.
+1. When you have finished adding policies, select **Submit**.
+1. When the following message displays: **Are you sure you want to change permission?**, select:
+ - **Generate Script** to generate a script where you can manually add/remove the permissions you selected.
+ - **Execute** to change the permission.
+ - **Close** to cancel the action.
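+The **Generate Script** option above produces commands you can run yourself instead of letting the service execute the change. Permissions Management's own output isn't reproduced here; as a purely illustrative stand-in, attaching a managed policy to an AWS user with the AWS Tools for PowerShell could look like this (user name and policy ARN are hypothetical):
+
+```powershell
+# Illustrative only; requires the AWS.Tools.IdentityManagement module.
+Install-Module AWS.Tools.IdentityManagement -Scope CurrentUser
+# Attach a managed policy to a user (hypothetical names).
+Register-IAMUserPolicy -UserName 'example-user' -PolicyArn 'arn:aws:iam::aws:policy/ReadOnlyAccess'
+```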
+
+## Detach policies
+
+1. On the Permissions Management home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Authorization System Type** dropdown, select **AWS**.
+1. In **Enter a username**, enter or select a user.
+1. In **Enter a Group Name**, enter or select a group, then select **Apply**.
+1. Make a selection from the results list.
+1. To remove a policy, select **Detach Policies**.
+1. In the **Detach Policies** page, from the **Available policies** list, select the plus sign **(+)** to move the policy to the **Selected policies** list.
+1. When you have finished selecting policies, select **Submit**.
+1. When the following message displays: **Are you sure you want to change permission?**, select:
+ - **Generate Script** to generate a script where you can manually add/remove the permissions you selected.
+ - **Execute** to change the permission.
+ - **Close** to cancel the action.
+
+## Next steps
++
+- For information on how to view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](ui-remediation.md).
+- For information on how to create a role/policy, see [Create a role/policy](how-to-create-role-policy.md).
+- For information on how to clone a role/policy, see [Clone a role/policy](how-to-clone-role-policy.md).
+- For information on how to delete a role/policy, see [Delete a role/policy](how-to-delete-role-policy.md).
+- For information on how to modify a role/policy, see [Modify a role/policy](how-to-modify-role-policy.md).
+- To view information about roles/policies, see [View information about roles/policies](how-to-view-role-policy.md).
+- For information on how to revoke high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](how-to-revoke-task-readonly-status.md).
+- For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](how-to-create-approve-privilege-request.md).
active-directory How To Audit Trail Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-audit-trail-results.md
+
+ Title: Generate an on-demand report from a query in the Audit dashboard in Permissions Management
+description: How to generate an on-demand report from a query in the **Audit** dashboard in Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Generate an on-demand report from a query
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how you can generate an on-demand report from a query in the **Audit** dashboard in Permissions Management. You can:
+
+- Run a report on-demand.
+- Schedule and run a report as often as you want.
+- Share a report with other members of your team and management.
+
+## Generate a custom report on-demand
+
+1. In the Permissions Management home page, select the **Audit** tab.
+
+ Permissions Management displays the query options available to you.
+1. In the **Audit** dashboard, select **Search** to run the query.
+1. Select **Export**.
+
+ Permissions Management generates the report and exports it in comma-separated values (**CSV**) format, portable document format (**PDF**), or Microsoft Excel Open XML Spreadsheet (**XLSX**) format.
+
+<!--
+## Create a schedule to automatically generate and share a report
+
+1. In the **Audit** tab, load the query you want to use to generate your report.
+2. Select **Settings** (the gear icon).
+3. In **Repeat on**, select on which days of the week you want the report to run.
+4. In **Date**, select the date when you want the query to run.
+5. In **hh mm** (time), select the time when you want the query to run.
+6. In **Request file format**, select the file format you want for your report.
+7. In **Share report with people**, enter email addresses for people to whom you want to send the report.
+8. Select **Schedule**.
+
+ Permissions Management generates the report as set in Steps 3 to 6, and emails it to the recipients you specified in Step 7.
++
+## Delete the schedule for a report
+
+1. In the **Audit** tab, load the query whose report schedule you want to delete.
+2. Select the ellipses menu **(…)** on the far right, and then select **Delete schedule**.
+
+ Permissions Management deletes the schedule for running the query. The query itself isn't deleted.
+-->
++
+## Next steps
+
+- For information on how to view how users access information, see [Use queries to see how users access information](ui-audit-trail.md).
+- For information on how to filter and view user activity, see [Filter and query user activity](product-audit-trail.md).
+- For information on how to create a query, see [Create a custom query](how-to-create-custom-queries.md).
active-directory How To Clone Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-clone-role-policy.md
+
+ Title: Clone a role/policy in the Remediation dashboard in Permissions Management
+description: How to clone a role/policy in the Just Enough Permissions (JEP) Controller.
+++++++ Last updated : 02/23/2022+++
+# Clone a role/policy in the Remediation dashboard
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how you can use the **Remediation** dashboard in Permissions Management to clone roles/policies for the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems.
+
+> [!NOTE]
+> To view the **Remediation** tab, you must have **Viewer**, **Controller**, or **Administrator** permissions. To make changes on this tab, you must have **Controller** or **Administrator** permissions. If you don't have these permissions, contact your system administrator.
+
+> [!NOTE]
+> Microsoft Azure uses the term *role* for what other Cloud providers call *policy*. Permissions Management automatically makes this terminology change when you select the authorization system type. In the user documentation, we use *role/policy* to refer to both.
+
+## Clone a role/policy
+
+1. On the Permissions Management home page, select the **Remediation** tab, and then select the **Role/Policies** tab.
+1. Select the role/policy you want to clone, and from the **Actions** column, select **Clone**.
+1. **(AWS Only)** In the **Clone** box, the **Clone Resources** and **Clone Conditions** checkboxes are automatically selected.
+ Deselect the boxes if the resources and conditions are different from what is displayed.
+1. Enter a name for each authorization system that was selected in the **Policy Name** boxes, and then select **Next**.
+
+1. If the data collector hasn't been given controller privileges, the following message displays: **Only online/controller-enabled authorization systems can be submitted for cloning.**
+
+ To clone this role manually, download the script and JSON file.
+
+1. Select **Submit**.
+1. Refresh the **Role/Policies** tab to see the role/policy you cloned.
+
+## Next steps
++
+- For information on how to view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](ui-remediation.md).
+- For information on how to create a role/policy, see [Create a role/policy](how-to-create-role-policy.md).
+- For information on how to delete a role/policy, see [Delete a role/policy](how-to-delete-role-policy.md).
+- For information on how to modify a role/policy, see [Modify a role/policy](how-to-modify-role-policy.md).
+- To view information about roles/policies, see [View information about roles/policies](how-to-view-role-policy.md).
+- For information on how to attach and detach permissions for AWS identities, see [Attach and detach policies for AWS identities](how-to-attach-detach-permissions.md).
+- For information on how to revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](how-to-revoke-task-readonly-status.md)
+- For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](how-to-create-approve-privilege-request.md).
active-directory How To Create Alert Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-alert-trigger.md
+
+ Title: Create and view activity alerts and alert triggers in Permissions Management
+description: How to create and view activity alerts and alert triggers in Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Create and view activity alerts and alert triggers
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how you can create and view activity alerts and alert triggers in Permissions Management.
+
+## Create an activity alert trigger
+
+1. In the Permissions Management home page, select **Activity Triggers** (the bell icon).
+1. In the **Activity** tab, select **Create Activity Trigger**.
+1. In the **Alert Name** box, enter a name for your alert.
+1. In **Authorization System Type**, select your authorization system: Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
+1. In **Authorization System**, select **Is** or **In**, and then select one or more accounts and folders.
+1. From the **Select a Type** dropdown, select: **Access Key ID**, **Identity Tag Key**, **Identity Tag Key Value**, **Resource Name**, **Resource Tag Key**, **Resource Tag Key Value**, **Role Name**, **Role Session Name**, **State**, **Task Name**, or **Username**.
+1. From the **Operator** dropdown, select an option:
+
+    - **Is**/**Is Not**: Select the value field to view a list of all available values. You can either select or enter the required value.
+    - **Contains**/**Not Contains**: Enter any text that the query parameter should or shouldn't contain, for example *Permissions Management*.
+    - **In**/**Not In**: Select the value field to view a list of all available values, and then select all the values you want.
+
+1. To add another parameter, select the plus sign **(+)**, then select an operator, and then enter a value.
+
+ To remove a parameter, select the minus sign **(-)**.
+1. To add another activity type, select **Add**, and then enter your parameters.
+1. To save your alert, select **Save**.
+
+ A message displays to confirm your activity trigger has been created.
+
+ The **Triggers** table in the **Alert Triggers** subtab displays your alert trigger.
+
+## View an activity alert
+
+1. In the Permissions Management home page, select **Activity Triggers** (the bell icon).
+1. In the **Activity** tab, select the **Alerts** subtab.
+1. From the **Alert Name** dropdown, select an alert.
+1. From the **Date** dropdown, select **Last 24 Hours**, **Last 2 Days**, **Last Week**, or **Custom Range**.
+
+ If you select **Custom Range**, select date and time settings, and then select **Apply**.
+1. To view the alert, select **Apply**.
+
+ The **Alerts** table displays information about your alert.
+++
+## View activity alert triggers
+
+1. In the Permissions Management home page, select **Activity triggers** (the bell icon).
+1. In the **Activity** tab, select the **Alert Triggers** subtab.
+1. From the **Status** dropdown, select **All**, **Activated**, or **Deactivated**, and then select **Apply**.
+
+ The **Triggers** table displays the following information:
+
+ - **Alerts**: The name of the alert trigger.
+ - **# of users subscribed**: The number of users who have subscribed to a specific alert trigger.
+
+ - Select a number in this column to view information about the user.
+
+ - **Created By**: The email address of the user who created the alert trigger.
+ - **Modified By**: The email address of the user who last modified the alert trigger.
+ - **Last Updated**: The date and time the alert trigger was last updated.
+ - **Subscription**: A switch that displays if the alert is **On** or **Off**.
+
+ - If the column displays **Off**, the current user isn't subscribed to that alert. Switch the toggle to **On** to subscribe to the alert.
+ - The user who creates an alert trigger is automatically subscribed to the alert, and will receive emails about the alert.
+
+1. To see only activated or only deactivated triggers, from the **Status** dropdown, select **Activated** or **Deactivated**, and then select **Apply**.
+
+1. To view other options available to you, select the ellipses (**...**), and then select from the available options.
+
+ If the **Subscription** is **On**, the following options are available:
+
+    - **Edit**: Enables you to modify alert parameters.
+
+ > [!NOTE]
+ > Only the user who created the alert can perform the following actions: edit the trigger screen, rename an alert, deactivate an alert, and delete an alert. Changes made by other users aren't saved.
+
+ - **Duplicate**: Create a duplicate of the alert called "**Copy of XXX**".
+    - **Rename**: Enter a new name for the alert, and then select **Save**.
+ - **Deactivate**: The alert will still be listed, but will no longer send emails to subscribed users.
+ - **Activate**: Activate the alert trigger and start sending emails to subscribed users.
+ - **Notification Settings**: View the **Email** of users who are subscribed to the alert trigger and their **User Status**.
+ - **Delete**: Delete the alert.
+
+ If the **Subscription** is **Off**, the following options are available:
+ - **View**: View details of the alert trigger.
+ - **Notification Settings**: View the **Email** of users who are subscribed to the alert trigger and their **User Status**.
+ - **Duplicate**: Create a duplicate copy of the selected alert trigger.
++++
+## Next steps
+
+- For an overview on activity triggers, see [View information about activity triggers](ui-triggers.md).
+- For information on rule-based anomalies and anomaly triggers, see [Create and view rule-based anomalies and anomaly triggers](product-rule-based-anomalies.md).
+- For information on finding outliers in an identity's behavior, see [Create and view statistical anomalies and anomaly triggers](product-statistical-anomalies.md).
+- For information on permission analytics triggers, see [Create and view permission analytics triggers](product-permission-analytics.md).
active-directory How To Create Approve Privilege Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-approve-privilege-request.md
+
+ Title: Create or approve a request for permissions in the Remediation dashboard in Permissions Management
+description: How to create or approve a request for permissions in the Remediation dashboard.
+++++++ Last updated : 02/23/2022+++
+# Create or approve a request for permissions
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to create or approve a request for permissions in the **Remediation** dashboard in Permissions Management. You can create and approve requests for the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems.
+
+The **Remediation** dashboard has two privilege-on-demand (POD) workflows you can use:
+- **New Request**: The workflow used by a user to create a request for permissions for a specified duration.
+- **Approver**: The workflow used by an approver to review and approve or reject a user's request for permissions.
++
+> [!NOTE]
+> To view the **Remediation** dashboard, you must have **Viewer**, **Controller**, or **Administrator** permissions. To make changes on this tab, you must have **Controller** or **Administrator** permissions. If you don't have these permissions, contact your system administrator.
+
+## Create a request for permissions
+
+1. On the Permissions Management home page, select the **Remediation** tab, and then select the **My Requests** subtab.
+
+ The **My Requests** subtab displays the following options:
+    - **Pending**: A list of requests you've made that haven't yet been reviewed.
+ - **Approved**: A list of requests that have been reviewed and approved by the approver. These requests have either already been activated or are in the process of being activated.
+ - **Processed**: A summary of the requests you've created that have been approved (**Done**), **Rejected**, and requests that have been **Canceled**.
+
+1. To create a request for permissions, select **New Request**.
+1. In the **Roles/Tasks** page:
+ 1. From the **Authorization System Type** dropdown, select the authorization system type you want to access: **AWS**, **Azure** or **GCP**.
+ 1. From the **Authorization System** dropdown, select the accounts you want to access.
+ 1. From the **Identity** dropdown, select the identity on whose behalf you're requesting access.
+
+    - If the identity you select is a Security Assertion Markup Language (SAML) user, select the role the user assumes in **Role**. (A SAML user accesses the system by assuming a role.)
+
+ - If the identity you select is a local user, to select the policies you want:
+ 1. Select **Request Policy(s)**.
+ 1. In **Available Policies**, select the policies you want.
+ 1. To select a specific policy, select the plus sign, and then find and select the policy you want.
+
+ The policies you've selected appear in the **Selected policies** box.
+
+ - If the identity you select is a local user, to select the tasks you want:
+ 1. Select **Request Task(s)**.
+ 1. In **Available Tasks**, select the tasks you want.
+ 1. To select a specific task, select the plus sign, and then select the task you want.
+
+ The tasks you've selected appear in the **Selected Tasks** box.
+
+ If the user already has existing policies, they're displayed in **Existing Policies**.
+1. Select **Next**.
+
+1. If you selected **AWS**, the **Scope** page appears.
+
+ 1. In **Select Scope**, select:
+ - **All Resources**
+ - **Specific Resources**, and then select the resources you want.
+ - **No Resources**
+ 1. In **Request Conditions**:
+        1. Select **JSON** to add a JSON block of code (an example condition block follows these steps).
+ 1. Select **Done** to accept the code you've entered, or **Clear** to delete what you've entered and start again.
+ 1. In **Effect**, select **Allow** or **Deny.**
+ 1. Select **Next**.
+
+1. The **Confirmation** page appears.
+1. In **Request Summary**, enter a summary for your request.
+1. Optional: In **Note**, enter a note for the approver.
+1. In **Schedule**, select when (how quickly) you want your request to be processed:
+ - **ASAP**
+ - **Once**
+ - In **Create Schedule**, select the **Frequency**, **Date**, **Time**, and **For** the required duration, then select **Schedule**.
+ - **Daily**
+ - **Weekly**
+ - **Monthly**
+1. Select **Submit**.
+
+ The following message appears: **Your Request Has Been Successfully Submitted.**
+
+ The request you submitted is now listed in **Pending Requests**.
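+
+For AWS requests, the JSON you add under **Request Conditions** is a condition block for the resulting policy. A minimal sketch, assuming standard AWS IAM condition syntax; the IP range shown is a documentation placeholder:
+
+```json
+{
+  "Condition": {
+    "IpAddress": {
+      "aws:SourceIp": "203.0.113.0/24"
+    }
+  }
+}
+```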
+
+## Approve or reject a request for permissions
+
+1. On the Permissions Management home page, select the **Remediation** tab, and then select the **My Requests** subtab.
+1. To view a list of requests that haven't yet been reviewed, select **Pending Requests**.
+1. In the **Request Summary** list, select the ellipses **(…)** menu on the right of a request, and then select:
+
+ - **Details** to view the details of the request.
+ - **Approve** to approve the request.
+ - **Reject** to reject the request.
+
+1. Optional: Add a note to the requestor, and then select **Confirm**.
+
+ The **Approved** subtab displays a list of requests that have been reviewed and approved by the approver. These requests have either already been activated or are in the process of being activated.
+ The **Processed** subtab displays a summary of the requests that have been approved or rejected, and requests that have been canceled.
++
+## Next steps
++
+- For information on how to view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](ui-remediation.md).
+- For information on how to create a role/policy, see [Create a role/policy](how-to-create-role-policy.md).
+- For information on how to clone a role/policy, see [Clone a role/policy](how-to-clone-role-policy.md).
+- For information on how to delete a role/policy, see [Delete a role/policy](how-to-delete-role-policy.md).
+- For information on how to modify a role/policy, see [Modify a role/policy](how-to-modify-role-policy.md).
+- To view information about roles/policies, see [View information about roles/policies](how-to-view-role-policy.md).
+- For information on how to attach and detach permissions for Amazon Web Services (AWS) identities, see [Attach and detach policies for AWS identities](how-to-attach-detach-permissions.md).
+- For information on how to add and remove roles and tasks for Microsoft Azure and Google Cloud Platform (GCP) identities, see [Add and remove roles and tasks for Azure and GCP identities](how-to-attach-detach-permissions.md).
+- For information on how to revoke high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](how-to-revoke-task-readonly-status.md).
active-directory How To Create Custom Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-custom-queries.md
+
+ Title: Create a custom query in Permissions Management
+description: How to create a custom query in the Audit dashboard in Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Create a custom query
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how you can use the **Audit** dashboard in Permissions Management to create custom queries that you can modify, save, and run as often as you want.
+
+## Open the Audit dashboard
+
+- In the Permissions Management home page, select the **Audit** tab.
+
+ Permissions Management displays the query options available to you.
+
+## Create a custom query
+
+1. In the **Audit** dashboard, in the **New Query** subtab, select **Authorization System Type**, and then select the authorization systems you want to search: Amazon Web Services (**AWS**), Microsoft **Azure**, Google Cloud Platform (**GCP**), or Platform (**Platform**).
+1. Select the authorization systems you want to search from the **List** and **Folders** box, and then select **Apply**.
+
+1. In the **New Query** box, enter your query parameters, and then select **Add**.
+ For example, to query by a date, select **Date** in the first box. In the second and third boxes, select the down arrow, and then select one of the date-related options.
+
+1. To add parameters, select **Add**, and then select the down arrow in the first box to display a dropdown of available selections. Select the parameter you want.
+1. To add more parameters to the same query, select **Add** (the plus sign), and from the first box, select **And** or **Or**.
+
+ Repeat this step for the second and third box to complete entering the parameters.
+1. To change your query as you're creating it, select **Edit** (the pencil icon), and then change the query parameters.
+1. To change the parameter options, select the down arrow in each box to display a dropdown of available selections. Then select the option you want.
+1. To discard your selections, select **Reset Query** for the parameter you want to change, and then make your selections again.
+1. When you're ready to run your query, select **Search**.
+1. To save the query, select **Save**.
+
+ Permissions Management saves the query and adds it to the **Saved Queries** list.
+
+## Save the query under a new name
+
+1. In the **Audit** dashboard, select the ellipses menu **(…)** on the far right, and then select **Save As**.
+2. Enter a new name for the query, and then select **Save**.
+
+ Permissions Management saves the query under the new name. Both the new query and the original query display in the **Saved Queries** list.
+
+## View a saved query
+
+1. In the **Audit** dashboard, select the down arrow next to **Saved Queries**.
+
+ A list of saved queries appears.
+2. Select the query you want to open.
+3. To open the query with the authorization systems you saved with the query, select **Load with the saved authorization systems**.
+4. To open the query with the authorization systems you have currently selected (which may be different from the ones you originally saved), select **Load with the currently selected authorization systems**.
+5. Select **Load Queries**.
+
+ Permissions Management displays details of the query in the **Activity** table. Select a query to see its details:
+
+ - The **Identity Details**.
+ - The **Domain** name.
+ - The **Resource Name** and **Resource Type**.
+ - The **Task Name**.
+ - The **Date**.
+ - The **IP Address**.
+ - The **Authorization System**.
+
+## View a raw events summary
+
+1. In the **Audit** dashboard, select **View** (the eye icon) to open the **Raw Events Summary** box.
+
+ The **Raw Events Summary** box displays **Username or Role Session Name**, the **Task name**, and the script for your query.
+1. Select **Copy** to copy the script.
+1. Select **X** to close the **Raw events summary** box.
++
+## Run a saved query
+
+1. In the **Audit** dashboard, select the query you want to run.
+
+ Permissions Management displays the results of the query in the **Activity** table.
+
+## Delete a query
+
+1. In the **Audit** dashboard, load the query you want to delete.
+2. Select **Delete**.
+
+ Permissions Management deletes the query. Deleted queries don't display in the **Saved Queries** list.
+
+## Rename a query
+
+1. In the **Audit** dashboard, load the query you want to rename.
+2. Select the ellipses menu **(…)** on the far right, and select **Rename**.
+3. Enter a new name for the query, and then select **Save**.
+
+ Permissions Management saves the query under the new name. Both the new query and the original query display in the **Saved Queries** list.
+
+## Duplicate a query
+
+1. In the **Audit** dashboard, load the query you want to duplicate.
+2. Select the ellipses menu **(…)** on the far right, and then select **Duplicate**.
+
+    Permissions Management creates a copy of the query. Both the copy of the query and the original query display in the **Saved Queries** list.
+
+ You can rename the original or copy of the query, change it, and save it without changing the other query.
+++
+## Next steps
+
+- For information on how to view how users access information, see [Use queries to see how users access information](ui-audit-trail.md).
+- For information on how to filter and view user activity, see [Filter and query user activity](product-audit-trail.md).
+- For information on how to generate an on-demand report from a query, see [Generate an on-demand report from a query](how-to-audit-trail-results.md).
active-directory How To Create Group Based Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-group-based-permissions.md
+
+ Title: Select group-based permissions settings in Permissions Management with the User management dashboard
+description: How to select group-based permissions settings in Permissions Management with the User management dashboard.
+++++++ Last updated : 02/23/2022+++
+# Select group-based permissions settings
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how you can create and manage group-based permissions in Permissions Management with the User management dashboard.
+
+> [!NOTE]
+> The Permissions Management Administrator for all authorization systems can create new group-based permissions.
+
+## Select administrative permissions settings for a group
+
+1. To display the **User Management** dashboard, select **User** (your initials) in the upper right of the screen, and then select **User Management**.
+1. Select the **Groups** tab, and then select **Create Permission** in the upper right of the table.
+1. In the **Set Group Permission** box, begin typing the name of an **Azure Active Directory Security Group** in your tenant.
+
+1. Select the permission setting you want:
+ - **Admin for all Authorization System Types** provides **View**, **Control**, and **Approve** permissions for all authorization system types.
+ - **Admin for selected Authorization System Types** provides **View**, **Control**, and **Approve** permissions for selected authorization system types.
+ - **Custom** allows you to set **View**, **Control**, and **Approve** permissions for the authorization system types that you select.
+1. Select **Next**.
+
+1. If you selected **Admin for all Authorization System Types**:
+    - Select the identities for each authorization system that you want members of this group to be able to request on.
+
+1. If you selected **Admin for selected Authorization System Types**:
+    - Select **Viewer**, **Controller**, or **Approver** for the **Authorization System Types** you want.
+    - Select **Next**, and then select the identities for each authorization system that you want members of this group to be able to request on.
+
+1. If you selected **Custom**:
+    - Select the **Authorization System Types** you want.
+    - Select **Viewer**, **Controller**, or **Approver** for the **Authorization Systems** you want.
+    - Select **Next**, and then select the identities for each authorization system that you want members of this group to be able to request on.
+
+1. Select **Save**. The following message appears: **New Group Has Been Created Successfully.**
+1. To see the group you created in the **Groups** table, refresh the page.
+
+## Next steps
+
+- For information about how to manage user information, see [Manage users and groups with the User management dashboard](ui-user-management.md).
+- For information about how to view information about active and completed tasks, see [View information about active and completed tasks](ui-tasks.md).
+- For information about how to view personal and organization information, see [View personal and organization information](product-account-settings.md).
active-directory How To Create Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-role-policy.md
+
+ Title: Create a role/policy in the Remediation dashboard in Permissions Management
+description: How to create a role/policy in the Remediation dashboard in Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Create a role/policy in the Remediation dashboard
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how you can use the **Remediation** dashboard in Permissions Management to create roles/policies for the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems.
+
+> [!NOTE]
+> To view the **Remediation** tab, you must have **Viewer**, **Controller**, or **Administrator** permissions. To make changes on this tab, you must have **Controller** or **Administrator** permissions. If you don't have these permissions, contact your system administrator.
+
+> [!NOTE]
+> Microsoft Azure uses the term *role* for what other cloud providers call *policy*. Permissions Management automatically makes this terminology change when you select the authorization system type. In the user documentation, we use *role/policy* to refer to both.
+
+## Create a policy for AWS
+
+1. On the Permissions Management home page, select the **Remediation** tab, and then select the **Role/Policies** tab.
+1. Use the dropdown lists to select the **Authorization System Type** and **Authorization System**.
+1. Select **Create Policy**.
+1. On the **Details** page, the **Authorization System Type** and **Authorization System** are pre-populated from your previous settings.
+ - To change the settings, make a selection from the dropdown.
+1. Under **How Would You Like To Create The Policy**, select the required option:
+
+ - **Activity of User(s)**: Allows you to create a policy based on user activity.
+ - **Activity of Group(s)**: Allows you to create a policy based on the aggregated activity of all the users belonging to the group(s).
+ - **Activity of Resource(s)**: Allows you to create a policy based on the activity of a resource, for example, an EC2 instance.
+ - **Activity of Role**: Allows you to create a policy based on the aggregated activity of all the users that assumed the role.
+ - **Activity of Tag(s)**: Allows you to create a policy based on the aggregated activity of all the tags.
+ - **Activity of Lambda Function**: Allows you to create a new policy based on the Lambda function.
+ - **From Existing Policy**: Allows you to create a new policy based on an existing policy.
+ - **New Policy**: Allows you to create a new policy from scratch.
+1. In **Tasks performed in last**, select the duration: **90 days**, **60 days**, **30 days**, **7 days**, or **1 day**.
+1. Depending on your preference, select or deselect **Include Access Advisor data**.
+1. In **Settings**, from the **Available** column, select the plus sign **(+)** to move the identity into the **Selected** column, and then select **Next**.
+
+1. On the **Tasks** page, from the **Available** column, select the plus sign **(+)** to move the task into the **Selected** column.
+ - To add a whole category, select a category.
+ - To add individual items from a category, select the down arrow on the left of the category name, and then select individual items.
+1. In **Resources**, select **All Resources** or **Specific Resources**.
+
+ If you select **Specific Resources**, a list of available resources appears. Find the resources you want to add, and then select **Add**.
+1. In **Request Conditions**, select **JSON**.
+1. In **Effect**, select **Allow** or **Deny**, and then select **Next**.
+1. In **Policy name:**, enter a name for your policy.
+1. To add another statement to your policy, select **Add Statement**, and then, from the list of **Statements**, select a statement.
+1. Review your **Task**, **Resources**, **Request Conditions**, and **Effect** settings, and then select **Next**.
++
+1. On the **Preview** page, review the script to confirm it's what you want.
+1. If your controller isn't enabled, select **Download JSON** or **Download Script** to download the code and run it yourself. (A sample policy document follows these steps.)
+
+ If your controller is enabled, skip this step.
+1. Select **Split Policy**, and then select **Submit**.
+
+    A message confirms that your policy has been submitted for creation.
+
+1. The [**Permissions Management Tasks**](ui-tasks.md) pane appears on the right.
+ - The **Active** tab displays a list of the policies Permissions Management is currently processing.
+ - The **Completed** tab displays a list of the policies Permissions Management has completed.
+1. Refresh the **Role/Policies** tab to see the policy you created.
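+
+If you downloaded the JSON rather than letting the controller apply it, the file is an AWS IAM policy document built from the tasks, resources, request conditions, and effect you selected. A minimal sketch of the shape to expect; the action and resource values are placeholders, not output from Permissions Management:
+
+```json
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Action": [
+        "s3:GetObject"
+      ],
+      "Resource": "arn:aws:s3:::example-bucket/*"
+    }
+  ]
+}
+```
+
+A file in this shape can be applied manually with `aws iam create-policy --policy-name <name> --policy-document file://policy.json`.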
+++
+## Create a role for Azure
+
+1. On the Permissions Management home page, select the **Remediation** tab, and then select the **Role/Policies** tab.
+1. Use the dropdown lists to select the **Authorization System Type** and **Authorization System**.
+1. Select **Create Role**.
+1. On the **Details** page, the **Authorization System Type** and **Authorization System** are pre-populated from your previous settings.
+ - To change the settings, select the box and make a selection from the dropdown.
+1. Under **How Would You Like To Create The Role?**, select the required option:
+
+ - **Activity of User(s)**: Allows you to create a role based on user activity.
+ - **Activity of Group(s)**: Allows you to create a role based on the aggregated activity of all the users belonging to the group(s).
+ - **Activity of App(s)**: Allows you to create a role based on the aggregated activity of all apps.
+ - **From Existing Role**: Allows you to create a new role based on an existing role.
+ - **New Role**: Allows you to create a new role from scratch.
+
+1. In **Tasks performed in last**, select the duration: **90 days**, **60 days**, **30 days**, **7 days**, or **1 day**.
+1. Depending on your preference:
+ - Select or deselect **Ignore Non-Microsoft Read Actions**.
+ - Select or deselect **Include Read-Only Tasks**.
+1. In **Settings**, from the **Available** column, select the plus sign **(+)** to move the identity into the **Selected** column, and then select **Next**.
+
+1. On the **Tasks** page, in **Role name:**, enter a name for your role.
+1. From the **Available** column, select the plus sign **(+)** to move the task into the **Selected** column.
+ - To add a whole category, select a category.
+ - To add individual items from a category, select the down arrow on the left of the category name, and then select individual items.
+1. Select **Next**.
+
+1. On the **Preview** page, review:
+ - The list of selected **Actions** and **Not Actions**.
+    - The **JSON** or **Script** to confirm it's what you want. (A sample role definition follows these steps.)
+1. If your controller isn't enabled, select **Download JSON** or **Download Script** to download the code and run it yourself.
+
+ If your controller is enabled, skip this step.
+
+1. Select **Submit**.
+
+    A message confirms that your role has been submitted for creation.
+
+1. The [**Permissions Management Tasks**](ui-tasks.md) pane appears on the right.
+ - The **Active** tab displays a list of the policies Permissions Management is currently processing.
+ - The **Completed** tab displays a list of the policies Permissions Management has completed.
+1. Refresh the **Role/Policies** tab to see the role you created.
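+
+If you downloaded the JSON rather than letting the controller apply it, the file is an Azure custom role definition containing the **Actions** and **Not Actions** you reviewed on the **Preview** page. A minimal sketch of the general shape; the name, action, and subscription ID are placeholders:
+
+```json
+{
+  "Name": "Example Storage Reader",
+  "IsCustom": true,
+  "Description": "Hypothetical custom role built from observed activity.",
+  "Actions": [
+    "Microsoft.Storage/storageAccounts/read"
+  ],
+  "NotActions": [],
+  "AssignableScopes": [
+    "/subscriptions/00000000-0000-0000-0000-000000000000"
+  ]
+}
+```
+
+A role definition in this shape can be created manually with `az role definition create --role-definition @role.json`.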
+
+## Create a role for GCP
+
+1. On the Permissions Management home page, select the **Remediation** tab, and then select the **Role/Policies** tab.
+1. Use the dropdown lists to select the **Authorization System Type** and **Authorization System**.
+1. Select **Create Role**.
+1. On the **Details** page, the **Authorization System Type** and **Authorization System** are pre-populated from your previous settings.
+ - To change the settings, select the box and make a selection from the dropdown.
+1. Under **How Would You Like To Create The Role?**, select the required option:
+
+ - **Activity of User(s)**: Allows you to create a role based on user activity.
+ - **Activity of Group(s)**: Allows you to create a role based on the aggregated activity of all the users belonging to the group(s).
+ - **Activity of Service Account(s)**: Allows you to create a role based on the aggregated activity of all service accounts.
+ - **From Existing Role**: Allows you to create a new role based on an existing role.
+ - **New Role**: Allows you to create a new role from scratch.
+
+1. In **Tasks performed in last**, select the duration: **90 days**, **60 days**, **30 days**, **7 days**, or **1 day**.
+1. If you selected **Activity Of Service Account(s)** in the previous step, select or deselect **Collect activity across all GCP Authorization Systems.**
+1. From the **Available** column, select the plus sign **(+)** to move the identity into the **Selected** column, and then select **Next**.
++
+1. On the **Tasks** page, in **Role name:**, enter a name for your role.
+1. From the **Available** column, select the plus sign **(+)** to move the task into the **Selected** column.
+ - To add a whole category, select a category.
+ - To add individual items from a category, select the down arrow on the left of the category name, and then select individual items.
+1. Select **Next**.
+
+1. On the **Preview** page, review:
+ - The list of selected **Actions**.
+    - The **YAML** or **Script** to confirm it's what you want. (A sample role definition follows these steps.)
+1. If your controller isn't enabled, select **Download YAML** or **Download Script** to download the code and run it yourself.
+1. Select **Submit**.
+    A message confirms that your role has been submitted for creation.
+
+1. The [**Permissions Management Tasks**](ui-tasks.md) pane appears on the right.
+
+ - The **Active** tab displays a list of the policies Permissions Management is currently processing.
+ - The **Completed** tab displays a list of the policies Permissions Management has completed.
+1. Refresh the **Role/Policies** tab to see the role you created.
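+
+If you downloaded the YAML rather than letting the controller apply it, the file is a GCP custom role definition listing the permissions you selected. A minimal sketch of the general shape; the title and permissions are placeholders:
+
+```yaml
+# Hypothetical example of a GCP custom role definition.
+# The title and includedPermissions values are placeholders.
+title: Example Compute Viewer
+description: Hypothetical custom role built from observed activity.
+stage: GA
+includedPermissions:
+- compute.instances.get
+- compute.instances.list
+```
+
+A role in this shape can be created manually with `gcloud iam roles create exampleComputeViewer --project=<project-id> --file=role.yaml`.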
++
+## Next steps
+
+- For information on how to view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](ui-remediation.md).
+- For information on how to modify a role/policy, see [Modify a role/policy](how-to-modify-role-policy.md).
+- For information on how to clone a role/policy, see [Clone a role/policy](how-to-clone-role-policy.md).
+- For information on how to delete a role/policy, see [Delete a role/policy](how-to-delete-role-policy.md).
+- To view information about roles/policies, see [View information about roles/policies](how-to-view-role-policy.md).
+- For information on how to attach and detach permissions for AWS identities, see [Attach and detach policies for AWS identities](how-to-attach-detach-permissions.md).
+- For information on how to revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](how-to-revoke-task-readonly-status.md).
+- For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](how-to-create-approve-privilege-request.md).
active-directory How To Create Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-rule.md
+
+ Title: Create a rule in the Autopilot dashboard in Permissions Management
+description: How to create a rule in the Autopilot dashboard in Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Create a rule in the Autopilot dashboard
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to create a rule in the Permissions Management **Autopilot** dashboard.
+
+> [!NOTE]
+> Only users with **Administrator** permissions can view and make changes on the Autopilot tab. If you don't have these permissions, contact your system administrator.
+
+## Create a rule
+
+1. In the Permissions Management home page, select the **Autopilot** tab.
+1. In the **Autopilot** dashboard, from the **Authorization system types** dropdown, select Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
+1. From the **Authorization System** dropdown, in the **List** and **Folders** box, select the account and folder names that you want, and then select **Apply**.
+1. In the **Autopilot** dashboard, select **New Rule**.
+1. In the **Rule Name** box, enter a name for your rule.
+1. Select **AWS**, **Azure**, or **GCP**, and then select **Next**.
+
+1. Select **Authorization Systems**, and then select **All** or the account names that you want.
+1. From the **Folders** dropdown, select a folder, and then select **Apply**.
+
+ To change your folder settings, select **Reset**.
+
+ - The **Status** column displays if the authorization system is **Online** or **Offline**.
+ - The **Controller** column displays if the controller is **Enabled** or **Not Enabled**.
++
+1. Select **Configure**, and then select the following parameters for your rule:
+
+ - **Role Created On Is**: Select the duration in days.
+ - **Role Last Used On Is**: Select the duration in days when the role was last used.
+ - **Cross Account Role**: Select **True** or **False**.
+
+1. Select **Mode**, and then, if you want recommendations to be generated and applied manually, select **On-Demand**.
+1. Select **Save**.
+
+ The following information displays in the **Autopilot Rules** table:
+
+ - **Rule Name**: The name of the rule.
+    - **State**: The status of the rule: idle (not being used) or active (being used).
+ - **Rule Type**: The type of rule being applied.
+ - **Mode**: The status of the mode: on-demand or not.
+ - **Last Generated**: The date and time the rule was last generated.
+ - **Created By**: The email address of the user who created the rule.
+ - **Last Modified On**: The date and time the rule was last modified.
+ - **Subscription**: Provides an **On** or **Off** switch that allows you to receive email notifications when recommendations have been generated, applied, or unapplied.
++++
+## Next steps
+
+- For more information about viewing rules, see [View roles in the Autopilot dashboard](ui-autopilot.md).
+- For information about generating, viewing, and applying rule recommendations for rules, see [Generate, view, and apply rule recommendations for rules](how-to-recommendations-rule.md).
+- For information about notification settings for rules, see [View notification settings for a rule](how-to-notifications-rule.md).
active-directory How To Delete Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-delete-role-policy.md
+
+ Title: Delete a role/policy in the Remediation dashboard in Permissions Management
+description: How to delete a role/policy in the Just Enough Permissions (JEP) Controller.
+++++++ Last updated : 02/23/2022+++
+# Delete a role/policy in the Remediation dashboard
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how you can use the **Remediation** dashboard in Permissions Management to delete roles/policies for the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems.
+
+> [!NOTE]
+> To view the **Remediation** dashboard, you must have **Viewer**, **Controller**, or **Administrator** permissions. To make changes on this tab, you must have **Controller** or **Administrator** permissions. If you don't have these permissions, contact your system administrator.
+
+> [!NOTE]
+> Microsoft Azure uses the term *role* for what other Cloud providers call *policy*. Permissions Management automatically makes this terminology change when you select the authorization system type. In the user documentation, we use *role/policy* to refer to both.
+
+## Delete a role/policy
+
+1. On the Permissions Management home page, select the **Remediation** tab, and then select the **Role/Policies** subtab.
+1. Select the role/policy you want to delete, and from the **Actions** column, select **Delete**.
+
+ You can only delete a role/policy if it isn't assigned to an identity.
+
+ You can't delete system roles/policies.
+
+1. On the **Preview** page, review the role/policy information to make sure you want to delete it, and then select **Submit**.
+
+## Next steps
++
+- For information on how to view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](ui-remediation.md).
+- For information on how to create a role/policy, see [Create a role/policy](how-to-create-role-policy.md).
+- For information on how to clone a role/policy, see [Clone a role/policy](how-to-clone-role-policy.md).
+- For information on how to modify a role/policy, see [Modify a role/policy](how-to-modify-role-policy.md).
+- To view information about roles/policies, see [View information about roles/policies](how-to-view-role-policy.md).
+- For information on how to attach and detach permissions for AWS identities, see [Attach and detach policies for AWS identities](how-to-attach-detach-permissions.md).
+- For information on how to revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](how-to-revoke-task-readonly-status.md).
+- For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](how-to-create-approve-privilege-request.md).
active-directory How To Modify Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-modify-role-policy.md
+
+ Title: Modify a role/policy in the Remediation dashboard in Permissions Management
+description: How to modify a role/policy in the Remediation dashboard in Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Modify a role/policy in the Remediation dashboard
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how you can use the **Remediation** dashboard in Permissions Management to modify roles/policies for the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems.
+
+> [!NOTE]
+> To view the **Remediation** tab, you must have **Viewer**, **Controller**, or **Administrator** permissions. To make changes on this tab, you must have **Controller** or **Administrator** permissions. If you don't have these permissions, contact your system administrator.
+
+> [!NOTE]
+> Microsoft Azure uses the term *role* for what other cloud providers call *policy*. Permissions Management automatically makes this terminology change when you select the authorization system type. In the user documentation, we use *role/policy* to refer to both.
+
+## Modify a role/policy
+
+1. On the Permissions Management home page, select the **Remediation** tab, and then select the **Role/Policies** tab.
+1. Select the role/policy you want to modify, and from the **Actions** column, select **Modify**.
+
+ You can't modify **System** policies and roles.
+
+1. On the **Statements** page, make your changes to the **Tasks**, **Resources**, **Request conditions**, and **Effect** sections as required, and then select **Next**.
+
+1. Review the changes to the JSON or script on the **Preview** page, and then select **Submit**.
+
+## Next steps
+
+- For information on how to view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](ui-remediation.md).
+- For information on how to create a role/policy, see [Create a role/policy](how-to-create-role-policy.md).
+- For information on how to clone a role/policy, see [Clone a role/policy](how-to-clone-role-policy.md).
+- For information on how to delete a role/policy, see [Delete a role/policy](how-to-delete-role-policy.md).
+- To view information about roles/policies, see [View information about roles/policies](how-to-view-role-policy.md).
+- For information on how to attach and detach permissions for AWS identities, see [Attach and detach policies for AWS identities](how-to-attach-detach-permissions.md).
+- For information on how to revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](how-to-revoke-task-readonly-status.md).
+- For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](how-to-create-approve-privilege-request.md).
active-directory How To Notifications Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-notifications-rule.md
+
+ Title: View notification settings for a rule in the Autopilot dashboard in Permissions Management
+description: How to view notification settings for a rule in the Autopilot dashboard in Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View notification settings for a rule in the Autopilot dashboard
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to view notification settings for a rule in the Permissions Management **Autopilot** dashboard.
+
+> [!NOTE]
+> Only users with **Administrator** permissions can view and make changes on the Autopilot tab. If you don't have these permissions, contact your system administrator.
+
+## View notification settings for a rule
+
+1. In the Permissions Management home page, select the **Autopilot** tab.
+1. In the **Autopilot** dashboard, from the **Authorization system types** dropdown, select Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
+1. From the **Authorization System** dropdown, in the **List** and **Folders** box, select the account and folder names that you want, and then select **Apply**.
+1. In the **Autopilot** dashboard, select a rule.
+1. In the far right of the row, select the ellipses **(...)**.
+1. To view notification settings for a rule, select **Notification Settings**.
+
+ Permissions Management displays a list of subscribed users. These users are signed up to receive notifications for the selected rule.
+
+1. To close the **Notification Settings** box, select **Close**.
++
+## Next steps
+
+- For more information about viewing rules, see [View roles in the Autopilot dashboard](ui-autopilot.md).
+- For information about creating rules, see [Create a rule](how-to-create-rule.md).
+- For information about generating, viewing, and applying rule recommendations for rules, see [Generate, view, and apply rule recommendations for rules](how-to-recommendations-rule.md).
active-directory How To Recommendations Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-recommendations-rule.md
+
+ Title: Generate, view, and apply rule recommendations in the Autopilot dashboard in Permissions Management
+description: How to generate, view, and apply rule recommendations in the Autopilot dashboard in Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Generate, view, and apply rule recommendations in the Autopilot dashboard
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to generate, view, and apply rule recommendations in the Permissions Management **Autopilot** dashboard.
+
+> [!NOTE]
+> Only users with **Administrator** permissions can view and make changes on the Autopilot tab. If you don't have these permissions, contact your system administrator.
+
+## Generate rule recommendations
+
+1. In the Permissions Management home page, select the **Autopilot** tab.
+1. In the **Autopilot** dashboard, from the **Authorization system types** dropdown, select Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
+1. From the **Authorization System** dropdown, in the **List** and **Folders** box, select the account and folder names that you want, and then select **Apply**.
+1. In the **Autopilot** dashboard, select a rule.
+1. In the far right of the row, select the ellipses **(...)**.
+1. To generate recommendations for each user and the authorization system, select **Generate Recommendations**.
+
+ Only the user who created the selected rule can generate a recommendation.
+1. View your recommendations in the **Recommendations** subtab.
+1. Select **Close** to close the **Recommendations** subtab.
+
+## View rule recommendations
+
+1. In the Permissions Management home page, select the **Autopilot** tab.
+1. In the **Autopilot** dashboard, from the **Authorization system types** dropdown, select Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
+1. From the **Authorization System** dropdown, in the **List** and **Folders** box, select the account and folder names that you want, and then select **Apply**.
+1. In the **Autopilot** dashboard, select a rule.
+1. In the far right of the row, select the ellipses **(...)**.
+
+1. To view recommendations for each user and the authorization system, select **View Recommendations**.
+
+ Permissions Management displays the recommendations for each user and authorization system in the **Recommendations** subtab.
+
+1. Select **Close** to close the **Recommendations** subtab.
+
+## Apply rule recommendations
+
+1. In the Permissions Management home page, select the **Autopilot** tab.
+1. In the **Autopilot** dashboard, from the **Authorization system types** dropdown, select Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
+1. From the **Authorization System** dropdown, in the **List** and **Folders** box, select the account and folder names that you want, and then select **Apply**.
+1. In the **Autopilot** dashboard, select a rule.
+1. In the far right of the row, select the ellipses **(...)**.
+
+1. To view recommendations for each user and the authorization system, select **View Recommendations**.
+
+ Permissions Management displays the recommendations for each user and authorization system in the **Recommendations** subtab.
+
+1. To apply a recommendation, select the **Apply Recommendations** subtab, and then select a recommendation.
+1. Select **Close** to close the **Recommendations** subtab.
+
+## Unapply rule recommendations
+
+1. In the Permissions Management home page, select the **Autopilot** tab.
+1. In the **Autopilot** dashboard, from the **Authorization system types** dropdown, select Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
+1. From the **Authorization System** dropdown, in the **List** and **Folders** box, select the account and folder names that you want, and then select **Apply**.
+1. In the **Autopilot** dashboard, select a rule.
+1. In the far right of the row, select the ellipses **(...)**.
+
+1. To view recommendations for each user and the authorization system, select **View Recommendations**.
+
+ Permissions Management displays the recommendations for each user and authorization system in the **Recommendations** subtab.
+
+1. To remove a recommendation, select the **Unapply Recommendations** subtab, and then select a recommendation.
+1. Select **Close** to close the **Recommendations** subtab.
++
+## Next steps
+
+- For more information about viewing rules, see [View roles in the Autopilot dashboard](ui-autopilot.md).
+- For information about creating rules, see [Create a rule](how-to-create-rule.md).
+- For information about notification settings for rules, see [View notification settings for a rule](how-to-notifications-rule.md).
active-directory How To Revoke Task Readonly Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-revoke-task-readonly-status.md
+
+ Title: Revoke access to high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities in the Remediation dashboard in Permissions Management
+description: How to revoke access to high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities in the Remediation dashboard in Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Revoke access to high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities
++
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how you can revoke high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities using the **Remediation** dashboard.
+
+> [!NOTE]
+> To view the **Remediation** tab, you must have **Viewer**, **Controller**, or **Administrator** permissions. To make changes on this tab, you must have **Controller** or **Administrator** permissions. If you don't have these permissions, contact your system administrator.
+
+## View an identity's permissions
+
+1. On the Permissions Management home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Authorization System Type** dropdown, select **Azure** or **GCP**.
+1. From the **Authorization System** dropdown, select the accounts you want to access.
+1. From the **Search for** dropdown, select **Group**, **User**, or **APP/Service Account**.
+1. To filter by more parameters, make a selection from the **User States**, **Permission Creep Index**, and **Task Usage** dropdowns.
+1. Select **Apply**.
+
+ Permissions Management displays a list of groups, users, and service accounts that match your criteria.
+1. In **Enter a username**, enter or select a user.
+1. In **Enter a Group Name**, enter or select a group, and then select **Apply**.
+1. Make a selection from the results list.
+
+    The table displays the **Username**, **Domain/Account**, **Source**, **Resource**, and **Current Role**.
++
+## Revoke an identity's access to unused tasks
+
+1. On the Permissions Management home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Authorization System Type** dropdown, select **Azure** or **GCP**.
+1. From the **Authorization System** dropdown, select the accounts you want to access.
+1. From the **Search for** dropdown, select **Group**, **User**, or **APP/Service Account**, and then select **Apply**.
+1. Make a selection from the results list.
+
+1. To revoke an identity's access to tasks they aren't using, select **Revoke Unused Tasks**.
+1. When the following message displays: **Are you sure you want to change permission?**, select:
+    - **Generate Script** to generate a script where you can manually add/remove the permissions you selected. (A sample script follows the procedures in this article.)
+ - **Execute** to change the permission.
+ - **Close** to cancel the action.
+
+## Revoke an identity's access to high-risk tasks
+
+1. On the Permissions Management home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Authorization System Type** dropdown, select **Azure** or **GCP**.
+1. From the **Authorization System** dropdown, select the accounts you want to access.
+1. From the **Search For** dropdown, select **Group**, **User**, or **APP/Service Account**, and then select **Apply**.
+1. Make a selection from the results list.
+
+1. To revoke an identity's access to high-risk tasks, select **Revoke High-Risk Tasks**.
+1. When the following message displays: **Are you sure you want to change permission?**, select:
+ - **Generate Script** to generate a script where you can manually add/remove the permissions you selected.
+ - **Execute** to change the permission.
+ - **Close** to cancel the action.
+
+## Revoke an identity's ability to delete tasks
+
+1. On the Permissions Management home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Authorization System Type** dropdown, select **Azure** or **GCP**.
+1. From the **Authorization System** dropdown, select the accounts you want to access.
+1. From the **Search For** dropdown, select **Group**, **User**, or **APP/Service Account**, and then select **Apply**.
+1. Make a selection from the results list.
+
+1. To revoke an identity's ability to delete tasks, select **Revoke Delete Tasks**.
+1. When the following message displays: **Are you sure you want to change permission?**, select:
+ - **Generate Script** to generate a script where you can manually add/remove the permissions you selected.
+ - **Execute** to change the permission.
+ - **Close** to cancel the action.
+
+## Assign read-only status to an identity
+
+1. On the Permissions Management home page, select the **Remediation** tab, and then select the **Permissions** subtab.
+1. From the **Authorization System Type** dropdown, select **Azure** or **GCP**.
+1. From the **Authorization System** dropdown, select the accounts you want to access.
+1. From the **Search for** dropdown, select **Group**, **User**, or **APP/Service Account**, and then select **Apply**.
+1. Make a selection from the results list.
+
+1. To assign read-only status to an identity, select **Assign Read-Only Status**.
+1. When the following message displays: **Are you sure you want to change permission?**, select:
+ - **Generate Script** to generate a script where you can manually add/remove the permissions you selected.
+ - **Execute** to change the permission.
+ - **Close** to cancel the action.
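+
+   In Azure, read-only status corresponds conceptually to the built-in **Reader** role. A minimal hedged sketch of granting it with Azure PowerShell (the object ID and subscription ID are placeholders; this isn't the script Permissions Management generates):
+
+   ```powershell
+   # Conceptual sketch: grant the built-in Reader role at subscription scope.
+   New-AzRoleAssignment -ObjectId "<principal-object-id>" `
+       -RoleDefinitionName "Reader" `
+       -Scope "/subscriptions/<subscription-id>"
+   ```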
++
+## Next steps
+
+- For information on how to view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](ui-remediation.md).
+- For information on how to create a role/policy, see [Create a role/policy](how-to-create-role-policy.md).
+- For information on how to clone a role/policy, see [Clone a role/policy](how-to-clone-role-policy.md).
+- For information on how to delete a role/policy, see [Delete a role/policy](how-to-delete-role-policy.md).
+- For information on how to modify a role/policy, see [Modify a role/policy](how-to-modify-role-policy.md).
+- To view information about roles/policies, see [View information about roles/policies](how-to-view-role-policy.md).
+- For information on how to attach and detach permissions for AWS identities, see [Attach and detach policies for AWS identities](how-to-attach-detach-permissions.md).
+- For information on how to add and remove roles and tasks for Azure and GCP identities, see [Add and remove roles and tasks for Azure and GCP identities](how-to-attach-detach-permissions.md).
+- For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](how-to-create-approve-privilege-request.md).
active-directory How To View Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-view-role-policy.md
+
+ Title: View information about roles/policies in the Remediation dashboard in Permissions Management
+description: How to view and filter information about roles/policies in the Remediation dashboard in Permissions Management.
+ Last updated : 02/23/2022
+# View information about roles/policies in the Remediation dashboard
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Remediation** dashboard in Permissions Management enables system administrators to view, adjust, and remediate excessive permissions based on a user's activity data. You can use the **Roles/Policies** subtab in the dashboard to view information about roles and policies in the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems.
+
+> [!NOTE]
+> To view the **Remediation dashboard** tab, you must have **Viewer**, **Controller**, or **Administrator** permissions. To make changes on this tab, you must have **Controller** or **Administrator** permissions. If you don't have these permissions, contact your system administrator.
+
+> [!NOTE]
+> Microsoft Azure uses the term *role* for what other cloud providers call *policy*. Permissions Management automatically makes this terminology change when you select the authorization system type. In the user documentation, we use *role/policy* to refer to both.
++
+## View information about roles/policies
+
+1. On the Permissions Management home page, select the **Remediation** tab, and then select the **Role/Policies** subtab.
+
+   The **Role/Policies list** displays a list of existing roles/policies and the following information about each role/policy:
+   - **Role/Policy Name**: The name of the roles/policies available to you.
+   - **Role/Policy Type**: **Custom**, **System**, or **Permissions Management Only**.
+   - **Actions**: The type of action you can perform on the role/policy: **Clone**, **Modify**, or **Delete**.
++
+1. To display details about the role/policy and view its assigned tasks and identities, select the arrow to the left of the role/policy name.
+
+ The **Tasks** list appears, displaying:
+ - A list of **Tasks**.
+ - **For AWS:**
+ - The **Users**, **Groups**, and **Roles** the task is **Directly Assigned To**.
+ - The **Group Members** and **Role Identities** the task is **Indirectly Accessible By**.
+
+ - **For Azure:**
+ - The **Users**, **Groups**, **Enterprise Applications** and **Managed Identities** the task is **Directly Assigned To**.
+ - The **Group Members** the task is **Indirectly Accessible By**.
+
+ - **For GCP:**
+ - The **Users**, **Groups**, and **Service Accounts** the task is **Directly Assigned To**.
+ - The **Group Members** the task is **Indirectly Accessible By**.
+
+1. To close the role/policy details, select the arrow to the left of the role/policy name.
+
+## Export information about roles/policies
+
+- **Export CSV**: Select this option to export the displayed list of roles/policies as a comma-separated values (CSV) file.
+
+ When the file is successfully exported, a message appears: **Exported Successfully.**
+
+ - Check your email for a message from the Permissions Management Customer Success Team. This email contains a link to:
+ - The **Role Policy Details** report in CSV format.
+    - The **Reports** dashboard, where you can configure how and when to automatically receive reports.
++++
+## Filter information about roles/policies
+
+1. On the Permissions Management home page, select the **Remediation** tab, and then select the **Role/Policies** subtab.
+1. To filter the roles/policies, select from the following options:
+
+ - **Authorization System Type**: Select **AWS**, **Azure**, or **GCP**.
+ - **Authorization System**: Select the accounts you want.
+ - **Role/Policy Type**: Select from the following options:
+
+ - **All**: All managed roles/policies.
+ - **Custom**: A customer-managed role/policy.
+ - **System**: A cloud service provider-managed role/policy.
+ - **Permissions Management Only**: A role/policy created by Permissions Management.
+
+ - **Role/Policy Status**: Select **All**, **Assigned**, or **Unassigned**.
+ - **Role/Policy Usage**: Select **All** or **Unused**.
+1. Select **Apply**.
+
+ To discard your changes, select **Reset Filter**.
++
+## Next steps
+
+- For information on how to view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](ui-remediation.md).
+- For information on how to create a role/policy, see [Create a role/policy](how-to-create-role-policy.md).
+- For information on how to clone a role/policy, see [Clone a role/policy](how-to-clone-role-policy.md).
+- For information on how to delete a role/policy, see [Delete a role/policy](how-to-delete-role-policy.md).
+- For information on how to modify a role/policy, see [Modify a role/policy](how-to-modify-role-policy.md).
+- For information on how to attach and detach permissions for AWS identities, see [Attach and detach policies for AWS identities](how-to-attach-detach-permissions.md).
+- For information on how to revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](how-to-revoke-task-readonly-status.md).
+- For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](how-to-create-approve-privilege-request.md).
+- For information on how to view information about roles/policies, see [View information about roles/policies](how-to-view-role-policy.md).
active-directory Integration Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/integration-api.md
+
+ Title: Set and view configuration settings in Permissions Management
+description: How to view the Permissions Management API integration settings and create service accounts and roles.
+ Last updated : 02/23/2022
+# Set and view configuration settings
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to view configuration settings, create and delete a service account, and create a role in Permissions Management.
+
+## View configuration settings
+
+The **Integrations** dashboard displays the authorization systems available to you.
+
+1. To display the **Integrations** dashboard, select **User** (your initials) in the upper right of the screen, and then select **Integrations.**
+
+ The **Integrations** dashboard displays a tile for each available authorization system.
+
+1. Select an authorization system tile to view the following integration information:
+
+ 1. To find out more about the Permissions Management API, select **Permissions Management API**, and then select documentation.
+   <!-- Add link: [documentation](https://developer.cloudknox.io/) -->
+
+ 1. To view information about service accounts, select **Integration**:
+ - **Email**: Lists the email address of the user who created the integration.
+ - **Created By**: Lists the first and last name of the user who created the integration.
+ - **Created On**: Lists the date and time the integration was created.
+ - **Recent Activity**: Lists the date and time the integration was last used, or notes if the integration was never used.
+ - **Service Account ID**: Lists the service account ID.
+ - **Access Key**: Lists the access key code.
+
+ 1. To view settings information, select **Settings**:
+    - **Roles can create service account**: Lists the roles that can create a service account.
+ - **Access Key Rotation Policy**: Lists notifications and actions you can set.
+ - **Access Key Usage Policy**: Lists notifications and actions you can set.
+
+## Create a service account
+
+1. On the **Integrations** dashboard, select **User**, and then select **Integrations.**
+2. Select **Create Service Account**. The following information is pre-populated on the page:
+ - **API Endpoint**
+ - **Service Account ID**
+ - **Access Key**
+ - **Secret Key**
+
+3. To copy the codes, select the **Duplicate** icon next to the respective information.
+
+ > [!NOTE]
+ > The codes are time sensitive and will regenerate after the box is closed.
+
+4. To regenerate the codes, at the bottom of the column, select **Regenerate**.
+
+## Delete a service account
+
+1. On the **Integrations** dashboard, select **User**, and then select **Integrations.**
+
+1. On the right of the email address, select **Delete Service Account**.
+
+   In the **Validate OTP To Delete [Service Name] Integration** box, a message displays asking you to check your email for a code sent to the email address on file.
+
+ If you don't receive the code, select **Resend OTP**.
+
+1. In the **Enter OTP** box, enter the code from the email.
+
+1. Select **Verify**.
+
+## Create a role
+
+1. On the **Integrations** dashboard, select **User**, and then select **Settings**.
+2. Under **Roles can create service account**, select the role you want:
+ - **Super Admin**
+ - **Viewer**
+ - **Controller**
+
+3. In the **Access Key Rotation Policy** column, select options for the following:
+
+ - **How often should the users rotate their access keys?**: Select **30 days**, **60 days**, **90 days**, or **Never**.
+ - **Notification**: Enter a whole number in the blank space within **Notify "X" days before the selected period**, or select **Don't Notify**.
+ - **Action (after the key rotation period ends)**: Select **Disable Action Key** or **No Action**.
+
+4. In the **Access Key Usage Policy** column, select options for the following:
+
+ - **How often should the users go without using their access keys?**: Select **30 days**, **60 days**, **90 days**, or **Never**.
+ - **Notification**: Enter a whole number in the blank space within **Notify "X" days before the selected period**, or select **Don't Notify**.
+ - **Action (after the key rotation period ends)**: Select **Disable Action Key** or **No Action**.
+
+5. Select **Save**.
+
+<!-- ## Next steps -->
+
+<!-- [View integrated authorization systems](product-integrations) -->
+<!-- [Installation overview](installation.md) -->
+<!-- [Sign up and deploy FortSentry registration](fortsentry-registration.md) -->
active-directory Multi Cloud Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/multi-cloud-glossary.md
+
+ Title: Permissions Management glossary
+description: Permissions Management glossary
+ Last updated : 02/23/2022
+# The Permissions Management glossary
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This glossary lists commonly used cloud terms in Permissions Management to help you navigate both cloud-specific and cloud-generic terminology.
+
+## Commonly used acronyms and terms
+
+| Term | Definition |
+|--|--|
+| ACL | Access control list. A list of files or resources that contain information about which users or groups have permission to access those resources or modify those files. |
+| ARN | Amazon Resource Name. A standardized identifier that uniquely names a resource in AWS. |
+| Authorization System | CIEM supports AWS accounts, Azure subscriptions, and GCP projects as authorization systems. |
+| Authorization System Type | Any system that provides authorizations by assigning permissions to identities and resources. CIEM supports AWS, Azure, and GCP as authorization system types. |
+| Cloud security | A form of cybersecurity that protects data stored online on cloud computing platforms from theft, leakage, and deletion. Includes firewalls, penetration testing, obfuscation, tokenization, virtual private networks (VPN), and avoiding public internet connections. |
+| Cloud storage | A service model in which data is maintained, managed, and backed up remotely. Available to users over a network. |
+| CIAM | Cloud Infrastructure Access Management |
+| CIEM | Cloud Infrastructure Entitlement Management. The next generation of solutions for enforcing least privilege in the cloud. It addresses cloud-native security challenges of managing identity access management in cloud environments. |
+| CIS | Cloud infrastructure security |
+| CWP | Cloud Workload Protection. A workload-centric security solution that targets the unique protection requirements of workloads in modern enterprise environments. |
+| CNAPP | Cloud-Native Application Protection. The convergence of cloud security posture management (CSPM), cloud workload protection (CWP), cloud infrastructure entitlement management (CIEM), and cloud applications security broker (CASB). An integrated security approach that covers the entire lifecycle of cloud-native applications. |
+| CSPM | Cloud Security Posture Management. Addresses risks of compliance violations and misconfigurations in enterprise cloud environments. Also focuses on the resource level to identify deviations from best practice security settings for cloud governance and compliance. |
+| CWPP | Cloud Workload Protection Platform |
+| Data Collector | Virtual entity which stores the data collection configuration |
+| Delete task | A high-risk task that allows users to permanently delete a resource. |
+| ED | Enterprise directory |
+| Entitlement | An abstract attribute that represents different forms of user permissions in a range of infrastructure systems and business applications.|
+| Entitlement management | Technology that grants, resolves, enforces, revokes, and administers fine-grained access entitlements (that is, authorizations, privileges, access rights, permissions and rules). Its purpose is to execute IT access policies to structured/unstructured data, devices, and services. It can be delivered by different technologies, and is often different across platforms, applications, network components, and devices. |
+| High-risk task | A task in which a user can cause data leakage, service disruption, or service degradation. |
+| Hybrid cloud | Sometimes called a cloud hybrid. A computing environment that combines an on-premises data center (a private cloud) with a public cloud. It allows data and applications to be shared between them. |
+| Hybrid cloud storage | A private or public cloud used to store an organization's data. |
+| ICM | Incident Case Management |
+| IDS | Intrusion Detection Service |
+| Identity analytics | Includes basic monitoring and remediation, dormant and orphan account detection and removal, and privileged account discovery. |
+| Identity lifecycle management | Maintain digital identities, their relationships with the organization, and their attributes during the entire process from creation to eventual archiving, using one or more identity life cycle patterns. |
+| IGA | Identity governance and administration. Technology solutions that conduct identity management and access governance operations. IGA includes the tools, technologies, reports, and compliance activities required for identity lifecycle management. It includes every operation from account creation and termination to user provisioning, access certification, and enterprise password management. It looks at automated workflow and data from authoritative sources capabilities, self-service user provisioning, IT governance, and password management. |
+| ITSM | Information Technology Security Management. Tools that enable IT operations organizations (infrastructure and operations managers), to better support the production environment. Facilitate the tasks and workflows associated with the management and delivery of quality IT services. |
+| JEP | Just Enough Permissions |
+| JIT | Just in Time access can be seen as a way to enforce the principle of least privilege to ensure users and non-human identities are given the minimum level of privileges. It also ensures that privileged activities are conducted in accordance with an organization's Identity Access Management (IAM), IT Service Management (ITSM), and Privileged Access Management (PAM) policies, with its entitlements and workflows. JIT access strategy enables organizations to maintain a full audit trail of privileged activities so they can easily identify who or what gained access to which systems, what they did at what time, and for how long. |
+| Least privilege | Ensures that users only gain access to the specific tools they need to complete a task. |
+| Multi-tenant | A single instance of the software and its supporting infrastructure serves multiple customers. Each customer shares the software application and also shares a single database. |
+| OIDC | OpenID Connect. An authentication protocol that verifies user identity when a user is trying to access a protected HTTPS endpoint. OIDC is an evolutionary development of ideas implemented earlier in OAuth. |
+| PAM | Privileged access management. Tools that offer one or more of these features: discover, manage, and govern privileged accounts on multiple systems and applications; control access to privileged accounts, including shared and emergency access; randomize, manage, and vault credentials (password, keys, etc.) for administrative, service, and application accounts; single sign-on (SSO) for privileged access to prevent credentials from being revealed; control, filter, and orchestrate privileged commands, actions, and tasks; manage and broker credentials to applications, services, and devices to avoid exposure; and monitor, record, audit, and analyze privileged access, sessions, and actions. |
+| PASM | Privileged accounts are protected by vaulting their credentials. Access to those accounts is then brokered for human users, services, and applications. Privileged session management (PSM) functions establish sessions with possible credential injection and full session recording. Passwords and other credentials for privileged accounts are actively managed and changed at definable intervals or upon the occurrence of specific events. PASM solutions may also provide application-to-application password management (AAPM) and zero-install remote privileged access features for IT staff and third parties that don't require a VPN. |
+| PEDM | Specific privileges are granted on the managed system by host-based agents to logged-in users. PEDM tools provide host-based command control (filtering); application allow, deny, and isolate controls; and/or privilege elevation. The latter is in the form of allowing particular commands to be run with a higher level of privileges. PEDM tools execute on the actual operating system at the kernel or process level. Command control through protocol filtering is explicitly excluded from this definition because the point of control is less reliable. PEDM tools may also provide file integrity monitoring features. |
+| Permission | Rights and privileges. Details given by users or network administrators that define access rights to files on a network. Access controls attached to a resource dictating which identities can access it and how. Privileges are attached to identities and are the ability to perform certain actions. An identity having the ability to perform an action on a resource. |
+| POD | Permission on Demand. A type of JIT access that allows the temporary elevation of permissions, enabling identities to access resources on a by-request, timed basis. |
+| Permissions creep index (PCI) | A number from 0 to 100 that represents the incurred risk of users with access to high-risk privileges. PCI is a function of users who have access to high-risk privileges but aren't actively using them. |
+| Policy and role management | Maintain rules that govern automatic assignment and removal of access rights. Provides visibility of access rights for selection in access requests, approval processes, dependencies, and incompatibilities between access rights, and more. Roles are a common vehicle for policy management. |
+| Privilege | The authority to make changes to a network or computer. Both people and accounts can have privileges, and both can have different levels of privilege. |
+| Privileged account | A login credential to a server, firewall, or other administrative account. Often referred to as admin accounts. Comprised of the actual username and password; these two things together make up the account. A privileged account is allowed to do more things than a normal account. |
+| Public Cloud | Computing services offered by third-party providers over the public Internet, making them available to anyone who wants to use or purchase them. They may be free or sold on-demand, allowing customers to pay only per usage for the CPU cycles, storage, or bandwidth they consume. |
+| Resource | Any entity that uses compute capabilities and can be accessed by users and services to perform actions. |
+| Role | An IAM identity that has specific permissions. Instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. A role doesn't have standard long-term credentials such as a password or access keys associated with it. |
+| SCIM | System for Cross-domain Identity Management |
+| SIEM | Security Information and Event Management. Technology that supports threat detection, compliance and security incident management through the collection and analysis (both near real time and historical) of security events, as well as a wide variety of other event and contextual data sources. The core capabilities are a broad scope of log event collection and management, the ability to analyze log events and other data across disparate sources, and operational capabilities (such as incident management, dashboards, and reporting). |
+| SOAR | Security orchestration, automation and response (SOAR). Technologies that enable organizations to take inputs from various sources (mostly from security information and event management [SIEM] systems) and apply workflows aligned to processes and procedures. These workflows can be orchestrated via integrations with other technologies and automated to achieve the desired outcome and greater visibility. Other capabilities include case and incident management features; the ability to manage threat intelligence, dashboards and reporting; and analytics that can be applied across various functions. SOAR tools significantly enhance security operations activities like threat detection and response by providing machine-powered assistance to human analysts to improve the efficiency and consistency of people and processes. |
+| Super user / Super identity | A powerful account used by IT system administrators that can be used to make configurations to a system or application, add or remove users, or delete data. |
+| Tenant | A dedicated instance of the services and organization data stored within a specific default location. |
+| UUID | Universally unique identifier. A 128-bit label used for information in computer systems. The term globally unique identifier (GUID) is also used.|
+| Zero trust security | The three foundational principles: explicit verification, breach assumption, and least privileged access.|
+| ZTNA | Zero trust network access. A product or service that creates an identity- and context-based, logical access boundary around an application or set of applications. The applications are hidden from discovery, and access is restricted via a trust broker to a set of named entities. The broker verifies the identity, context and policy adherence of the specified participants before allowing access and prohibits lateral movement elsewhere in the network. It removes application assets from public visibility and significantly reduces the surface area for attack.|
+
+## Next steps
+
+- For an overview of Permissions Management, see [What's Permissions Management?](overview.md).
active-directory Onboard Add Account After Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-add-account-after-onboarding.md
+
+ Title: Add an account/subscription/project to Permissions Management after onboarding is complete
+description: How to add an account/subscription/project to Permissions Management after onboarding is complete.
+ Last updated : 02/23/2022
+# Add an account/subscription/project after onboarding is complete
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to add an Amazon Web Services (AWS) account, Microsoft Azure subscription, or Google Cloud Platform (GCP) project in Permissions Management after you've completed the onboarding process.
+
+## Add an AWS account after onboarding is complete
+
+1. In the Permissions Management home page, select **Settings** (the gear icon), and then select the **Data collectors** tab.
+1. On the **Data collectors** dashboard, select **AWS**.
+1. Select the ellipses **(...)** at the end of the row, and then select **Edit Configuration**.
+
+ The **Permissions Management Onboarding - Summary** page displays.
+
+1. Go to **AWS Account IDs**, and then select **Edit** (the pencil icon).
+
+ The **Permissions Management Onboarding - AWS Member Account Details** page displays.
+
+1. Go to **Enter Your AWS Account IDs**, and then select **Add** (the plus **+** sign).
+1. Copy your account ID from AWS and paste it into the **Enter Account ID** box.
+
+ The AWS account ID is automatically added to the script.
+
+ If you want to add more account IDs, repeat steps 5 and 6 to add up to a total of 10 account IDs.
+
+1. Copy the script.
+1. Go to AWS and start the Cloud Shell.
+1. Create a new script for the new account and press the **Enter** key.
+1. Paste the script you copied.
+1. Locate the account line, delete the original account ID (the one that was previously added), and then run the script.
+1. Return to Permissions Management. The new account ID appears in the list of account IDs displayed on the **Permissions Management Onboarding - Summary** page.
+1. Select **Verify now & save**.
+
+ When your changes are saved, the following message displays: **Successfully updated configuration.**
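+
+If you need to look up the ID of the AWS account you're currently signed in to, the AWS CLI can return it. This is a hedged convenience, separate from the script Permissions Management generates:
+
+```powershell
+# Print the 12-digit account ID of the authenticated AWS identity.
+aws sts get-caller-identity --query Account --output text
+```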
++
+## Add an Azure subscription after onboarding is complete
+
+1. In the Permissions Management home page, select **Settings** (the gear icon), and then select the **Data collectors** tab.
+1. On the **Data collectors** dashboard, select **Azure**.
+1. Select the ellipses **(...)** at the end of the row, and then select **Edit Configuration**.
+
+ The **Permissions Management Onboarding - Summary** page displays.
+
+1. Go to **Azure subscription IDs**, and then select **Edit** (the pencil icon).
+1. Go to **Enter your Azure Subscription IDs**, and then select **Add subscription** (the plus **+** sign).
+1. Copy your subscription ID from Azure and paste it into the subscription ID box.
+
+ The subscription ID is automatically added to the subscriptions line in the script.
+
+   If you want to add more subscription IDs, repeat steps 5 and 6 to add up to a total of 10 subscriptions.
+
+1. Copy the script.
+1. Go to Azure and start the Cloud Shell.
+1. Create a new script for the new subscription and press the **Enter** key.
+1. Paste the script you copied.
+1. Locate the subscription line and delete the original subscription ID (the one that was previously added), and then run the script.
+1. Return to Permissions Management. The new subscription ID appears in the list of subscription IDs displayed on the **Permissions Management Onboarding - Summary** page.
+1. Select **Verify now & save**.
+
+ When your changes are saved, the following message displays: **Successfully updated configuration.**
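+
+As an alternative to copying subscription IDs from the Azure portal, you can list them from the command line. A minimal sketch using the Azure CLI:
+
+```powershell
+# List the name and ID of each subscription visible to the signed-in account.
+az account list --query "[].{Name:name, SubscriptionId:id}" --output table
+```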
+
+## Add a GCP project after onboarding is complete
+
+1. In the Permissions Management home page, select **Settings** (the gear icon), and then select the **Data collectors** tab.
+1. On the **Data collectors** dashboard, select **GCP**.
+1. Select the ellipses **(...)** at the end of the row, and then select **Edit Configuration**.
+
+ The **Permissions Management Onboarding - Summary** page displays.
+
+1. Go to **GCP Project IDs**, and then select **Edit** (the pencil icon).
+1. Go to **Enter your GCP Project IDs**, and then select **Add Project ID** (the plus **+** sign).
+1. Copy your project ID from GCP and paste it into the **Project ID** box.
+
+ The project ID is automatically added to the **Project ID** line in the script.
+
+   If you want to add more project IDs, repeat steps 5 and 6 to add up to a total of 10 project IDs.
+
+1. Copy the script.
+1. Go to GCP and start the Cloud Shell.
+1. Create a new script for the new project ID and press the **Enter** key.
+1. Paste the script you copied.
+1. Locate the project ID line and delete the original project ID (the one that was previously added), and then run the script.
+1. Return to Permissions Management. The new project ID appears in the list of project IDs displayed on the **Permissions Management Onboarding - Summary** page.
+1. Select **Verify now & save**.
+
+ When your changes are saved, the following message displays: **Successfully updated configuration.**
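+
+Similarly, you can list your GCP project IDs from the command line instead of copying them from the console. A minimal sketch using the gcloud CLI:
+
+```powershell
+# List the ID and display name of each project the signed-in account can see.
+gcloud projects list --format="table(projectId, name)"
+```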
+++
+## Next steps
+
+- For information on how to onboard an Amazon Web Services (AWS) account, see [Onboard an AWS account](onboard-aws.md).
+- For information on how to onboard a Microsoft Azure subscription, see [Onboard a Microsoft Azure subscription](onboard-azure.md).
+- For information on how to onboard a Google Cloud Platform (GCP) project, see [Onboard a GCP project](onboard-gcp.md).
+- For information on how to enable or disable the controller after onboarding is complete, see [Enable or disable the controller](onboard-enable-controller-after-onboarding.md).
active-directory Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-aws.md
+
+ Title: Onboard an Amazon Web Services (AWS) account on Permissions Management
+description: How to onboard an Amazon Web Services (AWS) account on Permissions Management.
+ Last updated : 04/20/2022
+# Onboard an Amazon Web Services (AWS) account
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+> [!NOTE]
+> The Permissions Management PREVIEW is currently not available for tenants hosted in the European Union (EU).
++
+This article describes how to onboard an Amazon Web Services (AWS) account on Permissions Management.
+
+> [!NOTE]
+> A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable Permissions Management on your Azure Active Directory tenant](onboard-enable-tenant.md).
++
+## View a training video on configuring and onboarding an AWS account
+
+To view a video on how to configure and onboard AWS accounts in Permissions Management, select [Configure and onboard AWS accounts](https://www.youtube.com/watch?v=R6K21wiWYmE).
+
+## Onboard an AWS account
+
+1. If the **Data Collectors** dashboard isn't displayed when Permissions Management launches:
+
+ - In the Permissions Management home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
+
+1. On the **Data Collectors** dashboard, select **AWS**, and then select **Create Configuration**.
+
+### 1. Create an Azure AD OIDC App
+
+1. On the **Permissions Management Onboarding - Azure AD OIDC App Creation** page, enter the **OIDC Azure app name**.
+
+ This app is used to set up an OpenID Connect (OIDC) connection to your AWS account. OIDC is an interoperable authentication protocol based on the OAuth 2.0 family of specifications. The scripts generated on this page create the app of this specified name in your Azure AD tenant with the right configuration.
+
+1. To create the app registration, copy the script and run it in your Azure command-line app.
+
+ > [!NOTE]
+ > 1. To confirm that the app was created, open **App registrations** in Azure and, on the **All applications** tab, locate your app.
+ > 1. Select the app name to open the **Expose an API** page. The **Application ID URI** displayed in the **Overview** page is the *audience value* used while making an OIDC connection with your AWS account.
+
+1. Return to Permissions Management, and in the **Permissions Management Onboarding - Azure AD OIDC App Creation**, select **Next**.
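+
+   As an alternative to checking **App registrations** in the portal, you can confirm the app registration from the command line. A hedged sketch using the Azure CLI; the display name is whatever you entered as the **OIDC Azure app name**:
+
+   ```powershell
+   # Look up the app registration by display name and show its app ID.
+   az ad app list --display-name "<oidc-azure-app-name>" `
+       --query "[].{Name:displayName, AppId:appId}" --output table
+   ```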
+
+### 2. Set up an AWS OIDC account
+
+1. In the **Permissions Management Onboarding - AWS OIDC Account Setup** page, enter the **AWS OIDC account ID** where the OIDC provider is created. You can change the role name to suit your requirements.
+1. Open another browser window and sign in to the AWS account where you want to create the OIDC provider.
+1. Select **Launch Template**. This link takes you to the **AWS CloudFormation create stack** page.
+1. Scroll to the bottom of the page, and in the **Capabilities** box, select **I acknowledge that AWS CloudFormation might create IAM resources with custom names**. Then select **Create stack**.
+
+ This AWS CloudFormation stack creates an OIDC Identity Provider (IdP) representing Azure AD STS and an AWS IAM role with a trust policy that allows external identities from Azure AD to assume it via the OIDC IdP. These entities are listed on the **Resources** page.
+
+1. Return to Permissions Management, and in the **Permissions Management Onboarding - AWS OIDC Account Setup** page, select **Next**.
+
+### 3. Set up an AWS master account (Optional)
+
+1. If your organization has Service Control Policies (SCPs) that govern some or all of the member accounts, set up the master account connection in the **Permissions Management Onboarding - AWS Master Account Details** page.
+
+ Setting up the master account connection allows Permissions Management to auto-detect and onboard any AWS member accounts that have the correct Permissions Management role.
+
+ - In the **Permissions Management Onboarding - AWS Master Account Details** page, enter the **Master Account ID** and **Master Account Role**.
+
+1. Open another browser window and sign in to the AWS console for your master account.
+
+1. Return to Permissions Management, and in the **Permissions Management Onboarding - AWS Master Account Details** page, select **Launch Template**.
+
+ The **AWS CloudFormation create stack** page opens, displaying the template.
+
+1. Review the information in the template, make changes, if necessary, then scroll to the bottom of the page.
+
+1. In the **Capabilities** box, select **I acknowledge that AWS CloudFormation might create IAM resources with custom names**. Then select **Create stack**.
+
+ This AWS CloudFormation stack creates a role in the master account with the necessary permissions (policies) to collect SCPs and list all the accounts in your organization.
+
+ A trust policy is set on this role to allow the OIDC role created in your AWS OIDC account to access it. These entities are listed in the **Resources** tab of your CloudFormation stack.
+
+1. Return to Permissions Management, and in **Permissions Management Onboarding - AWS Master Account Details**, select **Next**.
+
+### 4. Set up an AWS Central logging account (Optional but recommended)
+
+1. If your organization has a central logging account where logs from some or all of your AWS accounts are stored, in the **Permissions Management Onboarding - AWS Central Logging Account Details** page, set up the logging account connection.
+
+ In the **Permissions Management Onboarding - AWS Central Logging Account Details** page, enter the **Logging Account ID** and **Logging Account Role**.
+
+1. In another browser window, sign in to the AWS console for the AWS account you use for central logging.
+
+1. Return to Permissions Management, and in the **Permissions Management Onboarding - AWS Central Logging Account Details** page, select **Launch Template**.
+
+ The **AWS CloudFormation create stack** page opens, displaying the template.
+
+1. Review the information in the template, make changes, if necessary, then scroll to the bottom of the page.
+
+1. In the **Capabilities** box, select **I acknowledge that AWS CloudFormation might create IAM resources with custom names**, and then select **Create stack**.
+
+ This AWS CloudFormation stack creates a role in the logging account with the necessary permissions (policies) to read S3 buckets used for central logging. A trust policy is set on this role to allow the OIDC role created in your AWS OIDC account to access it. These entities are listed in the **Resources** tab of your CloudFormation stack.
+
+1. Return to Permissions Management, and in the **Permissions Management Onboarding - AWS Central Logging Account Details** page, select **Next**.
+
+### 5. Set up an AWS member account
+
+1. In the **Permissions Management Onboarding - AWS Member Account Details** page, enter the **Member Account Role** and the **Member Account IDs**.
+
+   You can enter up to 10 account IDs. Select the plus icon next to the text box to add more account IDs.
+
+ > [!NOTE]
+ > Perform the next 6 steps for each account ID you add.
+
+1. Open another browser window and sign in to the AWS console for the member account.
+
+1. Return to the **Permissions Management Onboarding - AWS Member Account Details** page, and select **Launch Template**.
+
+ The **AWS CloudFormation create stack** page opens, displaying the template.
+
+1. In the **CloudTrailBucketName** box, enter a name.
+
+ You can copy and paste the **CloudTrailBucketName** name from the **Trails** page in AWS.
+
+ > [!NOTE]
+ > A *cloud bucket* collects all the activity in a single account that Permissions Management monitors. Enter the name of a cloud bucket here to provide Permissions Management with the access required to collect activity data.
+
+1. From the **Enable Controller** dropdown, select:
+
+ - **True**, if you want the controller to provide Permissions Management with read and write access so that any remediation you want to do from the Permissions Management platform can be done automatically.
+ - **False**, if you want the controller to provide Permissions Management with read-only access.
+
+1. Scroll to the bottom of the page, and in the **Capabilities** box, select **I acknowledge that AWS CloudFormation might create IAM resources with custom names**. Then select **Create stack**.
+
+ This AWS CloudFormation stack creates a collection role in the member account with necessary permissions (policies) for data collection.
+
+ A trust policy is set on this role to allow the OIDC role created in your AWS OIDC account to access it. These entities are listed in the **Resources** tab of your CloudFormation stack.
+
+1. Return to Permissions Management, and in the **Permissions Management Onboarding - AWS Member Account Details** page, select **Next**.
+
+ This step completes the sequence of required connections from Azure AD STS to the OIDC connection account and the AWS member account.
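+
+   If you want to confirm from the AWS CLI that the CloudFormation stack created the collection role and related resources, something like the following can help (the stack name is an assumption; use the name shown in the AWS console):
+
+   ```powershell
+   # List the resources created by the onboarding CloudFormation stack.
+   aws cloudformation describe-stack-resources --stack-name "<member-account-stack-name>"
+   ```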
+
+### 6. Review and save
+
+1. On the **Permissions Management Onboarding – Summary** page, review the information you've added, and then select **Verify Now & Save**.
+
+ The following message appears: **Successfully created configuration.**
+
+ On the **Data Collectors** dashboard, the **Recently Uploaded On** column displays **Collecting**. The **Recently Transformed On** column displays **Processing.**
+
+ You have now completed onboarding AWS, and Permissions Management has started collecting and processing your data.
+
+### 7. View the data
+
+1. To view the data, select the **Authorization Systems** tab.
+
+ The **Status** column in the table displays **Collecting Data.**
+
+ The data collection process may take some time, depending on the size of the account and how much data is available for collection.
++
+## Next steps
+
+- For information on how to onboard a Microsoft Azure subscription, see [Onboard a Microsoft Azure subscription](onboard-azure.md).
+- For information on how to onboard a Google Cloud Platform (GCP) project, see [Onboard a Google Cloud Platform (GCP) project](onboard-gcp.md).
+- For information on how to enable or disable the controller after onboarding is complete, see [Enable or disable the controller](onboard-enable-controller-after-onboarding.md).
+- For information on how to add an account/subscription/project after onboarding is complete, see [Add an account/subscription/project after onboarding is complete](onboard-add-account-after-onboarding.md).
active-directory Onboard Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-azure.md
+
+ Title: Onboard a Microsoft Azure subscription in Permissions Management
+description: How to onboard a Microsoft Azure subscription on Permissions Management.
+ Last updated : 04/20/2022
+# Onboard a Microsoft Azure subscription
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management (Permissions Management) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+> [!NOTE]
+> The Permissions Management PREVIEW is currently not available for tenants hosted in the European Union (EU).
+
+This article describes how to onboard a Microsoft Azure subscription or subscriptions on Permissions Management. Onboarding a subscription creates a new authorization system to represent the Azure subscription in Permissions Management.
+
+> [!NOTE]
+> A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable Permissions Management on your Azure Active Directory tenant](onboard-enable-tenant.md).
+
+## Prerequisites
+
+To add Permissions Management to your Azure AD tenant:
+- You must have an Azure AD user account and an Azure command-line interface (Azure CLI) on your system, or an Azure subscription. If you don't already have one, [create a free account](https://azure.microsoft.com/free/).
+- You must have **Microsoft.Authorization/roleAssignments/write** permission at the subscription or management group scope to perform these tasks. If you don't have this permission, you can ask someone who has this permission to perform these tasks for you.
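+
+To check whether your account holds a role that includes **Microsoft.Authorization/roleAssignments/write** (for example, Owner or User Access Administrator), you can inspect your role assignments. A minimal sketch using Azure PowerShell; the sign-in name and subscription ID are placeholders:
+
+```powershell
+# Show the roles assigned to a user at the subscription scope.
+Connect-AzAccount
+Get-AzRoleAssignment -SignInName "user@contoso.com" `
+    -Scope "/subscriptions/<subscription-id>" |
+    Select-Object RoleDefinitionName, Scope
+```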
++
+## View a training video on enabling Permissions Management in your Azure AD tenant
+
+To view a video on how to enable Permissions Management in your Azure AD tenant, select [Enable Permissions Management in your Azure AD tenant](https://www.youtube.com/watch?v=-fkfeZyevoo).
+
+## How to onboard an Azure subscription
+
+1. If the **Data Collectors** dashboard isn't displayed when Permissions Management launches:
+
+ - In the Permissions Management home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
+
+1. On the **Data Collectors** dashboard, select **Azure**, and then select **Create Configuration**.
+
+### 1. Add Azure subscription details
+
+1. On the **Permissions Management Onboarding - Azure Subscription Details** page, enter the **Subscription IDs** that you want to onboard.
+
+ > [!NOTE]
+ > To locate the Azure subscription IDs, open the **Subscriptions** page in Azure.
+   > You can enter up to 10 subscription IDs. Select the plus sign **(+)** icon next to the text box to enter more subscriptions.
+
+1. From the **Scope** dropdown, select **Subscription** or **Management Group**. The script box displays the role assignment script.
+
+ > [!NOTE]
+ > Select **Subscription** if you want to assign permissions separately for each individual subscription. The generated script has to be executed once per subscription.
+ > Select **Management Group** if all of your subscriptions are under one management group. The generated script must be executed once for the management group.
+
+1. To give this role assignment to the service principal, copy the script to a file on your system where Azure CLI is installed and execute it (see the illustrative sketch after these steps).
+
+ You can execute the script once for each subscription, or once for all the subscriptions in the management group.
+
+1. From the **Enable Controller** dropdown, select:
+
+ - **True**, if you want the controller to provide Permissions Management with read and write access so that any remediation you want to do from the Permissions Management platform can be done automatically.
+ - **False**, if you want the controller to provide Permissions Management with read-only access.
+
+1. Return to the **Permissions Management Onboarding - Azure Subscription Details** page and select **Next**.
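+
+   The portal generates the actual role assignment script for you. As an illustrative sketch only of the kind of assignment such a script performs (the service principal object ID, role name, and scope are placeholder assumptions):
+
+   ```powershell
+   # Illustrative only: assign a role to the Permissions Management
+   # service principal at subscription scope. Values are placeholders.
+   New-AzRoleAssignment -ObjectId "<service-principal-object-id>" `
+       -RoleDefinitionName "<role-name>" `
+       -Scope "/subscriptions/<subscription-id>"
+   ```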
+
+### 2. Review and save
+
+- On the **Permissions Management Onboarding – Summary** page, review the information you've added, and then select **Verify Now & Save**.
+
+ The following message appears: **Successfully Created Configuration.**
+
+ On the **Data Collectors** tab, the **Recently Uploaded On** column displays **Collecting**. The **Recently Transformed On** column displays **Processing.**
+
+ You have now completed onboarding Azure, and Permissions Management has started collecting and processing your data.
+
+### 3. View the data
+
+- To view the data, select the **Authorization Systems** tab.
+
+ The **Status** column in the table displays **Collecting Data.**
+
+ The data collection process will take some time, depending on the size of the account and how much data is available for collection.
++
+## Next steps
+
+- For information on how to onboard an Amazon Web Services (AWS) account, see [Onboard an Amazon Web Services (AWS) account](onboard-aws.md).
+- For information on how to onboard a Google Cloud Platform (GCP) project, see [Onboard a Google Cloud Platform (GCP) project](onboard-gcp.md).
+- For information on how to enable or disable the controller after onboarding is complete, see [Enable or disable the controller](onboard-enable-controller-after-onboarding.md).
+- For information on how to add an account/subscription/project after onboarding is complete, see [Add an account/subscription/project after onboarding is complete](onboard-add-account-after-onboarding.md).
+- For an overview on Permissions Management, see [What's Permissions Management?](overview.md).
+- For information on how to start viewing information about your authorization system in Permissions Management, see [View key statistics and data about your authorization system](ui-dashboard.md).
active-directory Onboard Enable Controller After Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-controller-after-onboarding.md
+
+ Title: Enable or disable the controller in Permissions Management after onboarding is complete
+description: How to enable or disable the controller in Permissions Management after onboarding is complete.
+ Last updated : 02/23/2022
+# Enable or disable the controller after onboarding is complete
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to enable or disable the controller in Microsoft Azure and Google Cloud Platform (GCP) after onboarding is complete.
+
+This article also describes how to enable the controller in Amazon Web Services (AWS) if you disabled it during onboarding. You can only enable the controller in AWS at this time; you can't disable it.
+
+## Enable the controller in AWS
+
+> [!NOTE]
+> You can only enable the controller in AWS; you can't disable it at this time.
+
+1. Sign in to the AWS console of the member account in a separate browser window.
+1. Go to the Permissions Management home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
+1. On the **Data Collectors** dashboard, select **AWS**, and then select **Create Configuration**.
+1. On the **Permissions Management Onboarding - AWS Member Account Details** page, select **Launch Template**.
+
+ The **AWS CloudFormation create stack** page opens, displaying the template.
+1. In the **CloudTrailBucketName** box, enter a name.
+
+ You can copy and paste the **CloudTrailBucketName** name from the **Trails** page in AWS.
+
+ > [!NOTE]
+ > A *cloud bucket* collects all the activity in a single account that Permissions Management monitors. Enter the name of a cloud bucket here to provide Permissions Management with the access required to collect activity data.
+
+1. In the **EnableController** box, from the drop-down list, select **True** to provide Permissions Management with read and write access so that any remediation you want to do from the Permissions Management platform can be done automatically.
+
+1. Scroll to the bottom of the page, and in the **Capabilities** box, select **I acknowledge that AWS CloudFormation might create IAM resources with custom names**. Then select **Create stack**.
+
+ This AWS CloudFormation stack creates a collection role in the member account with necessary permissions (policies) for data collection. A trust policy is set on this role to allow the OIDC role created in your AWS OIDC account to access it. These entities are listed in the **Resources** tab of your CloudFormation stack.
+
+1. Return to Permissions Management, and on the Permissions Management **Onboarding - AWS Member Account Details** page, select **Next**.
+1. On the **Permissions Management Onboarding – Summary** page, review the information you've added, and then select **Verify Now & Save**.
+
+ The following message appears: **Successfully created configuration.**
+
+## Enable or disable the controller in Azure
++
+1. In Azure, open the **Access control (IAM)** page.
+1. In the **Check access** section, in the **Find** box, enter **Cloud Infrastructure Entitlement Management**.
+
+ The **Cloud Infrastructure Entitlement Management assignments** page appears, displaying the roles assigned to you.
+
+ - If you have read-only permission, the **Role** column displays **Reader**.
+   - If you have administrative permission, the **Role** column displays **User Access Administrator**.
+
+1. To add the administrative role assignment, return to the **Access control (IAM)** page, and then select **Add role assignment**.
+1. Add or remove the role assignment for Cloud Infrastructure Entitlement Management (a hedged command-line sketch follows at the end of this section).
+
+1. Go to the Permissions Management home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
+1. On the **Data Collectors** dashboard, select **Azure**, and then select **Create Configuration**.
+1. On the **Permissions Management Onboarding - Azure Subscription Details** page, enter the **Subscription ID**, and then select **Next**.
+1. On the **Permissions Management Onboarding – Summary** page, review the controller permissions, and then select **Verify Now & Save**.
+
+ The following message appears: **Successfully Created Configuration.**
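+
+For reference, the role assignment in the steps above can also be made from the command line. A hedged sketch using Azure PowerShell; the service principal display name and subscription ID are assumptions:
+
+```powershell
+# Grant the Permissions Management service principal write access
+# (User Access Administrator) at subscription scope. Placeholder values.
+$sp = Get-AzADServicePrincipal -DisplayName "Cloud Infrastructure Entitlement Management"
+New-AzRoleAssignment -ObjectId $sp.Id `
+    -RoleDefinitionName "User Access Administrator" `
+    -Scope "/subscriptions/<subscription-id>"
+```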
++
+## Enable or disable the controller in GCP
+
+1. Execute **gcloud auth login**.
+1. Follow the instructions displayed on the screen to authorize access to your Google account.
+1. Execute **sh mciem-workload-identity-pool.sh** to create the workload identity pool, provider, and service account.
+1. Execute **sh mciem-member-projects.sh** to give Permissions Management permissions to access each of the member projects.
+
+ - If you want to manage permissions through Permissions Management, select **Y** to **Enable controller**.
+ - If you want to onboard your projects in read-only mode, select **N** to **Disable controller**.
+
+1. Optionally, execute **sh mciem-enable-gcp-api.sh** to enable all recommended GCP APIs.
+
+1. Go to the Permissions Management home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
+1. On the **Data Collectors** dashboard, select **GCP**, and then select **Create Configuration**.
+1. On the **Permissions Management Onboarding - Azure AD OIDC App Creation** page, select **Next**.
+1. On the **Permissions Management Onboarding - GCP OIDC Account Details & IDP Access** page, enter the **OIDC Project Number** and **OIDC Project ID**, and then select **Next**.
+1. On the **Permissions Management Onboarding - GCP Project IDs** page, enter the **Project IDs**, and then select **Next**.
+1. On the **Permissions Management Onboarding ΓÇô Summary** page, review the information you've added, and then select **Verify Now & Save**.
+
+ The following message appears: **Successfully Created Configuration.**
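+
+For reference, the shell commands from the steps above are collected here as one sequence. This is a hedged sketch: the script names come from this article, the **Y**/**N** controller prompt is interactive, and the `sh` prefix on the last script is assumed for consistency with the others.
+
+```bash
+gcloud auth login                    # authorize access to your Google account
+sh mciem-workload-identity-pool.sh   # create the workload identity pool, provider, and service account
+sh mciem-member-projects.sh          # answer Y to enable or N to disable the controller per project
+sh mciem-enable-gcp-api.sh           # optional: enable all recommended GCP APIs
+```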
+
+## Next steps
+
+- For information on how to onboard an Amazon Web Services (AWS) account, see [Onboard an AWS account](onboard-aws.md).
+- For information on how to onboard a Microsoft Azure subscription, see [Onboard a Microsoft Azure subscription](onboard-azure.md).
+- For information on how to onboard a Google Cloud Platform (GCP) project, see [Onboard a GCP project](onboard-gcp.md).
+- For information on how to add an account/subscription/project after onboarding is complete, see [Add an account/subscription/project after onboarding is complete](onboard-add-account-after-onboarding.md).
active-directory Onboard Enable Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-tenant.md
+
+ Title: Enable Permissions Management in your organization
+description: How to enable Permissions Management in your organization.
+ Last updated : 04/20/2022
+# Enable Permissions Management in your organization
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
++
+> [!NOTE]
+> The Permissions Management PREVIEW is currently not available for tenants hosted in the European Union (EU).
+++
+This article describes how to enable Permissions Management in your organization. Once you've enabled Permissions Management, you can connect it to your Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) platforms.
+
+> [!NOTE]
+> To complete this task, you must have *global administrator* permissions as a user in that tenant. You can't enable Permissions Management as a user from another tenant who has signed in via B2B or via Azure Lighthouse.
+
+## Prerequisites
+
+To enable Permissions Management in your organization:
+
+- You must have an Azure AD tenant. If you don't already have one, [create a free account](https://azure.microsoft.com/free/).
+- You must be eligible for or have an active assignment to the global administrator role as a user in that tenant.
+
+> [!NOTE]
+> During public preview, Permissions Management doesn't perform a license check.
+
+## View a training video on enabling Permissions Management
+
+- To view a video on how to enable Permissions Management in your Azure AD tenant, select [Enable Permissions Management in your Azure AD tenant](https://www.youtube.com/watch?v=-fkfeZyevoo).
+- To view a video on how to configure and onboard AWS accounts in Permissions Management, select [Configure and onboard AWS accounts](https://www.youtube.com/watch?v=R6K21wiWYmE).
+- To view a video on how to configure and onboard GCP accounts in Permissions Management, select [Configure and onboard GCP accounts](https://www.youtube.com/watch?app=desktop&v=W3epcOaec28).
++
+## How to enable Permissions Management on your Azure AD tenant
+
+1. In your browser:
+ 1. Go to [Azure services](https://portal.azure.com) and use your credentials to sign in to [Azure Active Directory](https://ms.portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview).
+ 1. If you aren't already authenticated, sign in as a global administrator user.
+ 1. If needed, activate the global administrator role in your Azure AD tenant.
+ 1. In the Azure AD portal, select **Features highlights**, and then select **Permissions Management**.
+
+    1. If you're prompted to select a sign-in account, sign in as a global administrator for a specified tenant.
+
+ The **Welcome to Permissions Management** screen appears, displaying information on how to enable Permissions Management on your tenant.
+
+1. To provide access to the Permissions Management application, create a service principal.
+
+ An Azure service principal is a security identity used by user-created apps, services, and automation tools to access specific Azure resources.
+
+ > [!NOTE]
+ > To complete this step, you must have Azure CLI or Azure PowerShell on your system, or an Azure subscription where you can run Cloud Shell.
+
+ - To create a service principal that points to the Permissions Management application via Cloud Shell:
+
+ 1. Copy the script on the **Welcome** screen:
+
+ `az ad sp create --id b46c3ac5-9da6-418f-a849-0a07a10b3c6c`
+
+ 1. If you have an Azure subscription, return to the Azure AD portal and select **Cloud Shell** on the navigation bar.
+ If you don't have an Azure subscription, open a command prompt on a Windows Server.
+ 1. If you have an Azure subscription, paste the script into Cloud Shell and press **Enter**.
+
+ - For information on how to create a service principal through the Azure portal, see [Create an Azure service principal with the Azure CLI](/cli/azure/create-an-azure-service-principal-azure-cli).
+
+    - For information on the **az** command and how to sign in with the no-subscriptions flag, see [az login](/cli/azure/reference-index?view=azure-cli-latest#az-login&preserve-view=true).
+
+ - For information on how to create a service principal via Azure PowerShell, see [Create an Azure service principal with Azure PowerShell](/powershell/azure/create-azure-service-principal-azureps?view=azps-7.1.0&preserve-view=true).
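+
+    - A hedged example of running the script from a local shell without an Azure subscription. The `--allow-no-subscriptions` flag is a standard Azure CLI option rather than something this article prescribes; the app ID is the one shown on the **Welcome** screen.
+
+      ```bash
+      # Sign in to the tenant without selecting a subscription, then create
+      # the service principal for the Permissions Management application.
+      az login --allow-no-subscriptions
+      az ad sp create --id b46c3ac5-9da6-418f-a849-0a07a10b3c6c
+      ```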
+
+ 1. After the script runs successfully, the service principal attributes for Permissions Management display. Confirm the attributes.
+
+ The **Cloud Infrastructure Entitlement Management** application displays in the Azure AD portal under **Enterprise applications**.
+
+1. Return to the **Welcome to Permissions Management** screen and select **Enable Permissions Management**.
+
+ You have now completed enabling Permissions Management on your tenant. Permissions Management launches with the **Data Collectors** dashboard.
+
+## Configure data collection settings
+
+Use the **Data Collectors** dashboard in Permissions Management to configure data collection settings for your authorization system.
+
+1. If the **Data Collectors** dashboard isn't displayed when Permissions Management launches:
+
+ - In the Permissions Management home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
+
+1. Select the authorization system you want: **AWS**, **Azure**, or **GCP**.
+
+1. For information on how to onboard an AWS account, Azure subscription, or GCP project into Permissions Management, select one of the following articles and follow the instructions:
+
+ - [Onboard an AWS account](onboard-aws.md)
+ - [Onboard an Azure subscription](onboard-azure.md)
+ - [Onboard a GCP project](onboard-gcp.md)
+
+## Next steps
+
+- For an overview of Permissions Management, see [What's Permissions Management?](overview.md)
+- For a list of frequently asked questions (FAQs) about Permissions Management, see [FAQs](faqs.md).
+- For information on how to start viewing information about your authorization system in Permissions Management, see [View key statistics and data about your authorization system](ui-dashboard.md).
active-directory Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-gcp.md
+
+ Title: Onboard a Google Cloud Platform (GCP) project in Permissions Management
+description: How to onboard a Google Cloud Platform (GCP) project on Permissions Management.
+ Last updated : 04/20/2022
+# Onboard a Google Cloud Platform (GCP) project
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
++
+> [!NOTE]
+> The Permissions Management PREVIEW is currently not available for tenants hosted in the European Union (EU).
++
+This article describes how to onboard a Google Cloud Platform (GCP) project on Permissions Management.
+
+> [!NOTE]
+> A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable Permissions Management on your Azure Active Directory tenant](onboard-enable-tenant.md).
+
+## View a training video on configuring and onboarding a GCP account
+
+To view a video on how to configure and onboard GCP accounts in Permissions Management, select [Configure and onboard GCP accounts](https://www.youtube.com/watch?app=desktop&v=W3epcOaec28).
++
+## Onboard a GCP project
+
+1. If the **Data Collectors** dashboard isn't displayed when Permissions Management launches:
+
+ - In the Permissions Management home page, select **Settings** (the gear icon), and then select the **Data Collectors** subtab.
+
+1. On the **Data Collectors** tab, select **GCP**, and then select **Create Configuration**.
+
+### 1. Create an Azure AD OIDC app.
+
+1. On the **Permissions Management Onboarding - Azure AD OIDC App Creation** page, enter the **OIDC Azure App Name**.
+
+    This app is used to set up an OpenID Connect (OIDC) connection to your GCP project. OIDC is an interoperable authentication protocol based on the OAuth 2.0 family of specifications. The generated script creates an app with this name in your Azure AD tenant with the right configuration.
+
+1. To create the app registration, copy the script and run it in your command-line app.
+
+ > [!NOTE]
+ > 1. To confirm that the app was created, open **App registrations** in Azure and, on the **All applications** tab, locate your app.
+    > 1. Select the app name to open the **Expose an API** page. The **Application ID URI** displayed there is the *audience value* used while making an OIDC connection with your GCP project.
+
+    1. Return to Permissions Management, and in the **Permissions Management Onboarding - Azure AD OIDC App Creation** page, select **Next**.
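+
+If you have the Azure CLI available, the confirmation described in the note above can also be done from the command line. This is a hedged sketch: the display name placeholder is whatever you entered in the **OIDC Azure App Name** box.
+
+```bash
+# Confirm the OIDC app registration exists and read its Application ID URI
+# (the audience value used for the OIDC connection).
+az ad app list --display-name "<your-oidc-app-name>" \
+  --query "[].{appId:appId, identifierUris:identifierUris}" -o json
+```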
+
+### 2. Set up a GCP OIDC project.
+
+1. In the **Permissions Management Onboarding - GCP OIDC Account Details & IDP Access** page, enter the **OIDC Project ID** and **OIDC Project Number** of the GCP project in which the OIDC provider and pool will be created. You can change the role name to your requirements.
+
+ > [!NOTE]
+ > You can find the **Project number** and **Project ID** of your GCP project on the GCP **Dashboard** page of your project in the **Project info** panel.
+
+1. You can change the **OIDC Workload Identity Pool Id**, **OIDC Workload Identity Pool Provider Id** and **OIDC Service Account Name** to meet your requirements.
+
+ Optionally, specify **G-Suite IDP Secret Name** and **G-Suite IDP User Email** to enable G-Suite integration.
+
+ You can either download and run the script at this point or you can do it in the Google Cloud Shell, as described [later in this article](onboard-gcp.md#4-run-scripts-in-cloud-shell-optional-if-not-already-executed).
+1. Select **Next**.
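+
+For orientation, the following is a hedged sketch of the kind of resources the downloaded script automates. All names, the issuer URI, and the attribute mapping are placeholders; the script generated for you is authoritative.
+
+```bash
+# Create a workload identity pool, then an OIDC provider in that pool that
+# trusts the Azure AD app (issuer and audience are placeholders).
+gcloud iam workload-identity-pools create mciem-pool \
+  --location="global" --display-name="mciem-pool"
+gcloud iam workload-identity-pools providers create-oidc mciem-provider \
+  --location="global" \
+  --workload-identity-pool="mciem-pool" \
+  --issuer-uri="https://sts.windows.net/<TENANT_ID>/" \
+  --allowed-audiences="<APPLICATION_ID_URI>" \
+  --attribute-mapping="google.subject=assertion.sub"
+```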
+
+### 3. Set up GCP member projects.
+
+1. In the **Permissions Management Onboarding - GCP Project Ids** page, enter the **Project IDs**.
+
+ You can enter up to 10 GCP project IDs. Select the plus icon next to the text box to insert more project IDs.
+
+1. You can choose to download and run the script at this point, or you can do it via Google Cloud Shell, as described in the [next step](onboard-gcp.md#4-run-scripts-in-cloud-shell-optional-if-not-already-executed).
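+
+As a rough, hedged sketch, the member-project script performs a grant along these lines for each project ID you entered. The role and service account name below are placeholders; the downloaded script is authoritative.
+
+```bash
+# Give the Permissions Management service account read access to one member project.
+gcloud projects add-iam-policy-binding MEMBER_PROJECT_ID \
+  --member="serviceAccount:mciem-sa@OIDC_PROJECT_ID.iam.gserviceaccount.com" \
+  --role="roles/viewer"
+```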
+
+### 4. Run scripts in Cloud Shell. (Optional if not already executed)
+
+1. In the **Permissions Management Onboarding - GCP Project Ids** page, select **Launch SSH**.
+1. To copy all your scripts into your current directory, in **Open in Cloud Shell**, select **Trust repo**, and then select **Confirm**.
+
+ The Cloud Shell provisions the Cloud Shell machine and makes a connection to your Cloud Shell instance.
+
+ > [!NOTE]
+ > Follow the instructions in the browser as they may be different from the ones given here.
+
+ The **Welcome to Permissions Management GCP onboarding** screen appears, displaying steps you must complete to onboard your GCP project.
+
+### 5. Paste the environment variables from the Permissions Management portal.
+
+1. Return to Permissions Management and select **Copy export variables**.
+1. In the GCP Onboarding shell editor, paste the variables you copied, and then press **Enter**.
+1. Execute the **gcloud auth login** command.
+1. Follow the instructions displayed on the screen to authorize access to your Google account.
+1. Execute **sh mciem-workload-identity-pool.sh** to create the workload identity pool, provider, and service account.
+1. Execute **sh mciem-member-projects.sh** to give Permissions Management permissions to access each of the member projects.
+
+ - If you want to manage permissions through Permissions Management, select **Y** to **Enable controller**.
+
+ - If you want to onboard your projects in read-only mode, select **N** to **Disable controller**.
+
+1. Optionally, execute **mciem-enable-gcp-api.sh** to enable all recommended GCP APIs.
+
+1. Return to **Permissions Management Onboarding - GCP Project Ids**, and then select **Next**.
+
+### 6. Review and save.
+
+1. In the **Permissions Management Onboarding - Summary** page, review the information you've added, and then select **Verify Now & Save**.
+
+ The following message appears: **Successfully Created Configuration.**
+
+ On the **Data Collectors** tab, the **Recently Uploaded On** column displays **Collecting**. The **Recently Transformed On** column displays **Processing.**
+
+ You have now completed onboarding GCP, and Permissions Management has started collecting and processing your data.
+
+### 7. View the data.
+
+- To view the data, select the **Authorization Systems** tab.
+
+ The **Status** column in the table displays **Collecting Data.**
+
+ The data collection process may take some time, depending on the size of the account and how much data is available for collection.
+++
+## Next steps
+
+- For information on how to onboard an Amazon Web Services (AWS) account, see [Onboard an Amazon Web Services (AWS) account](onboard-aws.md).
+- For information on how to onboard a Microsoft Azure subscription, see [Onboard a Microsoft Azure subscription](onboard-azure.md).
+- For information on how to enable or disable the controller after onboarding is complete, see [Enable or disable the controller](onboard-enable-controller-after-onboarding.md).
+- For information on how to add an account/subscription/project after onboarding is complete, see [Add an account/subscription/project after onboarding is complete](onboard-add-account-after-onboarding.md).
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/overview.md
+
+ Title: What's Permissions Management?
+description: An introduction to Permissions Management.
+ Last updated : 04/20/2022
+# What's Permissions Management?
++
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+> [!NOTE]
+> The Permissions Management PREVIEW is currently not available for tenants hosted in the European Union (EU).
+
+## Overview
+
+Permissions Management is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities (for example, over-privileged workload and user identities), actions, and resources across multi-cloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP).
+
+Permissions Management detects, automatically right-sizes, and continuously monitors unused and excessive permissions.
+
+Organizations have to consider permissions management as a central piece of their Zero Trust security to implement least privilege access across their entire infrastructure:
+
+- Organizations are increasingly adopting a multi-cloud strategy and are struggling with the lack of visibility and the increasing complexity of managing access permissions.
+- With the proliferation of identities and cloud services, the number of high-risk cloud permissions is exploding, expanding the attack surface for organizations.
+- IT security teams are under increased pressure to ensure access to their expanding cloud estate is secure and compliant.
+- The inconsistency of cloud providers' native access management models makes it even more complex for security and identity teams to manage permissions and enforce least privilege access policies across the entire environment.
++
+## Key use cases
+
+Permissions Management allows customers to address three key use cases: *discover*, *remediate*, and *monitor*.
+
+### Discover
+
+Customers can assess permission risks by evaluating the gap between permissions granted and permissions used.
+
+- Cross-cloud permissions discovery: Granular and normalized metrics for key cloud platforms: AWS, Azure, and GCP.
+- Permission Creep Index (PCI): An aggregated metric that periodically evaluates the level of risk associated with the number of unused or excessive permissions across your identities and resources. It measures how much damage identities can cause based on the permissions they have.
+- Permission usage analytics: Multi-dimensional view of permissions risk for all identities, actions, and resources.
+
+### Remediate
+
+Customers can right-size permissions based on usage, grant new permissions on-demand, and automate just-in-time access for cloud resources.
+
+- Automated deletion of permissions unused for the past 90 days.
+- Permissions on-demand: Grant identities permissions on-demand for a time-limited period or an as-needed basis.
++
+### Monitor
+
+Customers can detect anomalous activities with machine learning-powered (ML-powered) alerts and generate detailed forensic reports.
+
+- ML-powered anomaly detections.
+- Context-rich forensic reports around identities, actions, and resources to support rapid investigation and remediation.
+
+Permissions Management deepens Zero Trust security strategies by augmenting the least privilege access principle, allowing customers to:
+
+- Get comprehensive visibility: Discover which identity is doing what, where, and when.
+- Automate least privilege access: Use access analytics to ensure identities have the right permissions, at the right time.
+- Unify access policies across infrastructure as a service (IaaS) platforms: Implement consistent security policies across your cloud infrastructure.
+++
+## Next steps
+
+- For information on how to onboard Permissions Management for your organization, see [Enable Permissions Management in your organization](onboard-enable-tenant.md).
+- For a list of frequently asked questions (FAQs) about Permissions Management, see [FAQs](faqs.md).
active-directory Product Account Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-account-explorer.md
+
+ Title: View roles and identities that can access account information from an external account
+description: How to view information about identities that can access accounts from an external account in Permissions Management.
+ Last updated : 02/23/2022
+# View roles and identities that can access account information from an external account
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+You can view information about users, groups, and resources that can access account information from an external account in Permissions Management.
+
+## Display information about users, groups, or tasks
+
+1. In Permissions Management, select the **Usage analytics** tab, and then, from the dropdown, select one of the following:
+
+ - **Users**
+ - **Group**
+ - **Active resources**
+ - **Active tasks**
+ - **Serverless functions**
+
+1. To choose an account from your authorization system, select the lock icon in the left panel.
+1. In the **Authorization systems** pane, select an account, then select **Apply**.
+1. To choose a user, role, or group, select the person icon.
+1. Select a user or group, then select **Apply**.
+1. In the user type filter, select **User**, **Role**, or **Group**.
+1. In the **Task** filter, select **All** or **High-risk tasks**, then select **Apply**.
+1. To delete a task, select **Delete**, then select **Apply**.
+
+## Export information about users, groups, or tasks
+
+To export the data in comma-separated values (CSV) file format, select **Export** in the top-right corner of the table.
+
+## View users and roles
+1. To view users and roles, select the lock icon, and then select the person icon to open the **Users** pane.
+1. To view the **Role summary**, select the "eye" icon to the right of the role name.
+
+ The following details display:
+ - **Policies**: A list of all the policies attached to the role.
+ - **Trusted entities**: The identities from external accounts that can assume this role.
+
+1. To view all the identities from various accounts that can assume this role, select the down arrow to the left of the role name.
+1. To view a graph of all the identities that can access the specified account and through which role(s), select the role name.
+
+ If Permissions Management is monitoring the external account, it lists specific identities from the accounts that can assume this role. Otherwise, it lists the identities declared in the **Trusted entity** section.
+
+ **Connecting roles**: Lists the following roles for each account:
+ - *Direct roles* that are trusted by the account role.
+ - *Intermediary roles* that aren't directly trusted by the account role but are assumable by identities through role-chaining.
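+
+   Role-chaining can be illustrated with a hedged AWS CLI sketch; all role ARNs and session names here are hypothetical, not values from this article:
+
+   ```bash
+   # An identity first assumes an intermediary role...
+   aws sts assume-role \
+     --role-arn arn:aws:iam::222233334444:role/intermediary-role \
+     --role-session-name hop1
+   # ...then uses the returned temporary credentials to assume the target
+   # role that trusts the intermediary.
+   aws sts assume-role \
+     --role-arn arn:aws:iam::555566667777:role/target-account-role \
+     --role-session-name hop2
+   ```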
+
+1. To view all the roles from that account that are used to access the specified account, select the down arrow to the left of the account name.
+1. To view the trusted identities declared by the role, select the down arrow to the left of the role name.
+
+ The trusted identities for the role are listed only if the account is being monitored by Permissions Management.
+
+1. To view the role definition, select the "eye" icon to the right of the role name.
+
+ When you select the down arrow and expand details, a search box is displayed. Enter your criteria in this box to search for specific roles.
+
+ **Identities with access**: Lists the identities that come from external accounts:
+    - To view all the identities from that account that can access the specified account, select the down arrow to the left of the account name.
+ - To view the **Role summary** for EC2 instances and Lambda functions, select the "eye" icon to the right of the identity name.
+ - To view a graph of how the identity can access the specified account and through which role(s), select the identity name.
+
+1. The **Info** tab displays the **Privilege creep index** and **Service control policy (SCP)** information about the account.
+
+For more information about the **Privilege creep index** and SCP information, see [View key statistics and data about your authorization system](ui-dashboard.md).
active-directory Product Account Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-account-settings.md
+
+ Title: View personal and organization information in Permissions Management
+description: How to view personal and organization information in the Account settings dashboard in Permissions Management.
+ Last updated : 02/23/2022
+# View personal and organization information
+
+> [!IMPORTANT]
+> Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Account settings** dashboard in Permissions Management allows you to view personal information, passwords, and account preferences.
+This information can't be modified because the user information is pulled from Azure AD. Only the **User Session Timeout (min)** setting can be changed.
+
+## View personal information
+
+1. In the Permissions Management home page, select the down arrow to the right of the **User** (your initials) menu, and then select **Account Settings**.
+
+ The **Personal Information** box displays your **First Name**, **Last Name**, and the **Email Address** that was used to register your account on Permissions Management.
+
+## View current organization information
+
+1. In the Permissions Management home page, select the down arrow to the right of the **User** (your initials) menu, and then select **Account Settings**.
+
+ The **Current Organization Information** displays the **Name** of your organization, the **Tenant ID** box, and the **User Session Timeout (min)**.
+
+1. To change the duration of the **User Session Timeout (min)**, select **Edit** (the pencil icon), and then enter the number of minutes before you want a user session to time out.
+1. Select the check mark to confirm your new setting.
++
+## Next steps
+
+- For information about how to manage user information, see [Manage users and groups with the User management dashboard](ui-user-management.md).
+- For information about how to view information about active and completed tasks, see [View information about active and completed tasks](ui-tasks.md).
+- For information about how to select group-based permissions settings, see [Select group-based permissions settings](how-to-create-group-based-permissions.md).
active-directory Product Audit Trail https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-audit-trail.md
+
+ Title: Filter and query user activity in Permissions Management
+description: How to filter and query user activity in Permissions Management.
+ Last updated : 02/23/2022
+# Filter and query user activity
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Audit** dashboard in Permissions Management details all user activity performed in your authorization system. It captures all high-risk activity in a centralized location, and allows system administrators to query the logs. The **Audit** dashboard enables you to:
+
+- Create and save new queries so you can access key data points easily.
+- Query across multiple authorization systems in one query.
+
+## Filter information by authorization system
+
+If you haven't used filters before, the default filter is the first authorization system in the filter list.
+
+If you have used filters before, the default filter is the last filter you selected.
+
+1. To display the **Audit** dashboard, on the Permissions Management home page, select **Audit**.
+
+1. To select your authorization system type, in the **Authorization System Type** box, select Amazon Web Services (**AWS**), Microsoft Azure (**Azure**), Google Cloud Platform (**GCP**), or Platform (**Platform**).
+
+1. To select your authorization system, in the **Authorization System** box:
+
+ - From the **List** subtab, select the accounts you want to use.
+ - From the **Folders** subtab, select the folders you want to use.
+
+1. To view your query results, select **Apply**.
+
+## Create, view, modify, or delete a query
+
+There are several different query parameters you can configure individually or in combination. The query parameters and corresponding instructions are listed in the following sections.
+
+- To create a new query, select **New Query**.
+- To view an existing query, select **View** (the eye icon).
+- To edit an existing query, select **Edit** (the pencil icon).
+- To delete a function line in a query, select **Delete** (the minus sign **-** icon).
+- To create multiple queries at one time, select **Add New Tab** to the right of the **Query** tabs that are displayed.
+
+    You can open a maximum of six query tabs at the same time. A message appears when you've reached the maximum.
+
+## Create a query with specific parameters
+
+### Create a query with a date
+
+1. In the **New Query** section, the default parameter displayed is **Date In "Last day"**.
+
+ The first-line parameter always defaults to **Date** and can't be deleted.
+
+1. To edit date details, select **Edit** (the pencil icon).
+
+ To view query details, select **View** (the eye icon).
+
+1. Select **Operator**, and then select an option:
+ - **In**: Select this option to set a time range from the past day to the past year.
+ - **Is**: Select this option to choose a specific date from the calendar.
+ - **Custom**: Select this option to set a date range from the **From** and **To** calendars.
+
+1. To run the query on the current selection, select **Search**.
+
+1. To save your query, select **Save**.
+
+ To clear the recent selections, select **Reset**.
+
+### View operator options for identities
+
+The **Operator** menu displays the following options depending on the identity you select in the first dropdown:
+
+- **Is** / **Is Not**: View a list of all available usernames. You can either select or enter a username in the box.
+- **Contains** / **Not Contains**: Enter text that the **Username** should or shouldn't contain, for example, *Permissions Management*.
+- **In** / **Not In**: View a list of all available usernames and select multiple usernames.
+
+### Create a query with a username
+
+1. In the **New query** section, select **Add**.
+
+1. From the menu, select **Username**.
+
+1. From the **Operator** menu, select the required option.
+
+1. To add criteria to this section, select **Add**.
+
+ You can change the operation between **And** / **Or** statements, and select other criteria. For example, the first set of criteria selected can be **Is** with the username **Test**.
+
+1. Select the plus (**+**) sign, select **Or** with **Contains**, and then enter a username, for example, *Permissions Management*.
+
+1. To remove a row of criteria, select **Remove** (the minus sign **-** icon).
+
+1. To run the query on the current selection, select **Search**.
+
+1. To clear the recent selections, select **Reset**.
+
+### Create a query with a resource name
+
+1. In the **New query** section, select **Add**.
+
+1. From the menu, select **Resource Name**.
+
+1. From the **Operator** menu, select the required option.
+
+1. To add criteria to this section, select **Add**.
+
+ You can change the operation between **And** / **Or** statements, and select other criteria. For example, the first set of criteria selected can be **Is** with resource name **Test**.
+
+1. Select the plus (**+**) sign, select **Or** with **Contains**, and then enter your criteria, for example, *Permissions Management*.
+
+1. To remove a row of criteria, select **Remove** (the minus sign **-** icon).
+
+1. To run the query on the current selection, select **Search**.
+
+1. To clear the recent selections, select **Reset**.
+
+### Create a query with a resource type
+
+1. In the **New Query** section, select **Add**.
+
+1. From the menu, select **Resource Type**.
+
+1. From the **Operator** menu, select the required option.
+
+1. To add criteria to this section, select **Add**.
+
+1. Change the operation between **And** / **Or** statements, and select other criteria. For example, the first set of criteria selected can be **Is** with resource type **s3::bucket**.
+
+1. Select the plus (**+**) sign, select **Or** with **Is**, and then enter or select `ec2::instance`.
+
+1. To remove a row of criteria, select **Remove** (the minus sign **-** icon).
+
+1. To run the query on the current selection, select **Search**.
+
+1. To clear the recent selections, select **Reset**.
++
+### Create a query with a task name
+
+1. In the **New Query** section, select **Add**.
+
+1. From the menu, select **Task Name**.
+
+1. From the **Operator** menu, select the required option.
+
+1. To add criteria to this section, select **Add**.
+
+1. Change the operation between **And** / **Or** statements, and select other criteria. For example, the first set of criteria selected can be **Is** with task name **s3:CreateBucket**.
+
+1. Select **Add**, select **Or** with **Is**, and then enter or select `ec2:TerminateInstance`.
+
+1. To remove a row of criteria, select **Remove** (the minus sign **-** icon).
+
+1. To run the query on the current selection, select **Search**.
+
+1. To clear the recent selections, select **Reset**.
+
+### Create a query with a state
+
+1. In the **New Query** section, select **Add**.
+
+1. From the menu, select **State**.
+
+1. From the **Operator** menu, select the required option.
+
+    - **Is** / **Is not**: Allows a user to select a value from the value field: **Authorization Failure**, **Error**, or **Success**.
+
+1. To add criteria to this section, select **Add**.
+
+1. Change the operation between **And** / **Or** statements, and select other criteria. For example, the first set of criteria selected can be **Is** with State **Authorization Failure**.
+
+1. Select the **Add** icon, select **Or** with **Is**, and then select **Success**.
+
+1. To remove a row of criteria, select **Remove** (the minus sign **-** icon).
+
+1. To run the query on the current selection, select **Search**.
+
+1. To clear the recent selections, select **Reset**.
+
+### Create a query with a role name
+
+1. In the **New query** section, select **Add**.
+
+2. From the menu, select **Role Name**.
+
+3. From the **Operator** menu, select the required option.
+
+4. To add criteria to this section, select **Add**.
+
+5. Change the operation between **And** / **Or** statements, and select other criteria. For example, the first set of criteria selected can be **Contains** with free text **Test**.
+
+6. Select the **Add** icon, select **Or** with **Contains**, and then enter your criteria, for example *Permissions Management*.
+
+7. To remove a row of criteria, select **Remove** (the minus sign **-** icon).
+
+8. To run the query on the current selection, select **Search**.
+
+9. To clear the recent selections, select **Reset**.
+
+### Create a query with a role session name
+
+1. In the **New Query** section, select **Add**.
+
+2. From the menu, select **Role Session Name**.
+
+3. From the **Operator** menu, select the required option.
+
+4. To add criteria to this section, select **Add**.
+
+5. Change the operation between **And** / **Or** statements, and select other criteria. For example, the first set of criteria selected can be **Contains** with free text **Test**.
+
+6. Select the **Add** icon, select **Or** with **Contains**, and then enter your criteria, for example *Permissions Management*.
+
+7. To remove a row of criteria, select **Remove** (the minus sign **-** icon).
+
+8. To run the query on the current selection, select **Search**.
+
+9. To clear the recent selections, select **Reset**.
+
+### Create a query with an access key ID
+
+1. In the **New Query** section, select **Add**.
+
+2. From the menu, select **Access Key ID**.
+
+3. From the **Operator** menu, select the required option.
+
+4. To add criteria to this section, select **Add**.
+
+5. Change the operation between **And** / **Or** statements, and select other criteria. For example, the first set of criteria selected can be **Contains** with free text `AKIAIFXNDW2Z2MPEH5OQ`.
+
+6. Select the **Add** icon, select **Or** with **Not Contains**, and then enter `AKIAVP2T3XG7JUZRM7WU`.
+
+7. To remove a row of criteria, select **Remove** (the minus sign **-** icon).
+
+8. To run the query on the current selection, select **Search**.
+
+9. To clear the recent selections, select **Reset**.
+
+### Create a query with a tag key
+
+1. In the **New Query** section, select **Add**.
+
+2. From the menu, select **Tag Key**.
+
+3. From the **Operator** menu, select the required option.
+
+4. To add criteria to this section, select **Add**.
+
+5. Change the operation between **And** / **Or** statements, and select other criteria. For example, the first set of criteria selected can be **Is**; then type in or select **Test**.
+
+6. Select the **Add** icon, select **Or** with **Is**, and then enter your criteria, for example *Permissions Management*.
+
+7. To remove a row of criteria, select **Remove** (the minus sign **-** icon).
+
+8. To run the query on the current selection, select **Search**.
+
+9. To clear the recent selections, select **Reset**.
+
+### Create a query with a tag key value
+
+1. In the **New Query** section, select **Add**.
+
+2. From the menu, select **Tag Key Value**.
+
+3. From the **Operator** menu, select the required option.
+
+4. To add criteria to this section, select **Add**.
+
+5. Change the operation between **And** / **Or** statements, and select other criteria. For example, the first set of criteria selected can be **Is**; then type in or select **Test**.
+
+6. Select the **Add** icon, select **Or** with **Is**, and then enter your criteria, for example *Permissions Management*.
+
+7. To remove a row of criteria, select **Remove** (the minus sign **-** icon).
+
+8. To run the query on the current selection, select **Search**.
+
+9. To clear the recent selections, select **Reset**.
+
+### View query results
+
+1. In the **Activity** table, your query results display in columns.
+
+ The results display all executed tasks that aren't read-only.
+
+1. To sort each column by ascending or descending value, select the up or down arrows next to the column name.
+
+ - **Identity Details**: The name of the identity, for example the name of the role session performing the task.
+
+      - To view the **Raw Events Summary**, which displays the full details of the event, select **View** next to the **Name** column.
+
+ - **Resource Name**: The name of the resource on which the task is being performed.
+
+ If the column displays **Multiple**, it means multiple resources are listed in the column.
+
+      To view a list of all resources, hover over **Multiple**.
+
+ - **Resource Type**: Displays the type of resource, for example, *Key* (encryption key) or *Bucket* (storage).
+ - **Task Name**: The name of the task that was performed by the identity.
+
+ An exclamation mark (**!**) next to the task name indicates that the task failed.
+
+ - **Date**: The date when the task was performed.
+
+ - **IP Address**: The IP address from where the user performed the task.
+
+ - **Authorization System**: The authorization system name in which the task was performed.
+
+1. To download the results in comma-separated values (CSV) file format, select **Download**.
+
+## Save a query
+
+1. After you complete your query selections from the **New Query** section, select **Save**.
+
+2. In the **Query Name** box, enter a name for your query, and then select **Save**.
+
+3. To save a query with a different name, select the ellipses (**...**) next to **Save**, and then select **Save As**.
+
+4. Make your query selections from the **New Query** section, select the ellipses (**...**), and then select **Save As**.
+
+5. To save a new query, in the **Save Query** box, enter the name for the query, and then select **Save**.
+
+6. To save an existing query you've modified, select the ellipses (**...**).
+
+ - To save a modified query under the same name, select **Save**.
+ - To save a modified query under a different name, select **Save As**.
+
+### View a saved query
+
+1. Select **Saved Queries**, and then select a query from the **Load Queries** list.
+
+ A message box opens with the following options: **Load with the saved authorization system** or **Load with the currently selected authorization system**.
+
+1. Select the appropriate option, and then select **Load Queries**.
+
+1. View the query information:
+
+ - **Query Name**: Displays the name of the saved query.
+ - **Query Type**: Displays whether the query is a *System* query or a *Custom* query.
+ - **Schedule**: Displays how often a report will be generated. You can schedule a one-time report or a monthly report.
+ - **Next On**: Displays the date and time the next report will be generated.
+ - **Format**: Displays the output format for the report, for example, CSV.
+    - **Last Modified On**: Displays the date on which the query was last modified.
+
+1. To view or set schedule details, select the gear icon, select **Create Schedule**, and then set the details.
+
+ If a schedule has already been created, select the gear icon to open the **Edit Schedule** box.
+
+ - **Repeat**: Sets how often the report should repeat.
+ - **Start On**: Sets the date when you want to receive the report.
+ - **At**: Sets the specific time when you want to receive the report.
+ - **Report Format**: Select the output type for the file, for example, CSV.
+ - **Share Report With**: The email address of the user who is creating the schedule is displayed in this field. You can add other email addresses.
+
+1. After selecting your options, select **Schedule**.
++
+### Save a query under a different name
+
+- Select the ellipses (**...**).
+
+ System queries have only one option:
+
+ - **Duplicate**: Creates a duplicate of the query and names the file *Copy of XXX*.
+
+ Custom queries have the following options:
+
+ - **Rename**: Enter the new name of the query and select **Save**.
+ - **Delete**: Delete the saved query.
+
+ The **Delete Query** box opens, asking you to confirm that you want to delete the query. Select **Yes** or **No**.
+
+ - **Duplicate**: Creates a duplicate of the query and names it *Copy of XXX*.
+ - **Delete Schedule**: Deletes the schedule details for this query.
+
+ This option isn't available if you haven't yet saved a schedule.
+
+ The **Delete Schedule** box opens, asking you to confirm that you want to delete the schedule. Select **Yes** or **No**.
++
+## Export the results of a query as a report
+
+- To export the results of the query, select **Export**.
+
+ Permissions Management exports the results in comma-separated values (**CSV**) format, portable document format (**PDF**), or Microsoft Excel Open XML Spreadsheet (**XLSX**) format.
++
+## Next steps
+
+- For information on how to view how users access information, see [Use queries to see how users access information](ui-audit-trail.md).
+- For information on how to create a query, see [Create a custom query](how-to-create-custom-queries.md).
+- For information on how to generate an on-demand report from a query, see [Generate an on-demand report from a query](how-to-audit-trail-results.md).
active-directory Product Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-dashboard.md
+
+ Title: View data about the activity in your authorization system in Permissions Management
+description: How to view data about the activity in your authorization system in the Permissions Management Dashboard in Permissions Management.
+ Last updated : 02/23/2022
+# View data about the activity in your authorization system
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The Permissions Management **Dashboard** provides an overview of the authorization system and account activity being monitored. You can use this dashboard to view data collected from your Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) authorization systems.
+
+## View data about your authorization system
+
+1. In the Permissions Management home page, select **Dashboard**.
+1. From the **Authorization systems type** dropdown, select **AWS**, **Azure**, or **GCP**.
+1. Select the **Authorization System** box to display a **List** of accounts and **Folders** available to you.
+1. Select the accounts and folders you want, and then select **Apply**.
+
+ The **Permission Creep Index (PCI)** chart updates to display information about the accounts and folders you selected. The number of days since the information was last updated displays in the upper right corner.
+
+1. In the Permission Creep Index (PCI) graph, select a bubble.
+
+ The bubble displays the number of identities that are considered high-risk.
+
+ *High-risk* refers to the number of users who have permissions that exceed their normal or required usage.
+
+1. Select the box to display detailed information about the identities contributing to the **Low PCI**, **Medium PCI**, and **High PCI**.
+
+1. The **Highest PCI change** displays the authorization system name with the PCI number and the change number for the last seven days, if applicable.
+
+ - To view all the changes and PCI ratings in your authorization system, select **View all**.
+
+1. To return to the PCI graph, select the **Graph** icon in the upper right of the list box.
+
+For more information about the Permissions Management **Dashboard**, see [View key statistics and data about your authorization system](ui-dashboard.md).
+
+## View user data on the PCI heat map
+
+The **Permission Creep Index (PCI)** heat map shows the incurred risk of users with access to high-risk privileges. The distribution graph displays all the users who contribute to the privilege creep. It displays how many users contribute to a particular score. For example, if the score from the PCI chart is 14, the graph shows how many users have a score of 14.
+
+- To view detailed data about a user, select the number.
+
+ The PCI trend graph shows you the historical trend of the PCI score over the last 90 days.
+
+- To download the **PCI History** report, select **Download** (the down arrow icon).
++
+## View information about users, roles, resources, and PCI trends
+
+To view specific information about any of the following categories, select the number displayed on the heat map.
+
+- **Users**: Displays the total number of users and how many fall into the high, medium, and low categories.
+- **Roles**: Displays the total number of roles and how many fall into the high, medium, and low categories.
+- **Resources**: Displays the total number of resources and how many fall into the high, medium, and low categories.
+- **PCI trend**: Displays a line graph of the PCI trend over the last several weeks.
+
+## View identity findings
+
+The **Identity** section below the heat map on the left side of the page shows all the relevant findings about identities, including roles that can access secret information, roles that are inactive, overprovisioned active roles, and so on.
+
+- To expand the full list of identity findings, select **All findings**.
+
+## View resource findings
+
+The **Resource** section below the heat map on the right side of the page shows all the relevant findings about your resources. It includes unencrypted S3 buckets, open security groups, managed keys, and so on.
+
+## Next steps
+
+- For more information about how to view key statistics and data in the Dashboard, see [View key statistics and data about your authorization system](ui-dashboard.md).
active-directory Product Data Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-data-inventory.md
+
+ Title: Display an inventory of created resources and licenses for your authorization system
+description: How to display an inventory of created resources and licenses for your authorization system in Permissions Management.
+ Last updated : 02/23/2022
+# Display an inventory of created resources and licenses for your authorization system
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+You can use the **Inventory** dashboard in Permissions Management to display an inventory of created resources and licensing information for your authorization system and its associated accounts.
+
+## View resources created for your authorization system
+
+1. To access your inventory information, in the Permissions Management home page, select **Settings** (the gear icon).
+1. Select the **Inventory** tab, select the **Inventory** subtab, and then select your authorization system type:
+
+ - **AWS** for Amazon Web Services.
+ - **Azure** for Microsoft Azure.
+ - **GCP** for Google Cloud Platform.
+
+ The **Inventory** tab displays information pertinent to your authorization system type.
+
+1. To change the columns displayed in the table, select **Columns**, and then select the information you want to display.
+
+ - To discard your changes, select **Reset to default**.
+
+## View the number of licenses associated with your authorization system
+
+1. To access licensing information about your data sources, in the Permissions Management home page, select **Settings** (the gear icon).
+
+1. Select the **Inventory** tab, select the **Licensing** subtab, and then select your authorization system type.
+
+ The **Licensing** table displays the following information pertinent to your authorization system type:
+
+ - The names of your accounts in the **Authorization system** column.
+ - The number of **Compute** licenses.
+ - The number of **Serverless** licenses.
+ - The number of **Compute containers**.
+ - The number of **Databases**.
+ - The **Total number of licenses**.
++
+## Next steps
+
+- For information about viewing and configuring settings for collecting data from your authorization system and its associated accounts, see [View and configure settings for data collection](product-data-sources.md).
active-directory Product Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-data-sources.md
+
+ Title: View and configure settings for data collection from your authorization system in Permissions Management
+description: How to view and configure settings for collecting data from your authorization system in Permissions Management.
+ Last updated : 02/23/2022
+# View and configure settings for data collection
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
++
+You can use the **Data Collectors** dashboard in Permissions Management to view and configure settings for collecting data from your authorization systems. It also provides information about the status of the data collection.
+
+## Access and view data sources
+
+1. To access your data sources, in the Permissions Management home page, select **Settings** (the gear icon). Then select the **Data Collectors** tab.
+
+1. On the **Data Collectors** dashboard, select your authorization system type:
+
+ - **AWS** for Amazon Web Services.
+ - **Azure** for Microsoft Azure.
+ - **GCP** for Google Cloud Platform.
+
+1. To display specific information about an account:
+
+ 1. Enter the following information:
+
+ - **Uploaded on**: Select **All** accounts, **Online** accounts, or **Offline** accounts.
+ - **Transformed on**: Select **All** accounts, **Online** accounts, or **Offline** accounts.
+ - **Search**: Enter an ID or Internet Protocol (IP) address to find a specific account.
+
+ 1. Select **Apply** to display the results.
+
+ Select **Reset Filter** to discard your settings.
+
+1. The following information displays:
+
+ - **ID**: The unique identification number for the data collector.
+ - **Data types**: Displays the data types that are collected:
+ - **Entitlements**: The permissions of all identities and resources for all the configured authorization systems.
+ - **Recently uploaded on**: Displays whether the entitlement data is being collected.
+
+ The status displays *ONLINE* if the data collection has no errors and *OFFLINE* if there are errors.
+ - **Recently transformed on**: Displays whether the entitlement data is being processed.
+
+ The status displays *ONLINE* if the data processing has no errors and *OFFLINE* if there are errors.
+ - The **Tenant ID**.
+ - The **Tenant name**.
+
+## Modify a data collector
+
+1. Select the ellipses **(...)** at the end of the row in the table.
+1. Select **Edit Configuration**.
+
+ The **Permissions Management Onboarding - Summary** box displays.
+
+1. Select **Edit** (the pencil icon) for each field you want to change.
+1. Select **Verify now & save**.
+
+ To verify your changes later, select **Save & verify later**.
+
+ When your changes are saved, the following message displays: **Successfully updated configuration.**
+
+## Delete a data collector
+
+1. Select the ellipses **(...)** at the end of the row in the table.
+1. Select **Delete Configuration**.
+
+ The **Permissions Management Onboarding - Summary** box displays.
+1. Select **Delete**.
+1. Check your email for a one-time password (OTP) code, and enter it in **Enter OTP**.
+
+ If you don't receive an OTP, select **Resend OTP**.
+
+ The following message displays: **Successfully deleted configuration.**
+
+## Start collecting data from an authorization system
+
+1. Select the **Authorization Systems** tab, and then select your authorization system type.
+1. Select the ellipses **(...)** at the end of the row in the table.
+1. Select **Collect Data**.
+
+ A message displays to confirm data collection has started.
+
+## Stop collecting data from an authorization system
+
+1. Select the ellipses **(...)** at the end of the row in the table.
+1. To delete your authorization system, select **Delete**.
+
+ The **Validate OTP To Delete Authorization System** box displays.
+
+1. Enter the OTP code.
+1. Select **Verify**.
+
+## Next steps
+
+- For information about viewing an inventory of created resources and licensing information for your authorization system, see [Display an inventory of created resources and licenses for your authorization system](product-data-inventory.md).
active-directory Product Define Permission Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-define-permission-levels.md
+
+ Title: Define and manage users, roles, and access levels in Permissions Management
+description: How to define and manage users, roles, and access levels in Permissions Management User management dashboard.
+ Last updated : 02/23/2022
+# Define and manage users, roles, and access levels
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+In Permissions Management, a key component of the interface is the User management dashboard. This topic describes how system administrators can define and manage users, their roles, and their access levels in the system.
+
+## The User management dashboard
+
+The Permissions Management User management dashboard provides a high-level overview of:
+
+- Registered and invited users.
+- Permissions allowed for each user within a given system.
+- Recent user activity.
+
+It also provides the functionality to invite or delete a user, and to edit, view, and customize permissions settings.
++
+## Manage users for customers without SAML integration
+
+Follow this process to invite users if the customer hasn't enabled SAML integration with the Permissions Management application.
+
+### Invite a user to Permissions Management
+
+Inviting a user to Permissions Management adds the user to the system and allows system administrators to assign permissions to those users. Follow the steps below to invite a user to Permissions Management.
+
+1. To invite a user to Permissions Management, select the down caret icon next to the **User** icon on the right of the screen, and then select **User Management**.
+2. From the **Users** tab, select **Invite User**.
+3. From the **Set User Permission** window, in the **User** text box, enter the user's email address.
+4. Under **Permission**, select the applicable option.
+
+ - **Admin for All Authorization System Types**: **View**, **Control**, and **Approve** permissions for all Authorization System Types.
+
+ 1. Select **Next**.
+ 2. Select **Requestor for User** for each authorization system, if applicable.
+
+ A user must have an account with a valid email address in the authorization system to select **Requestor for User**. If a user doesn't exist in the authorization system, **Requestor for User** is grayed out.
+ 3. Optional: To request access for multiple other identities, under **Requestor for Other Users**, select **Add**, and then select **Users**.
+
+ For example, a user may have various roles in different authorization systems, so they can select the **Add** icon and the **Users** icon to request access for all their accounts.
+ 4. On the **Add Users** screen, enter the user's name or ID in the **User Search** box and select all applicable users. Then select **Add**.
+
+ - **Admin for Selected Authorization System Types**: **View**, **Control**, and **Approve** permissions for selected Authorization System Types.
+
+ 1. Select **Viewer**, **Controller**, or **Approver** for the appropriate authorization system(s).
+ 2. Select **Next**.
+ 3. Select **Requestor for User** for each authorization system, if applicable.
+
+ A user must have an account with a valid email address in the authorization system to select **Requestor for User**. If a user doesn't exist in the authorization system, **Requestor for User** is grayed out.
+ 4. Optional: To request access for multiple other identities, under **Requestor for Other Users**, select **Add**, and then select **Users**.
+
+ For example, a user may have various roles in different authorization systems, so they can select **Add**, and then select **Users** to request access for all their accounts.
+ 5. On the **Add Users** screen, enter the user's name or ID in the **User Search** box and select all applicable users. Then select **Add**.
+
+ - **Custom**: **View**, **Control**, and **Approve** permissions for specific accounts in **Auth System Types**.
+
+ 1. Select **Next**.
+
+ The default view displays the **List** section.
+ 2. Select the appropriate boxes for **Viewer**, **Controller**, or **Approver**.
+
+ For access to all authorization system types, select **All (Current and Future)**.
+ 3. Select **Next**.
+ 4. Select **Requestor for User** for each authorization system, if applicable.
+
+ A user must have an account with a valid email address in the authorization system to select **Requestor for User**. If a user doesn't exist in the authorization system, **Requestor for User** is grayed out.
+ 5. Optional: To request access for multiple other identities, under **Requestor for Other Users**, select **Add**, and then select **Users**.
+
+ For example, a user may have various roles in different authorization systems, so they can select **Add**, and then select **Users** to request access for all their accounts.
+ 6. On the **Add Users** screen, enter the user's name or ID in the **User Search** box and select all applicable users. Then select **Add**.
+
+5. Select **Save**.
+
+ The following message displays in green at the top of the screen: **New User Has Been Invited Successfully**.
+
+## Manage users for customers with SAML integration
+
+Follow this process to invite users if the customer has enabled SAML integration with the Permissions Management application.
+
+### Create a permission in Permissions Management
+
+Creating a permission directly in Permissions Management allows system administrators to assign permissions to specific users. The following steps help you to create a permission.
+
+- On the right side of the screen, select the down caret icon next to **User**, and then select **User management**.
+
+- For **Users**:
+ 1. To create permissions for a specific user, select the **Users** tab, and then select **Permission**.
+ 2. From the **Set User Permission** window, enter the user's email address in the **User** text box.
+ 3. Under **Permission**, select the applicable button, and then expand the menu to view instructions for each option.
+ - **Admin for All Authorization System Types**: **View**, **Control**, and **Approve** permissions for all Authorization System Types.
+ 1. Select **Next**.
+ 2. Check **Requestor for User** for each authorization system, if applicable.
+
+ A user must have an account with a valid email address in the authorization system to select **Requestor for User**. If a user doesn't exist in the authorization system, **Requestor for User** is grayed out.
+
+ 3. Optional: To request access for multiple other identities, under **Requestor for Other Users**, select **Add**, and then select **Users**.
+
+ For example, a user may have various roles in different authorization systems, so they can select **Add**, and then select **Users** to request access for all their accounts.
+
+ 4. On the **Add Users** screen, enter the user's name or ID in the **User Search** box and select all applicable users. Then select **Add**.
+
+ - **Admin for Selected Authorization System Types**: **View**, **Control**, and **Approve** permissions for selected Authorization System Types.
+ 1. Check **Viewer**, **Controller**, or **Approver** for the appropriate authorization system(s).
+ 2. Select **Next**.
+ 3. Check **Requestor for User** for each authorization system, if applicable.
+
+ A user must have an account with a valid email address in the authorization system to select **Requestor for User**. If a user doesn't exist in the authorization system, **Requestor for User** is grayed out.
+
+ 4. Optional: To request access for multiple other identities, under **Requestor for Other Users**, select **Add**, and then select **Users**.
+
+ For example, a user may have various roles in different authorization systems, so they can select **Add**, and then select **Users** to request access for all their accounts.
+ 5. On the **Add Users** screen, enter the user's name or ID in the **User Search** box and select all applicable users. Then select **Add**.
+ - **Custom**: **View**, **Control**, and **Approve** permissions for specific accounts in **Auth System Types**.
+
+ 1. Select **Next**.
+
+ The default view displays the **List** tab, which displays individual authorization systems.
+ - To view groups of authorization systems organized into folders, select the **Folder** tab.
+ 2. Check the appropriate boxes for **Viewer**, **Controller**, or **Approver**.
+
+ For access to all authorization system types, select **All (Current and Future)**.
+ 3. Select **Next**.
+ 4. Check **Requestor for User** for each authorization system, if applicable.
+
+ A user must have an account with a valid email address in the authorization system to select **Requestor for User**. If a user doesn't exist in the authorization system, **Requestor for User** is grayed out.
+ 5. Optional: To request access for multiple other identities, under **Requestor for Other Users**, select **Add**, and then select **Users**.
+
+ For example, a user can have various roles in different authorization systems, so they can select **Add**, and then select **Users** to request access for all their accounts.
+
+ 6. On the **Add Users** screen, enter the user's name or ID in the **User Search** box and select all applicable users. Then select **Add**.
+
+ 4. Select **Save**.
+
+ The following message displays in green at the top of the screen:
+ **New User Has Been Created Successfully**.
+ 5. The new user receives an email invitation to sign in to Permissions Management.
+
+### The Pending tab
+
+1. To view the created permission, select the **Pending** tab. The system administrator can view the following details:
+ - **Email Address**: Displays the email address of the invited user.
+ - **Permissions**: Displays each service account and if the user has permissions as a **Viewer**, **Controller**, **Approver**, or **Requestor**.
+ - **Invited By**: Displays the email address of the person who sent the invitation.
+ - **Sent**: Displays the date the invitation was sent to the user.
+2. To make changes to the following, select the ellipses **(...)** in the far right column.
+ - **View Permissions**: Displays a list of accounts for which the user has permissions.
+ - **Edit Permissions**: System administrators can edit a user's permissions.
+ - **Delete**: System administrators can delete a permission.
+ - **Reinvite**: System administrators can resend the invitation if the user didn't receive the email invite.
+
+ When a user registers with Permissions Management, they move from the **Pending** tab to the **Registered** tab.
+
+### The Registered tab
+
+- For **Users**:
+
+ 1. The **Registered** tab provides a high-level overview of user details to system administrators:
+ - The **Name/Email Address** column lists the name and email address of the user.
+ - The **Permissions** column lists each authorization system, and each type of permission.
+
+ If a user has all permissions for all authorization systems, **Admin for All Authorization Types** displays across all columns. If a user has only some permissions, numbers display in each column for which they have permissions. For example, if the number "3" is listed in the **Viewer** column, the user has viewer permission for three accounts within that authorization system.
+ - The **Joined On** column records when the user registered for Permissions Management.
+ - The **Recent Activity** column displays the date when a user last performed an activity.
+ - The **Search** button allows a system administrator to search for a user by name; all users who match the criteria display.
+ - The **Filters** option allows a system administrator to filter by specific details. When the filter option is selected, the **Authorization System** box displays.
+
+ To display all authorization system accounts, select **All**. Then select the appropriate boxes for the accounts you want to view.
+ 2. To make changes to the following, select the ellipses **(...)** in the far right column:
+ - **View Permissions**: Displays a list of accounts for which the user has permissions.
+ - **Edit Permissions**: System administrators can edit the accounts for which a user has permissions.
+ - **Remove Permissions**: System administrators can remove permissions from a user.
+
+- For **Groups**:
+ 1. To create permissions for a specific user, select the **Groups** tab, and then select **Permission**.
+ 2. From the **Set Group Permission** window, enter the name of the group in the **Group Name** box.
+
+ The identity provider creates groups.
+
+ Some users may be part of multiple groups. In this case, the user's overall permissions are the union of the permissions assigned to the groups the user is a member of (see the sketch after these steps).
+ 3. Under **Permission**, select the applicable button and expand the menu to view instructions for each option.
+
+ - **Admin for All Authorization System Types**: **View**, **Control**, and **Approve** permissions for all Authorization System Types.
+ 1. Select **Next**.
+ 2. Check **Requestor for User** for each authorization system, if applicable.
+
+ A user must have an account with a valid email address in the authorization system to select **Requestor for User**. If a user doesn't exist in the authorization system, **Requestor for User** is grayed out.
+ 3. Optional: To request access for multiple other identities, under **Requestor for Other Users**, select **Add**, and then select **Users**.
+
+ For example, a user may have various roles in different authorization systems, so they can select **Add**, and then select **Users** to request access for all their accounts.
+
+ 4. On the **Add Users** screen, enter the user's name or ID in the **User Search** box and select all applicable users. Then select **Add**.
+
+ - **Admin for Selected Authorization System Types**: **View**, **Control**, and **Approve** permissions for selected Authorization System Types.
+ 1. Check **Viewer**, **Controller**, or **Approver** for the appropriate authorization system(s).
+ 2. Select **Next**.
+ 3. Check **Requestor for User** for each authorization system, if applicable.
+
+ A user must have an account with a valid email address in the authorization system to select **Requestor for User**. If a user doesn't exist in the authorization system, **Requestor for User** is grayed out.
+ 4. Optional: To request access for multiple other identities, under **Requestor for Other Users**, select **Add**, and then select **Users**.
+
+ For example, a user may have various roles in different authorization systems, so they can select **Add**, and then select **Users** to request access for all their accounts.
+
+ 5. On the **Add Users** screen, enter the user's name or ID in the **User Search** box and select all applicable users. Then select **Add**.
+
+ - **Custom**: **View**, **Control**, and **Approve** permissions for specific accounts in Auth System Types.
+ 1. Select **Next**.
+
+ The default view displays the **List** section.
+
+ 2. Check the appropriate boxes for **Viewer**, **Controller**, or **Approver**.
+
+ For access to all authorization system types, select **All (Current and Future)**.
+
+ 3. Select **Next**.
+
+ 4. Check **Requestor for User** for each authorization system, if applicable.
+
+ A user must have an account with a valid email address in the authorization system to select **Requestor for User**. If a user doesn't exist in the authorization system, **Requestor for User** is grayed out.
+
+ 5. Optional: To request access for multiple other identities, under **Requestor for Other Users**, select **Add**, and then select **Users**.
+
+ For example, a user may have various roles in different authorization systems, so they can select **Add**, and then select **Users** to request access for all their accounts.
+
+ 6. On the **Add Users** screen, enter the user's name or ID in the **User Search** box and select all applicable users. Then select **Add**.
+
+ 4. Select **Save**.
+
+ The following message displays in green at the top of the screen: **New Group Has Been Created Successfully**.
+
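+The union rule mentioned in the steps above can be illustrated with a short PowerShell sketch. This is a minimal illustration of the merge logic only, using hypothetical group names and permission sets; it isn't Permissions Management code.
+
+```powershell
+# Hypothetical permission sets assigned to two groups the user belongs to.
+$groupPermissions = @{
+    'CloudOps' = @('Viewer', 'Controller')
+    'Auditors' = @('Viewer', 'Approver')
+}
+
+# The user's effective permissions are the union of all group assignments.
+$effective = $groupPermissions.Values |
+    ForEach-Object { $_ } |
+    Sort-Object -Unique
+
+$effective   # Approver, Controller, Viewer
+```
+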
+### The Groups tab
+
+1. The **Groups** tab provides a high-level overview of user details to system administrators:
+
+ - The **Name** column lists the name of the group.
+ - The **Permissions** column lists each authorization system, and each type of permission.
+
+ If a group has all permissions for all authorization systems, **Admin for All Authorization Types** displays across all columns.
+
+ If a group only has some permissions, the corresponding columns display numbers for the groups.
+
+ For example, if the number "3" is listed in the **Viewer** column, then the group has viewer permission for three accounts within that authorization system.
+ - The **Modified By** column records the email address of the person who created the group.
+ - The **Modified On** column records the date the group was last modified.
+ - The **Search** button allows a system administrator to search for a group by name; all groups that match the criteria display.
+ - The **Filters** option allows a system administrator to filter by specific details. When the filter option is selected, the **Authorization System** box displays.
+
+ To display all authorization system accounts, select **All**. Then select the appropriate boxes for the accounts that need to be viewed.
+
+2. To make changes to the following, select the ellipses **(...)** in the far right column:
+ - **View Permissions**: Displays a list of the accounts for which the group has permissions.
+ - **Edit Permissions**: System administrators can edit a group's permissions.
+ - **Duplicate**: System administrators can duplicate permissions from one group to another.
+ - **Delete**: System administrators can delete permissions from a group.
+
+## Next steps
+
+- For information about how to view user management information, see [Manage users with the User management dashboard](ui-user-management.md).
+- For information about how to create group-based permissions, see [Create group-based permissions](how-to-create-group-based-permissions.md).
active-directory Product Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-integrations.md
+
+ Title: View integration information about an authorization system in CloudKnox Permissions Management
+description: View integration information about an authorization system in CloudKnox Permissions Management.
+ Last updated: 02/23/2022
+# View integration information about an authorization system
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Integrations** dashboard in CloudKnox Permissions Management (CloudKnox) allows you to view all your authorization systems in one place, and to ensure all applications are functioning as one. This information helps improve quality and performance as a whole.
+
+## Display integration information about an authorization system
+
+Refer to the **Integration** subpages in CloudKnox for information about available authorization systems for integration.
+
+1. To display the **Integrations** dashboard, select **User** (your initials) in the upper right of the screen, and then select **Integrations**.
+
+ The **Integrations** dashboard displays a tile for each available authorization system.
+
+1. Select an authorization system tile to view its integration information.
+
+## Available integrated authorization systems
+
+The following authorization systems may be listed in the **Integrations** dashboard, depending on which systems are integrated into the CloudKnox application.
+
+- **ServiceNow**: Manages digital workflows for enterprise operations, and the CloudKnox integration allows you to request and approve permissions through the ServiceNow ticketing workflow.
+- **Splunk**: Searches, monitors, and analyzes machine-generated data, and the CloudKnox integration enables exporting usage analytics data, alerts, and logs.
+- **HashiCorp Terraform**: CloudKnox enables the generation of least-privilege policies through the Hashi Terraform provider.
+- **CloudKnox API**: The CloudKnox application programming interface (API) provides programmatic access to CloudKnox features (see the sketch after this list).
+- **Saviynt**: Enables you to view Identity entitlements and usage inside the Saviynt console.
+- **Securonix**: Enables exporting usage analytics data, alerts, and logs.
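+
+As a rough sketch of how a REST integration such as the CloudKnox API might be consumed, the following PowerShell example sends an authenticated GET request. The base URL, authorization header format, and endpoint path are placeholders, not the documented CloudKnox API surface; refer to the API tile in the **Integrations** dashboard for the actual details.
+
+```powershell
+# Hypothetical example: the URL, header, and endpoint below are placeholders.
+$baseUrl = 'https://cloudknox.example.com/api/v1'   # placeholder base URL
+$apiKey  = $env:CLOUDKNOX_API_KEY                   # key stored in an environment variable
+
+$headers = @{ Authorization = "Bearer $apiKey" }
+
+# Retrieve a hypothetical list of authorization systems.
+$response = Invoke-RestMethod -Uri "$baseUrl/authorization-systems" -Headers $headers -Method Get
+$response | Format-Table
+```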
+
active-directory Product Permission Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-permission-analytics.md
+
+ Title: Create and view permission analytics triggers in Permissions Management
+description: How to create and view permission analytics triggers in the Permission analytics tab in Permissions Management.
+ Last updated: 02/23/2022
+# Create and view permission analytics triggers
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how you can create and view permission analytics triggers in Permissions Management.
+
+## View permission analytics triggers
+
+1. In the Permissions Management home page, select **Activity triggers** (the bell icon).
+1. Select **Permission Analytics**, and then select the **Alerts** subtab.
+
+ The **Alerts** subtab displays the following information:
+
+ - **Alert Name**: Lists the name of the alert.
+ - To view the name, ID, role, domain, authorization system, statistical condition, anomaly date, and observance period, select **Alert name**.
+ - To expand the top information, along with a graph of when the anomaly occurred, select **Details**.
+ - **Anomaly Alert Rule**: Displays the name of the rule selected when creating the alert.
+ - **# of Occurrences**: Displays how many times the alert trigger has occurred.
+ - **Task**: Displays how many tasks are affected by the alert.
+ - **Resources**: Displays how many resources are affected by the alert.
+ - **Identity**: Displays how many identities are affected by the alert.
+ - **Authorization System**: Displays which authorization systems the alert applies to.
+ - **Date/Time**: Displays the date and time of the alert.
+ - **Date/Time (UTC)**: Lists the date and time of the alert in Coordinated Universal Time (UTC).
+
+1. To filter the alerts, from the **Alert Name** menu, select the appropriate alert name or select **All**.
+
+ - From the **Date** dropdown menu, select **Last 24 Hours**, **Last 2 Days**, **Last Week**, or **Custom Range**, and then select **Apply**.
+
+ If you select **Custom Range**, select the date and time settings, and then select **Apply**.
+
+1. To view the following details, select the ellipses (**...**):
+
+ - **Details**: Displays **Authorization System Type**, **Authorization Systems**, **Resources**, **Tasks**, and **Identities** that matched the alert criteria.
+ - **View Trigger**: Displays the current trigger settings and applicable authorization system details.
+1. To view specific matches, select **Resources**, **Tasks**, or **Identities**.
+
+ The **Activity** section displays details about the **Identity Name**, **Resource Name**, **Task Name**, **Date**, and **IP Address**.
+
+## Create a permission analytics trigger
+
+1. In the Permissions Management home page, select **Activity triggers** (the bell icon).
+1. Select **Permission Analytics**, select the **Alerts** subtab, and then select **Create Permission Analytics Trigger**.
+1. In the **Alert Name** box, enter a name for the alert.
+1. Select the **Authorization System**.
+1. Select **Identity performed high number of tasks**, and then select **Next**.
+1. On the **Authorization Systems** tab, select the appropriate accounts and folders, or select **All**.
+
+ This screen defaults to the **List** view, but you can change it to the **Folder** view and select the applicable folder instead of selecting each authorization system individually.
+
+ - The **Status** column displays if the authorization system is online or offline.
+ - The **Controller** column displays if the controller is enabled or disabled.
+
+1. On the **Configuration** tab, to update the **Time Interval**, select **90 Days**, **60 Days**, or **30 Days** from the **Time range** dropdown.
+1. Select **Save**.
+
+## View permission analytics alert triggers
+
+1. In the Permissions Management home page, select **Activity triggers** (the bell icon).
+1. Select **Permission Analytics**, and then select the **Alert Triggers** subtab.
+
+ The **Alert triggers** subtab displays the following information:
+
+ - **Alert**: Lists the name of the alert.
+ - **Anomaly Alert Rule**: Displays the name of the rule selected when creating the alert.
+ - **# of users subscribed**: Displays the number of users subscribed to the alert.
+ - **Created By**: Displays the email address of the user who created the alert.
+ - **Last Modified By**: Displays the email address of the user who last modified the alert.
+ - **Last Modified On**: Displays the date and time the trigger was last modified.
+ - **Subscription**: Toggle the button to **On** or **Off**.
+ - **View Trigger**: Displays the current trigger settings and applicable authorization system details.
+
+1. To view other options available to you, select the ellipses (**...**), and then make a selection from the available options:
+
+ - **Details** displays **Authorization System Type**, **Authorization Systems**, **Resources**, **Tasks**, and **Identities** that matched the alert criteria.
+ - To view the specific matches, select **Resources**, **Tasks**, or **Identities**.
+ - The **Activity** section displays details about the **Identity Name**, **Resource Name**, **Task Name**, **Date**, and **IP Address**.
+
+1. To filter by **Activated** or **Deactivated**, in the **Status** section, select **All**, **Activated**, or **Deactivated**, and then select **Apply**.
+
+## Next steps
+
+- For an overview on activity triggers, see [View information about activity triggers](ui-triggers.md).
+- For information on activity alerts and alert triggers, see [Create and view activity alerts and alert triggers](how-to-create-alert-trigger.md).
+- For information on rule-based anomalies and anomaly triggers, see [Create and view rule-based anomalies and anomaly triggers](product-rule-based-anomalies.md).
+- For information on finding outliers in an identity's behavior, see [Create and view statistical anomalies and anomaly triggers](product-statistical-anomalies.md).
active-directory Product Permissions Analytics Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-permissions-analytics-reports.md
+
+ Title: Generate and download the Permissions analytics report in CloudKnox Permissions Management
+description: How to generate and download the Permissions analytics report in CloudKnox Permissions Management.
+ Last updated: 02/23/2022
+# Generate and download the Permissions analytics report
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to generate and download the **Permissions analytics report** in CloudKnox Permissions Management (CloudKnox).
+
+> [!NOTE]
+> This topic applies only to Amazon Web Services (AWS) users.
+
+## Generate the Permissions analytics report
+
+1. In the CloudKnox home page, select the **Reports** tab, and then select the **Systems Reports** subtab.
+
+ The **Systems Reports** subtab displays a list of reports in the **Reports** table.
+1. Find **Permissions Analytics Report** in the list, and to download the report, select the down arrow to the right of the report name, or from the ellipses **(...)** menu, select **Download**.
+
+ The following message displays: **Successfully Started To Generate On Demand Report.**
+
+1. For detailed information in the report, select the right arrow next to one of the following categories. Or, select the required category under the **Findings** column.
+
+ - **AWS**
+ - Inactive Identities
+ - Users
+ - Roles
+ - Resources
+ - Serverless Functions
+ - Inactive Groups
+ - Super Identities
+ - Users
+ - Roles
+ - Resources
+ - Serverless Functions
+ - Over-Provisioned Active Identities
+ - Users
+ - Roles
+ - Resources
+ - Serverless Functions
+ - PCI Distribution
+ - Privilege Escalation
+ - Users
+ - Roles
+ - Resources
+ - S3 Bucket Encryption
+ - Unencrypted Buckets
+ - SSE-S3 Buckets
+ - S3 Buckets Accessible Externally
+ - EC2 S3 Buckets Accessibility
+ - Open Security Groups
+ - Identities That Can Administer Security Tools
+ - Users
+ - Roles
+ - Resources
+ - Serverless Functions
+ - Identities That Can Access Secret Information
+ - Users
+ - Roles
+ - Resources
+ - Serverless Functions
+ - Cross-Account Access
+ - External Accounts
+ - Roles That Allow All Identities
+ - Hygiene: MFA Enforcement
+ - Hygiene: IAM Access Key Age
+ - Hygiene: Unused IAM Access Keys
+ - Exclude From Reports
+ - Users
+ - Roles
+ - Resources
+ - Serverless Functions
+ - Groups
+ - Security Groups
+ - S3 Buckets
+
+1. Select a category and view the following columns of information (see the post-processing sketch after this list):
+
+ - **User**, **Role**, **Resource**, **Serverless Function Name**: Displays the name of the identity.
+ - **Authorization System**: Displays the authorization system to which the identity belongs.
+ - **Domain**: Displays the domain name to which the identity belongs.
+ - **Permissions**: Displays the maximum number of permissions that the identity can be granted.
+ - **Used**: Displays how many permissions the identity has used.
+ - **Granted**: Displays how many permissions the identity has been granted.
+ - **PCI**: Displays the permission creep index (PCI) score of the identity.
+ - **Date Last Active On**: Displays the date that the identity was last active.
+ - **Date Created On**: Displays the date when the identity was created.
+
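+Because the report downloads as a CSV file, the columns above can be post-processed with ordinary tooling. The following PowerShell sketch assumes a downloaded file name and the column headers listed above; the exact header spellings in a generated file may differ. It flags identities that use few of their granted permissions.
+
+```powershell
+# Assumed file name; adjust the path to your downloaded report.
+$rows = Import-Csv -Path '.\PermissionsAnalyticsReport.csv'
+
+# Flag identities that use less than 10% of their granted permissions.
+$rows |
+    Where-Object { [int]$_.Granted -gt 0 -and ([int]$_.Used / [int]$_.Granted) -lt 0.1 } |
+    Select-Object User, 'Authorization System', Used, Granted, PCI |
+    Sort-Object { [int]$_.Granted } -Descending |
+    Format-Table -AutoSize
+```
+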
+
+## Next steps
+
+- For information on how to view system reports in the **Reports** dashboard, see [View system reports in the Reports dashboard](product-reports.md).
+- For a detailed overview of available system reports, see [View a list and description of system reports](all-reports.md).
+- For information about how to generate and view a system report, see [Generate and view a system report](report-view-system-report.md).
+- For information about how to create, view, and share a system report, see [Create, view, and share a custom report](report-view-system-report.md).
active-directory Product Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-reports.md
+
+ Title: View system reports in the Reports dashboard in CloudKnox Permissions Management
+description: How to view system reports in the Reports dashboard in CloudKnox Permissions Management.
+ Last updated: 02/23/2022
+# View system reports in the Reports dashboard
+
+> [!IMPORTANT]
+> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+CloudKnox Permissions Management (CloudKnox) has various system report types available that capture specific sets of data. These reports allow management to:
+
+- Make timely decisions.
+- Analyze trends and system/user performance.
+- Identify trends in data and high-risk areas so that management can address issues more quickly and improve their efficiency.
+
+## Explore the Reports dashboard
+
+The **Reports** dashboard provides a table of information with both system reports and custom reports. The **Reports** dashboard defaults to the **System Reports** tab, which has the following details:
+
+- **Report Name**: The name of the report.
+- **Category**: The type of report. For example, **Permission**.
+- **Authorization Systems**: Displays which authorization systems the report applies to.
+- **Format**: Displays the output format the report can be generated in. For example, comma-separated values (CSV) format, portable document format (PDF), or Microsoft Excel Open XML Spreadsheet (XLSX) format.
+
+ - To download a report, select the down arrow to the right of the report name, or from the ellipses **(...)** menu, select **Download**.
+
+ The following message displays across the top of the screen in green if the download is successful: **Successfully Started To Generate On Demand Report**.
+
+## Available system reports
+
+CloudKnox offers the following reports for management, along with the authorization systems they apply to:
+
+- **Access Key Entitlements And Usage**:
+ - **Summary of report**: Provides information about access keys, for example, permissions, usage, and rotation dates.
+ - **Applies to**: Amazon Web Services (AWS) and Microsoft Azure
+ - **Report output type**: CSV
+ - **Ability to collate report**: Yes
+ - **Type of report**: **Summary** or **Detailed**
+ - **Use cases**:
+ - The access key age, last rotation date, and last usage date are available in the summary report to help with key rotation (see the sketch after this list).
+ - The granted tasks and permission creep index (PCI) score help you decide whether to take action on the keys.
+
+- **User Entitlements And Usage**:
+ - **Summary of report**: Provides information about the identities' permissions, for example, entitlement, usage, and PCI.
+ - **Applies to**: AWS, Azure, and Google Cloud Platform (GCP)
+ - **Report output type**: CSV
+ - **Ability to collate report**: Yes
+ - **Type of report**: **Summary** or **Detailed**
+ - **Use cases**:
+ - The data displayed on the **Usage Analytics** screen is downloaded as part of the **Summary** report. The user's detailed permissions usage is listed in the **Detailed** report.
+
+- **Group Entitlements And Usage**:
+ - **Summary of report**: Provides information about the group's permissions, for example, entitlement, usage, and PCI.
+ - **Applies to**: AWS, Azure, and GCP
+ - **Report output type**: CSV
+ - **Ability to collate report**: Yes
+ - **Type of report**: **Summary**
+ - **Use cases**:
+ - All group level entitlements and permission assignments, PCIs, and the number of members are listed as part of this report.
+
+- **Identity Permissions**:
+ - **Summary of report**: Report on identities that have specific permissions, for example, identities that have permission to delete any S3 buckets.
+ - **Applies to**: AWS, Azure, and GCP
+ - **Report output type**: CSV
+ - **Ability to collate report**: No
+ - **Type of report**: **Summary**
+ - **Use cases**:
+ - Any task usage or specific task usage via User/Group/Role/App can be tracked with this report.
+
+- **Identity privilege activity report**
+ - **Summary of report**: Provides information about permission changes that have occurred in the selected duration.
+ - **Applies to**: AWS, Azure, and GCP
+ - **Report output type**: PDF
+ - **Ability to collate report**: No
+ - **Type of report**: **Summary**
+ - **Use cases**:
+ - Any identity permission change can be captured using this report.
+ - The **Identity Privilege Activity** report has the following main sections: **User Summary**, **Group Summary**, **Role Summary**, and **Delete Task Summary**.
+ - The **User** summary lists the current granted permissions and high-risk permissions and resources accessed in 1 day, 7 days, or 30 days. There are subsections for newly added or deleted users, users with PCI change, and High-risk active/inactive users.
+ - The **Group** summary lists the administrator level groups with the current granted permissions and high-risk permissions and resources accessed in 1 day, 7 days, or 30 days. There are subsections for newly added or deleted groups, groups with PCI change, and High-risk active/inactive groups.
+ - The **Role summary** lists details similar to the **Group summary**.
+ - The **Delete Task summary** section lists the number of times the **Delete task** has been executed in the given time period.
+
+- **Permissions Analytics Report**
+ - **Summary of report**: Provides information about the violation of key security best practices.
+ - **Applies to**: AWS, Azure, and GCP
+ - **Report output type**: CSV
+ - **Ability to collate report**: Yes
+ - **Type of report**: **Detailed**
+ - **Use cases**:
+ - This report lists the different key findings in the selected authorization systems. The key findings include super identities, inactive identities, over-provisioned active identities, storage bucket hygiene, and access key age (for AWS only). The report helps administrators visualize the findings across the organization.
+
+ For more information about this report, see [Permissions analytics report](product-permissions-analytics-reports.md).
+
+- **Role/Policy Details**
+ - **Summary of report**: Provides information about roles and policies.
+ - **Applies to**: AWS, Azure, GCP
+ - **Report output type**: CSV
+ - **Ability to collate report**: No
+ - **Type of report**: **Summary**
+ - **Use cases**:
+ - Assigned/Unassigned, custom/system policy, and the used/unused condition is captured in this report for any specific, or all, AWS accounts. Similar data can be captured for Azure/GCP for the assigned/unassigned roles.
+
+- **PCI History**
+ - **Summary of report**: Provides a report of permission creep index (PCI) history.
+ - **Applies to**: AWS, Azure, GCP
+ - **Report output type**: CSV
+ - **Ability to collate report**: Yes
+ - **Type of report**: **Summary**
+ - **Use cases**:
+ - This report plots the trend of the PCI by displaying the monthly PCI history for each authorization system.
+
+- **All Permissions for Identity**
+ - **Summary of report**: Provides results of all permissions for identities.
+ - **Applies to**: AWS, Azure, GCP
+ - **Report output type**: CSV
+ - **Ability to collate report**: Yes
+ - **Type of report**: **Detailed**
+ - **Use cases**:
+ - This report lists all the assigned permissions for the selected identities.
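+
+As noted for the **Access Key Entitlements And Usage** report, the downloaded summary CSV can be filtered for rotation candidates. The sketch below is an assumption-laden example: the file name and the `LastRotatedOn` column header are hypothetical placeholders for whatever the generated file actually contains.
+
+```powershell
+# Hypothetical file and column names for the downloaded summary report.
+$keys = Import-Csv -Path '.\AccessKeyEntitlementsAndUsage.csv'
+
+# List keys that haven't been rotated in the last 90 days.
+$cutoff = (Get-Date).AddDays(-90)
+$keys |
+    Where-Object { [datetime]$_.LastRotatedOn -lt $cutoff } |
+    Sort-Object { [datetime]$_.LastRotatedOn } |
+    Format-Table -AutoSize
+```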
+
+## Next steps
+
+- For a detailed overview of available system reports, see [View a list and description of system reports](all-reports.md).
+- For information about how to create, view, and share a system report, see [Create, view, and share a custom report](report-view-system-report.md).
+- For information about how to create and view a custom report, see [Generate and view a custom report](report-create-custom-report.md).
+- For information about how to create and view the Permissions analytics report, see [Generate and download the Permissions analytics report](product-permissions-analytics-reports.md).
active-directory Product Rule Based Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-rule-based-anomalies.md
+
+ Title: Create and view rule-based anomalies and anomaly triggers in Permissions Management
+description: How to create and view rule-based anomalies and anomaly triggers in Permissions Management.
+ Last updated: 02/23/2022
+# Create and view rule-based anomaly alerts and anomaly triggers
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+Rule-based anomalies identify recent activity in Permissions Management that is determined to be unusual based on explicit rules defined in the activity trigger. The goal of rule-based anomalies is high-precision detection.
+
+## View rule-based anomaly alerts
+
+1. In the Permissions Management home page, select **Activity triggers** (the bell icon).
+1. Select **Rule-Based Anomaly**, and then select the **Alerts** subtab.
+
+ The **Alerts** subtab displays the following information:
+
+ - **Alert Name**: Lists the name of the alert.
+
+ - To view the specific identity, resource, and task names that occurred during the alert collection period, select the **Alert Name**.
+
+ - **Anomaly Alert Rule**: Displays the name of the rule selected when creating the alert.
+ - **# of Occurrences**: How many times the alert trigger has occurred.
+ - **Task**: Displays how many performed tasks triggered the alert.
+ - **Resources**: Displays how many accessed resources triggered the alert.
+ - **Identity**: Displays how many identities performing unusual behavior triggered the alert.
+ - **Authorization System**: Displays which authorization systems the alert applies to, Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
+ - **Date/Time**: Lists the date and time of the alert.
+ - **Date/Time (UTC)**: Lists the date and time of the alert in Coordinated Universal Time (UTC).
+
+1. To filter alerts:
+
+ - From the **Alert Name** dropdown, select **All** or the appropriate alert name.
+ - From the **Date** dropdown menu, select **Last 24 Hours**, **Last 2 Days**, **Last Week**, or **Custom Range**, and select **Apply**.
+
+ - If you select **Custom Range**, also enter **From** and **To** duration settings.
+1. To view details that match the alert criteria, select the ellipses (**...**).
+
+ - **View Trigger**: Displays the current trigger settings and applicable authorization system details.
+ - **Details**: Displays details about **Authorization System Type**, **Authorization Systems**, **Resources**, **Tasks**, **Identities**, and **Activity**.
+ - **Activity**: Displays details about the **Identity Name**, **Resource Name**, **Task Name**, **Date/Time**, **Inactive For**, and **IP Address**. Selecting the "eye" icon displays the **Raw Events Summary**.
+
+## Create a rule-based anomaly trigger
+
+1. In the Permissions Management home page, select **Activity triggers** (the bell icon).
+1. Select **Rule-Based Anomaly**, and then select the **Alerts** subtab.
+1. Select **Create Anomaly Trigger**.
+
+1. In the **Alert Name** box, enter a name for the alert.
+1. Select the **Authorization System**, **AWS**, **Azure**, or **GCP**.
+1. Select one of the following conditions. (A sketch of the underlying "first time" comparison appears after this procedure.)
+ - **Any Resource Accessed for the First Time**: The identity accesses a resource for the first time during the specified time interval.
+ - **Identity Performs a Particular Task for the First Time**: The identity does a specific task for the first time during the specified time interval.
+ - **Identity Performs a Task for the First Time**: The identity performs any task for the first time during the specified time interval.
+1. Select **Next**.
+1. On the **Authorization Systems** tab, select the available authorization systems and folders, or select **All**.
+
+ This screen defaults to **List** view, but you can change it to **Folders** view. You can select the applicable folder instead of individually selecting by authorization system.
+
+ - The **Status** column displays if the authorization system is online or offline.
+ - The **Controller** column displays if the controller is enabled or disabled.
+
+1. On the **Configuration** tab, to update the **Time Interval**, select **90 Days**, **60 Days**, or **30 Days** from the **Time range** dropdown.
+1. Select **Save**.
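+
+The "first time" conditions in step 6 amount to a set comparison between a baseline and recent activity. The following PowerShell sketch shows that comparison with hypothetical resource names; it illustrates the idea, not the trigger's actual implementation.
+
+```powershell
+# Baseline: resources the identity accessed during the observance period (hypothetical).
+$baseline = @('arn:aws:s3:::billing-logs', 'arn:aws:s3:::app-assets')
+
+# Resources accessed in the most recent activity window (hypothetical).
+$recent = @('arn:aws:s3:::app-assets', 'arn:aws:s3:::hr-records')
+
+# Anything in the recent set but not in the baseline is a first-time access.
+$firstTimeAccess = $recent | Where-Object { $baseline -notcontains $_ }
+$firstTimeAccess   # arn:aws:s3:::hr-records
+```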
+
+## View a rule-based anomaly trigger
+
+1. In the Permissions Management home page, select **Activity triggers** (the bell icon).
+1. Select **Rule-Based Anomaly**, and then select the **Alert Triggers** subtab.
+
+ The **Alert Triggers** subtab displays the following information:
+
+ - **Alerts**: Displays the name of the alert.
+ - **Anomaly Alert Rule**: Displays the name of the rule selected when creating the alert.
+ - **# of Users Subscribed**: Displays the number of users subscribed to the alert.
+ - **Created By**: Displays the email address of the user who created the alert.
+ - **Last Modified By**: Displays the email address of the user who last modified the alert.
+ - **Last Modified On**: Displays the date and time the trigger was last modified.
+ - **Subscription**: Subscribes you to receive alert emails. Switches between **On** and **Off**.
+
+1. To view other options available to you, select the ellipses (**...**), and then select from the available options:
+
+ If the **Subscription** is **On**, the following options are available:
+
+ - **Edit**: Enables you to modify alert parameters.
+
+ Only the user who created the alert can edit the trigger screen, rename an alert, deactivate an alert, and delete an alert. Changes made by other users aren't saved.
+
+ - **Duplicate**: Create a duplicate copy of the selected alert trigger.
+ - **Rename**: Enter the new name of the alert, and then select **Save**.
+ - **Deactivate**: The alert will still be listed, but will no longer send emails to subscribed users.
+ - **Activate**: Activate the alert trigger and start sending emails to subscribed users.
+ - **Notification Settings**: View the **Email** of users who are subscribed to the alert trigger.
+ - **Delete**: Delete the alert.
+
+ If the **Subscription** is **Off**, the following options are available:
+ - **View**: View details of the alert trigger.
+ - **Notification Settings**: View the **Email** of users who are subscribed to the alert trigger.
+ - **Duplicate**: Create a duplicate copy of the selected alert trigger.
+
+1. To filter by **Activated** or **Deactivated**, in the **Status** section, select **All**, **Activated**, or **Deactivated**, and then select **Apply**.
+
+## Next steps
+
+- For an overview on activity triggers, see [View information about activity triggers](ui-triggers.md).
+- For information on activity alerts and alert triggers, see [Create and view activity alerts and alert triggers](how-to-create-alert-trigger.md).
+- For information on finding outliers in an identity's behavior, see [Create and view statistical anomalies and anomaly triggers](product-statistical-anomalies.md).
+- For information on permission analytics triggers, see [Create and view permission analytics triggers](product-permission-analytics.md).
active-directory Product Statistical Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-statistical-anomalies.md
+
+ Title: Create and view statistical anomalies and anomaly triggers in Permissions Management
+description: How to create and view statistical anomalies and anomaly triggers in the Statistical Anomaly tab in Permissions Management.
+ Last updated: 02/23/2022
+# Create and view statistical anomalies and anomaly triggers
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+Statistical anomalies can detect outliers in an identity's behavior if recent activity is determined to be unusual based on models defined in an activity trigger. The goal of this anomaly trigger is a high recall rate.
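+
+As a toy illustration of this kind of outlier detection, the following PowerShell sketch compares today's task count against a baseline using a simple three-standard-deviation threshold. The numbers and the threshold are illustrative assumptions, not the product's actual statistical model.
+
+```powershell
+# Daily task counts for an identity; the last value is today's count (illustrative data).
+$dailyTaskCounts = 22, 25, 24, 27, 23, 26, 100
+
+$baseline = $dailyTaskCounts[0..($dailyTaskCounts.Count - 2)]
+$mean     = ($baseline | Measure-Object -Average).Average
+$variance = ($baseline | ForEach-Object { [math]::Pow($_ - $mean, 2) } | Measure-Object -Average).Average
+$std      = [math]::Sqrt($variance)
+
+$today = $dailyTaskCounts[-1]
+if ([math]::Abs($today - $mean) -gt 3 * $std) {
+    "Outlier: $today tasks today vs. a baseline mean of $([math]::Round($mean, 1))"
+}
+```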
+
+## View statistical anomalies in an identity's behavior
+
+1. In the Permissions Management home page, select **Activity triggers** (the bell icon).
+1. Select **Statistical Anomaly**, and then select the **Alerts** subtab.
+
+ The **Alerts** subtab displays the following information:
+
+ - **Alert Name**: Lists the name of the alert.
+ - **Anomaly Alert Rule**: Displays the name of the rule selected when creating the alert.
+ - **# of Occurrences**: Displays how many times the alert trigger has occurred.
+ - **Authorization System**: Displays which authorization systems the alert applies to.
+ - **Date/Time**: Lists the day of the outlier occurring.
+ - **Date/Time (UTC)**: Lists the day of the outlier occurring in Coordinated Universal Time (UTC).
+
+1. To filter the alerts based on name, select the appropriate alert name or choose **All** from the **Alert Name** dropdown menu, and select **Apply**.
+1. To filter the alerts based on alert time, select **Last 24 Hours**, **Last 2 Days**, **Last Week**, or **Custom Range** from the **Date** dropdown menu, and select **Apply**.
+1. To view more details, select the ellipses (**...**), and then select:
+ - **Details**: Brings you to an **Alert Summary** view that displays the **Authorization System**, **Statistical Model**, and **Observance Period**, along with a table containing a row for each identity that triggered this alert. From the summary view, you can select:
+   - **Details**: Displays graphs highlighting the anomaly in context, and up to the top three actions performed on the day of the anomaly.
+   - **View Trigger**: Displays the current trigger settings and applicable authorization system details.
+ - **View Trigger**: Displays the current trigger settings and applicable authorization system details.
+
+## Create a statistical anomaly trigger
+
+1. In the Permissions Management home page, select **Activity triggers** (the bell icon).
+1. Select **Statistical Anomaly**, select the **Alerts** subtab, and then select **Create Alert Trigger**.
+1. Enter a name for the alert in the **Alert Name** box.
+1. Select the **Authorization System**, Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
+1. Select one of the following conditions:
+
+ - **Identity Performed High Number of Tasks**: The identity performs higher than their usual volume of tasks. For example, an identity typically performs 25 tasks per day, and now it is performing 100 tasks per day.
+ - **Identity Performed Low Number of Tasks**: The identity performs lower than their usual volume of tasks. For example, an identity typically performs 100 tasks per day, and now it is performing 25 tasks per day.
+ - **Identity Performed Tasks with Unusual Results**: The identity performing an action gets a different result than usual, such as most tasks end in a successful result and are now ending in a failed result or vice versa.
+ - **Identity Performed Tasks with Unusual Timing**: The identity does tasks at unusual times as established by their baseline in the observance period. Times are grouped into the following 4-hour UTC windows (see the sketch after this procedure).
+ - 12AM-4AM UTC
+ - 4AM-8AM UTC
+ - 8AM-12PM UTC
+ - 12PM-4PM UTC
+ - 4PM-8PM UTC
+ - 8PM-12AM UTC
+ - **Identity Performed Tasks with Unusual Types**: The identity performs unusual types of tasks as established by their baseline in the observance period. For example, an identity performs read, write, or delete tasks they wouldn't ordinarily perform.
+ - **Identity Performed Tasks with Multiple Unusual Patterns**: The identity has several unusual patterns in the tasks performed by the identity as established by their baseline in the observance period.
+1. Select **Next**.
+
+1. On the **Authorization Systems** tab, select the appropriate systems, or, to select all systems, select **All**.
+
+ The screen defaults to the **List** view, but you can switch to the **Folder** view using the menu and select the applicable folder instead of selecting each system individually.
+
+ - The **Status** column displays if the authorization system is online or offline.
+
+ - The **Controller** column displays if the controller is enabled or disabled.
+
+1. On the **Configuration** tab, to update the **Time Interval**, from the **Time Range** dropdown, select **90 Days**, **60 Days**, or **30 Days**, and then select **Save**.
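+
+The 4-hour UTC grouping used by the **Unusual Timing** condition in step 5 can be reproduced with a few lines of PowerShell. This is a sketch of the bucketing arithmetic under the window boundaries listed above, not the product's code.
+
+```powershell
+# Map a timestamp to one of the six 4-hour UTC windows described above.
+function Get-UtcWindow {
+    param([datetime]$Timestamp)
+    $utc   = $Timestamp.ToUniversalTime()
+    $start = [math]::Floor($utc.Hour / 4) * 4   # 0, 4, 8, 12, 16, or 20
+    '{0:00}:00-{1:00}:00 UTC' -f $start, ($start + 4)
+}
+
+Get-UtcWindow -Timestamp (Get-Date '2022-02-23T09:30:00Z')   # 08:00-12:00 UTC
+```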
+
+## View statistical anomaly triggers
+
+1. In the Permissions Management home page, select **Activity triggers** (the bell icon).
+1. Select **Statistical Anomaly**, and then select the **Alert Triggers** subtab.
+
+ The **Alert Triggers** subtab displays the following information:
+
+ - **Alert**: Displays the name of the alert.
+ - **Anomaly Alert Rule**: Displays the name of the rule selected when creating the alert.
+ - **# of users subscribed**: Displays the number of users subscribed to the alert.
+ - **Created By**: Displays the email address of the user who created the alert.
+ - **Last Modified By**: Displays the email address of the user who last modified the alert.
+ - **Last Modified On**: Displays the date and time the trigger was last modified.
+ - **Subscription**: Subscribes you to receive alert emails. Toggle the button to **On** or **Off**.
+
+1. To filter by **Activated** or **Deactivated**, in the **Status** section, select **All**, **Activated**, or **Deactivated**, and then select **Apply**.
+
+1. To view other options available to you, select the ellipses (**...**), and then select from the available options:
+
+ If the **Subscription** is **On**, the following options are available:
+ - **Edit**: Enables you to modify alert parameters.
+
+ > [!NOTE]
+ > Only the user who created the alert can perform the following actions: edit the trigger screen, rename an alert, deactivate an alert, and delete an alert. Changes made by other users aren't saved.
+ - **Duplicate**: Create a duplicate copy of the selected alert trigger.
+ - **Rename**: Enter the new name of the alert, and then select **Save**.
+ - **Deactivate**: The alert will still be listed, but will no longer send emails to subscribed users.
+ - **Activate**: Activate the alert trigger and start sending emails to subscribed users.
+ - **Notification Settings**: View the **Email** of users who are subscribed to the alert trigger.
+ - **Delete**: Delete the alert.
+
+ If the **Subscription** is **Off**, the following options are available:
+ - **View**: View details of the alert trigger.
+ - **Notification Settings**: View the **Email** of users who are subscribed to the alert trigger.
+ - **Duplicate**: Create a duplicate copy of the selected alert trigger.
+
+1. Select **Apply**.
+
+## Next steps
+
+- For an overview on activity triggers, see [View information about activity triggers](ui-triggers.md).
+- For information on activity alerts and alert triggers, see [Create and view activity alerts and alert triggers](how-to-create-alert-trigger.md).
+- For information on rule-based anomalies and anomaly triggers, see [Create and view rule-based anomalies and anomaly triggers](product-rule-based-anomalies.md).
+- For information on permission analytics triggers, see [Create and view permission analytics triggers](product-permission-analytics.md).
active-directory Report Create Custom Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/report-create-custom-report.md
+
+ Title: Create, view, and share a custom report in Permissions Management
+description: How to create, view, and share a custom report in Permissions Management.
+ Last updated: 02/23/2022
+# Create, view, and share a custom report
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to create, view, and share a custom report in Permissions Management.
+
+## Create a custom report
+
+1. In the Permissions Management home page, select the **Reports** tab, and then select the **Custom Reports** subtab.
+1. Select **New Custom Report**.
+1. In the **Report Name** box, enter a name for your report.
+1. From the **Report Based on** list:
+ 1. To view which authorization systems the report applies to, hover over each report name.
+ 1. To view a description of a report, select the report.
+1. Select a report you want to use as the base for your custom report, and then select **Next**.
+1. In the **MyReport** box, select the **Authorization System** you want: Amazon Web Services (**AWS**), Microsoft Azure (**Azure**), or Google Cloud Platform (**GCP**).
+
+1. To add specific accounts, select the **List** subtab, and then select **All** or the account names.
+1. To add specific folders, select the **Folders** subtab, and then select **All** or the folder names.
+
+1. Select the **Report Format** subtab, and then select the format for your report: comma-separated values (**CSV**) file, portable document format (**PDF**), or Microsoft Excel Open XML Spreadsheet (**XLSX**) file.
+1. Select the **Schedule** tab, and then select the frequency for your report, from **None** up to **Monthly**.
+
+ - For the **Hourly** and **Daily** options, set the start date by choosing from the **Calendar** dropdown, and enter the specific time of day you want to receive the report.
+
+ In addition to the date and time, the **Weekly** and **Biweekly** options let you select the day(s) of the week on which the report should repeat.
+
+1. Select **Save**.
+
+ If the report is created successfully, the following message displays across the top of the screen in green: **Report has been created**. The report name appears in the **Reports** table.
+
+## View a custom report
+
+1. In the Permissions Management home page, select the **Reports** tab, and then select the **Custom Reports** subtab.
+
+ The **Custom Reports** tab displays the following information in the **Reports** table:
+
+ - **Report Name**: The name of the report.
+ - **Category**: The type of report: **Permission**.
+ - **Authorization System**: The authorization system in which you can view the report: AWS, Azure, and GCP.
+ - **Format**: The format of the report: **CSV**, **PDF**, or **XLSX**.
+
+1. To view a report, from the **Report Name** column, select the report you want.
+1. To download a report, select the down arrow to the right of the report name, or from the ellipses **(...)** menu, select **Download**.
+1. To refresh the list of reports, select **Reload**.
+
+## Share a custom report
+
+1. In the Permissions Management home page, select the **Reports** tab, and then select the **Custom Reports** subtab.
+1. In the **Reports** table, select a report and then select the ellipses (**...**) icon.
+1. In the **Report Settings** box, select **Share with**.
+1. In the **Search Email to add** box, enter the names of other Permissions Management users.
+
+ You can only share reports with other Permissions Management users.
+1. Select **Save**.
+
+## Search for a custom report
+
+1. In the Permissions Management home page, select the **Reports** tab, and then select the **Custom Reports** subtab.
+1. On the **Custom Reports** tab, select **Search**.
+1. In the **Search** box, enter the name of the report you want.
+
+ The **Custom Reports** tab displays a list of reports that match your search criteria.
+1. Select the report you want.
+1. To download a report, select the down arrow to the right of the report name, or from the ellipses **(...)** menu, select **Download**.
+1. To refresh the list of reports, select **Reload**.
++
+## Modify a saved or scheduled custom report
+
+1. In the Permissions Management home page, select the **Reports** tab, and then select the **Custom Reports** subtab.
+1. Hover over the report name on the **Custom Reports** tab.
+
+ - To rename the report, select **Edit** (the pencil icon), and enter a new name.
+ - To change the settings for your report, select **Settings** (the gear icon). Make your changes, and then select **Save**.
+
+ - To download a copy of the report, select the **Down arrow** icon.
+
+1. To perform other actions to the report, select the ellipses (**...**) icon:
+
+ - **Download**: Downloads a copy of the report.
+
+ - **Report Settings**: Displays the settings for the report, including scheduling, sharing the report, and so on.
+
+ - **Duplicate**: Creates a duplicate of the report called **"Copy of XXX"**. Any reports not created by the current user are listed as **Duplicate**.
+
+ When you select **Duplicate**, a box appears asking if you're sure you want to create a duplicate. Select **Confirm**.
+
+ When the report is successfully duplicated, the following message displays: **Report generated successfully**.
+
+ - **API Settings**: Download the report by using your application programming interface (API) settings (see the sketch after this list).
+
+ When you select this option, the **API Settings** window opens and displays the **Report ID** and **Secret Key**. Select **Generate New Key**.
+
+ - **Delete**: Select this option to delete the report.
+
+ After you select **Delete**, a pop-up box asks you to confirm that you want to delete the report. Select **Confirm**.
+
+ If the deletion is successful, **Report is deleted successfully** appears across the top of the screen in green.
+
+ - **Unsubscribe**: Unsubscribe the user from receiving scheduled reports and notifications.
+
+ This option is only available after a report has been scheduled.
++
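+If you use the **API Settings** option, a request along the following lines can retrieve the report programmatically. This is a hypothetical sketch only: the endpoint URL, header name, and output file name are illustrative assumptions, not a documented Permissions Management API; substitute the **Report ID** and **Secret Key** values shown in the **API Settings** window.
+
+```powershell
+# Hypothetical sketch: the endpoint and header name are assumptions for
+# illustration; only the Report ID and Secret Key come from the UI.
+$reportId  = '<your Report ID>'
+$secretKey = '<your Secret Key>'
+Invoke-RestMethod -Method Get `
+    -Uri "https://api.example.com/reports/$reportId/download" `
+    -Headers @{ 'Authorization' = "Bearer $secretKey" } `
+    -OutFile '.\custom-report.csv'
+```
+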
+## Next steps
+
+- For information on how to view system reports in the **Reports** dashboard, see [View system reports in the Reports dashboard](product-reports.md).
+- For a detailed overview of available system reports, see [View a list and description of system reports](all-reports.md).
+- For information about how to generate and view a system report, see [Generate and view a system report](report-view-system-report.md).
+- For information about how to create and view the Permissions analytics report, see [Generate and download the Permissions analytics report](product-permissions-analytics-reports.md).
active-directory Report View System Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/report-view-system-report.md
+
+ Title: Generate and view a system report in Permissions Management
+description: How to generate and view a system report in Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Generate and view a system report
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to generate and view a system report in Permissions Management.
+
+## Generate a system report
+
+1. In the Permissions Management home page, select the **Reports** tab, and then select the **Systems Reports** subtab.
+ The **Systems Reports** subtab displays the following options in the **Reports** table:
+
+ - **Report Name**: The name of the report.
+ - **Category**: The type of report: **Permission**.
+ - **Authorization System**: The authorization system activity in the report: Amazon Web Services (AWS), Microsoft Azure (Azure), and Google Cloud Platform (GCP).
+ - **Format**: The format in which the report is available: comma-separated values (**CSV**) format, portable document format (**PDF**), or Microsoft Excel Open XML Spreadsheet (**XLSX**) format.
+
+1. In the **Report Name** column, find the report you want, and then select the down arrow to the right of the report name to download the report.
+
+ Or, from the ellipses **(...)** menu, select **Download**.
+
+ The following message displays: **Successfully Started To Generate On Demand Report.**
+
+ > [!NOTE]
+ > If you select one authorization system, the report includes a summary. If you select more than one authorization system, the report does not include a summary.
+
+1. To refresh the list of reports, select **Reload**.
+
+## Search for a system report
+
+1. On the **Systems Reports** subtab, select **Search**.
+1. In the **Search** box, enter the name of the report you want.
+
+ The **Systems Reports** subtab displays a list of reports that match your search criteria.
+1. Select a report from the **Report Name** column.
+1. To download a report, select the down arrow to the right of the report name, or from the ellipses **(...)** menu, select **Download**.
+1. To refresh the list of reports, select **Reload**.
++
+## Next steps
+
+- For information on how to view system reports in the **Reports** dashboard, see [View system reports in the Reports dashboard](product-reports.md).
+- For a detailed overview of available system reports, see [View a list and description of system reports](all-reports.md).
+- For information about how to create, view, and share a custom report, see [Create, view, and share a custom report](report-create-custom-report.md).
+- For information about how to create and view the Permissions analytics report, see [Generate and download the Permissions analytics report](product-permissions-analytics-reports.md).
active-directory Training Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/training-videos.md
+
+ Title: CloudKnox Permissions Management training videos
+description: CloudKnox Permissions Management training videos.
+++++++ Last updated : 04/20/2022+++
+# CloudKnox Permissions Management training videos
+
+To view step-by-step training videos on how to use CloudKnox Permissions Management (CloudKnox) features, select a link below.
+
+## Onboard CloudKnox in your organization
++
+### Enable CloudKnox in your Azure Active Directory (Azure AD) tenant
+
+To view a video on how to enable CloudKnox in your Azure AD tenant, select [Enable CloudKnox in your Azure AD tenant](https://www.youtube.com/watch?v=-fkfeZyevoo).
+
+### Configure and onboard Amazon Web Services (AWS) accounts
+
+To view a video on how to configure and onboard Amazon Web Services (AWS) accounts in CloudKnox, select [Configure and onboard AWS accounts](https://www.youtube.com/watch?v=R6K21wiWYmE).
+
+### Configure and onboard Google Cloud Platform (GCP) accounts
+
+To view a video on how to configure and onboard Google Cloud Platform (GCP) accounts in CloudKnox, select [Configure and onboard GCP accounts](https://www.youtube.com/watch?app=desktop&v=W3epcOaec28).
++++
+## Next steps
+
+- For an overview of CloudKnox, see [What's CloudKnox Permissions Management?](overview.md)
+- For a list of frequently asked questions (FAQs) about CloudKnox, see [FAQs](faqs.md).
+- For information on how to start viewing information about your authorization system in CloudKnox, see [View key statistics and data about your authorization system](ui-dashboard.md).
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/troubleshoot.md
+
+ Title: Troubleshoot issues with Permissions Management
+description: Troubleshoot issues with Permissions Management
+++++++ Last updated : 02/23/2022+++
+# Troubleshoot issues with Permissions Management
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This section helps you troubleshoot issues with Permissions Management.
+
+## One time passcode (OTP) email
+
+### The user didn't receive the OTP email.
+
+- Check your junk or spam mail folder for the email.
+
+## Reports
+
+### Reports are generated as individual files per authorization system (subscription/account/project).
+
+- Select the **Collate** option in the **Custom Report** screen in the Permissions Management **Reports** tab.
+
+## Data collection in AWS
+
+### Data collection > AWS Authorization system data collection status is offline. Upload and transform is also offline.
+
+- Check the Permissions Management-related role that exists in these accounts.
+- Validate the trust relationship with the OpenID Connect (OIDC) role (see the sketch below).
+
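+As a quick way to perform the second check, you can inspect the role's trust policy from the command line. This is a minimal sketch: it assumes the AWS CLI is installed and configured, and the role name shown is a placeholder for your actual Permissions Management-related role.
+
+```powershell
+# Minimal sketch: replace the placeholder with your Permissions
+# Management-related role name. The output is the role's trust policy,
+# where the OIDC federated principal should appear.
+aws iam get-role --role-name '<your-permissions-management-role>' `
+    --query 'Role.AssumeRolePolicyDocument' --output json
+```
+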
+<!-- Next steps -->
active-directory Ui Audit Trail https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-audit-trail.md
+
+ Title: Use queries to see how users access information in an authorization system in Permissions Management
+description: How to use queries to see how users access information in an authorization system in Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Use queries to see how users access information
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Audit** dashboard in Permissions Management provides an overview of queries a Permissions Management user has created to review how users access their authorization systems and accounts.
+
+This article provides an overview of the components of the **Audit** dashboard.
+
+## View information in the Audit dashboard
++
+1. In Permissions Management, select the **Audit** tab.
+
+ Permissions Management displays the query options available to you.
+
+1. The following options display at the top of the **Audit** dashboard:
+
+ - A tab for each existing query. Select the tab to see details about the query.
+ - **New Query**: Select the tab to create a new query.
+ - **New tab (+)**: Select the tab to add a **New Query** tab.
+ - **Saved Queries**: Select to view a list of saved queries.
+
+1. To return to the main page, select **Back to Audit Trail**.
++
+## Use a query to view information
+
+1. In Permissions Management, select the **Audit** tab.
+1. The **New query** tab displays the following options:
+
+ - **Authorization Systems Type**: A list of your authorization systems: Amazon Web Services (**AWS**), Microsoft Azure (**Azure**), Google Cloud Platform (**GCP**), or Platform (**Platform**).
+
+ - **Authorization System**: A **List** of accounts and **Folders** in the authorization system.
+
+ - To display a **List** of accounts and **Folders** in the authorization system, select the down arrow, and then select **Apply**.
+
+1. To add an **Audit Trail Condition**, select **Conditions** (the eye icon), select the conditions you want to add, and then select **Close**.
+
+1. To edit existing parameters, select **Edit** (the pencil icon).
+
+1. To add the parameter that you created to the query, select **Add**.
+
+1. To search for activity data that you can add to the query, select **Search**.
+
+1. To save your query, select **Save**.
+
+1. To save your query under a different name, select **Save As** (the ellipses **(...)** icon).
+
+1. To discard your work and start creating a query again, select **Reset Query**.
+
+1. To delete a query, select the **X** to the right of the query tab.
+++
+## Next steps
+
+- For information on how to filter and view user activity, see [Filter and query user activity](product-audit-trail.md).
+- For information on how to create a query, see [Create a custom query](how-to-create-custom-queries.md).
+- For information on how to generate an on-demand report from a query, see [Generate an on-demand report from a query](how-to-audit-trail-results.md).
active-directory Ui Autopilot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-autopilot.md
+
+ Title: View rules in the Autopilot dashboard in Permissions Management
+description: How to view rules in the Autopilot dashboard in Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View rules in the Autopilot dashboard
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Autopilot** dashboard in Permissions Management provides a table of information about **Autopilot rules** for administrators.
++
+> [!NOTE]
+> Only users with the **Administrator** role can view and make changes on this tab.
+
+## View a list of rules
+
+1. In the Permissions Management home page, select the **Autopilot** tab.
+1. In the **Autopilot** dashboard, from the **Authorization system types** dropdown, select the authorization system types you want: Amazon Web Services (**AWS**), Microsoft **Azure**, or Google Cloud Platform (**GCP**).
+1. From the **Authorization System** dropdown, in the **List** and **Folders** box, select the account and folder names that you want.
+1. Select **Apply**.
+
+ The following information displays in the **Autopilot Rules** table:
+
+ - **Rule Name**: The name of the rule.
+ - **State**: The status of the rule: idle (not being used) or active (being used).
+ - **Rule Type**: The type of rule being applied.
+ - **Mode**: Indicates whether the rule runs on-demand.
+ - **Last Generated**: The date and time the rule was last generated.
+ - **Created By**: The email address of the user who created the rule.
+ - **Last Modified**: The date and time the rule was last modified.
+ - **Subscription**: Provides an **On** or **Off** subscription that allows you to receive email notifications when recommendations have been generated, applied, or unapplied.
+
+## View other available options for rules
+
+- Select the ellipses **(...)**
+
+ The following options are available:
+
+ - **View Rule**: Select to view details of the rule.
+ - **Delete Rule**: Select to delete the rule. Only the user who created the selected rule can delete the rule.
+ - **Generate Recommendations**: Creates recommendations for each user and the authorization system. Only the user who created the selected rule can create recommendations.
+ - **View Recommendations**: Displays the recommendations for each user and authorization system.
+ - **Notification Settings**: Displays the users subscribed to this rule. Only the user who created the selected rule can add other users to be notified.
+
+You can also select:
+
+- **Reload**: Select to refresh the displayed list of rules.
+- **Search**: Select to search for a specific rule.
+- **Columns**: From the dropdown list, select the columns you want to display.
+ - Select **Reset to default** to return to the system defaults.
+- **New Rule**: Select to create a new rule. For more information, see [Create a rule](how-to-create-rule.md).
+++
+## Next steps
+
+- For information about creating rules, see [Create a rule](how-to-create-rule.md).
+- For information about generating, viewing, and applying rule recommendations for rules, see [Generate, view, and apply rule recommendations for rules](how-to-recommendations-rule.md).
+- For information about notification settings for rules, see [View notification settings for a rule](how-to-notifications-rule.md).
active-directory Ui Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-dashboard.md
+
+ Title: View key statistics and data about your authorization system in Permissions Management
+description: How to view statistics and data about your authorization system in the Permissions Management.
+++++++ Last updated : 02/23/2022++++
+# View key statistics and data about your authorization system
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+Permissions Management provides a summary of key statistics and data about your authorization system regularly. This information is available for Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).
+
+## View metrics related to avoidable risk
+
+The data provided by Permissions Management includes metrics related to avoidable risk. These metrics allow the Permissions Management administrator to identify areas where they can reduce risks related to the principle of least permissions.
+
+You can view the following information in Microsoft Entra Permissions Management:
+
+- The **Permission Creep Index (PCI)** heat map on the Permissions Management **Dashboard** identifies:
+ - The number of users who have been granted high-risk permissions but aren't using them.
+ - The number of users who contribute to the permission creep index (PCI) and where they are on the scale.
+
+- The [**Analytics** dashboard](usage-analytics-home.md) provides a snapshot of permission metrics within the last 90 days.
++
+## Components of the Permissions Management Dashboard
+
+The Permissions Management **Dashboard** displays the following information:
+
+- **Authorization system types**: A dropdown list of authorization system types you can access: AWS, Azure, and GCP.
+
+- **Authorization System**: Displays a **List** of accounts and **Folders** in the selected authorization system you can access.
+
+ - To add or remove accounts and folders, from the **Name** list, select or deselect accounts and folders, and then select **Apply**.
+
+- **Permission Creep Index (PCI)**: The graph displays the **# of identities contributing to PCI**.
+
+ The PCI graph may display one or more bubbles. Each bubble displays the number of identities that are considered high risk. *High-risk* refers to the number of users who have permissions that exceed their normal or required usage.
+ - To display a list of the number of identities contributing to the **Low PCI**, **Medium PCI**, and **High PCI**, select the **List** icon in the upper right of the graph.
+ - To display the PCI graph again, select the **Graph** icon in the upper right of the list box.
+
+- **Highest PCI change**: Displays a list of your accounts and information about the **PCI** and **Change** in the index over the past 7 days.
+ - To download the list, select the down arrow in the upper right of the list box.
+
+ The following message displays: **We'll email you a link to download the file.**
+ - Check your email for the message from the Permissions Management Customer Success Team. The email contains a link to the **PCI history** report in Microsoft Excel format.
+ - The email also includes a link to the **Reports** dashboard, where you can configure how and when you want to receive reports automatically.
+ - To view all the PCI changes, select **View all**.
+
+- **Identity**: A summary of the **Findings** that includes:
+ - The number of **Inactive** identities that haven't been accessed in over 90 days.
+ - The number of **Super** identities that access data regularly.
+ - The number of identities that can **Access secret information**: A list of roles that can access sensitive or secret information.
+ - **Over-provisioned active** identities that have more permissions than they currently access.
+ - The number of identities **With permission escalation**: A list of roles that can increase permissions.
+
+ To view the list of all identities, select **All findings**.
+
+- **Resources**: A summary of the **Findings** that includes the number of resources that are:
+ - **Open security groups**
+ - **Microsoft managed keys**
+ - **Instances with access to S3 buckets**
+ - **Unencrypted S3 buckets**
+ - **SSE-S3 Encrypted buckets**
+ - **S3 Bucket accessible externally**
+++
+## The PCI heat map
+
+The **Permission Creep Index** heat map shows the incurred risk of users with access to high-risk permissions, and provides information about:
+
+- Users who were given access to high-risk permissions but aren't actively using them. *High-risk permissions* include the ability to modify or delete information in the authorization system.
+
+- The number of resources a user has access to, otherwise known as resource reach.
+
+- The high-risk permissions coupled with the number of resources a user has access to produce the score seen on the chart.
+
+ Permissions are classified as *high*, *medium*, and *low* (a minimal sketch of these score bands follows this list).
+
+ - **High** (displayed in red) - The score is between 68 and 100. The user has access to many high-risk permissions they aren't using, and has high resource reach.
+ - **Medium** (displayed in yellow) - The score is between 34 and 67. The user has access to some high-risk permissions that they use, or has medium resource reach.
+ - **Low** (displayed in green) - The score is between 0 and 33. The user has access to few high-risk permissions. They use all their permissions and have low resource reach.
+
+- The number displayed on the graph shows how many users contribute to a particular score. To view detailed data about a user, hover over the number.
+
+ The distribution graph displays all the users who contribute to the permission creep. It displays how many users contribute to a particular score. For example, if the score from the PCI chart is 14, the graph shows how many users have a score of 14.
+
+- The **PCI Trend** graph shows you the historical trend of the PCI score over the last 90 days.
+ - To download the **PCI history report**, select **Download**.
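+
+As a rough illustration of the score bands above (not the product's internal scoring algorithm, which is computed by Permissions Management), the following sketch buckets a PCI value into the *high*, *medium*, and *low* categories:
+
+```powershell
+# Minimal sketch of the High/Medium/Low bands described above; the real
+# PCI calculation that produces the score is internal to the product.
+function Get-PciBand {
+    param([ValidateRange(0, 100)][int]$Score)
+    if ($Score -ge 68) { 'High' }
+    elseif ($Score -ge 34) { 'Medium' }
+    else { 'Low' }
+}
+Get-PciBand -Score 14   # returns 'Low'
+```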
+
+### View information on the heat map
+
+1. Select the number on the heat map bubble to display:
+
+ - The total number of **Identities** and how many of them are in the high, medium, and low categories.
+ - The **PCI trend** over the last several weeks.
+
+1. The **Identity** section below the heat map on the left side of the page shows all the relevant findings about identities, including roles that can access secret information, roles that are inactive, over-provisioned active roles, and so on.
+
+ - To expand the full list of identities, select **All findings**.
+
+1. The **Resource** section below the heat map on the right side of the page shows all the relevant findings about resources. It includes unencrypted S3 buckets, open security groups, and so on.
++
+## The Analytics summary
+
+You can also view a summary of users and activities section on the [Analytics dashboard](usage-analytics-home.md). This dashboard provides a snapshot of the following high-risk tasks or actions users have accessed, and displays the total number of users with the high-risk access, how many users are inactive or have unexecuted tasks, and how many users are active or have executed tasks:
+
+- **Users with access to high-risk tasks**: Displays the total number of users with access to a high-risk task (**Total**), how many users have access but haven't used the task (**Inactive**), and how many users are actively using the task (**Active**).
+
+- **Users with access to delete tasks**: A subset of high-risk tasks, which displays the number of users with access to delete tasks (**Total**), how many users have the delete permissions but haven't used the permissions (**Inactive**), and how many users are actively executing the delete capability (**Active**).
+
+- **High-risk tasks accessible by users**: Displays all available high-risk tasks in the authorization system (**Granted**), how many high-risk tasks aren't used (**Unexecuted**), and how many high-risk tasks are used (**Executed**).
+
+- **Delete tasks accessible by users**: Displays all available delete tasks in the authorization system (**Granted**), how many delete tasks aren't used (**Unexecuted**), and how many delete tasks are used (**Executed**).
+
+- **Resources that permit high-risk tasks**: Displays the total number of resources a user has access to (**Total**), how many resources are available but not used (**Inactive**), and how many resources are used (**Active**).
+
+- **Resources that permit delete tasks**: Displays the total number of resources that permit delete tasks (**Total**), how many resources with delete tasks aren't used (**Inactive**), and how many resources with delete tasks are used (**Active**).
+++
+## Next steps
+
+- For information on how to view authorization system and account activity data on the Permissions Management Dashboard, see [View data about the activity in your authorization system](product-dashboard.md).
+- For an overview of the Analytics dashboard, see [An overview of the Analytics dashboard](usage-analytics-home.md).
active-directory Ui Remediation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-remediation.md
+
+ Title: View existing roles/policies and requests for permission in the Remediation dashboard in Permissions Management
+description: How to view existing roles/policies and requests for permission in the Remediation dashboard in Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View roles/policies and requests for permission in the Remediation dashboard
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Remediation** dashboard in Permissions Management provides an overview of roles/policies, permissions, a list of existing requests for permissions, and requests for permissions you have made.
+
+This article provides an overview of the components of the **Remediation** dashboard.
+
+> [!NOTE]
+> To view the **Remediation** dashboard, you must have **Viewer**, **Controller**, or **Administrator** permissions. To make changes on this dashboard, you must have **Controller** or **Administrator** permissions. If you don't have these permissions, contact your system administrator.
+
+> [!NOTE]
+> Microsoft Azure uses the term *role* for what other cloud providers call *policy*. Permissions Management automatically makes this terminology change when you select the authorization system type. In the user documentation, we use *role/policy* to refer to both.
+
+## Display the Remediation dashboard
+
+1. On the Permissions Management home page, select the **Remediation** tab.
+
+ The **Remediation** dashboard includes six subtabs:
+
+ - **Roles/Policies**: Use this subtab to perform Create Read Update Delete (CRUD) operations on roles/policies.
+ - **Permissions**: Use this subtab to perform Read, Update, Delete (RUD) operations on granted permissions.
+ - **Role/Policy Template**: Use this subtab to create a template for roles/policies.
+ - **Requests**: Use this subtab to view approved, pending, and processed Permission on Demand (POD) requests.
+ - **My Requests**: Use this subtab to manage the lifecycle of POD requests that you created or that need your approval.
+ - **Settings**: Use this subtab to select **Request Role/Policy Filters**, **Request Settings**, and **Auto-Approve** settings.
+
+1. Use the dropdown to select the **Authorization System Type** and **Authorization System**, and then select **Apply**.
+
+## View and create roles/policies
+
+The **Roles/Policies** subtab provides the following settings that you can use to view and create a role/policy.
+
+- **Authorization System Type**: Displays a dropdown with authorization system types you can access, Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).
+- **Authorization System**: Displays a list of authorization systems accounts you can access.
+- **Policy Type**: A dropdown with available role/policy types. You can select **All**, **Custom**, **System**, or **Permissions Management Only**.
+- **Policy Status**: A dropdown with available role/policy statuses. You can select **All**, **Assigned**, or **Unassigned**.
+- **Policy Usage**: A dropdown with **All** or **Unused** roles/policies.
+- **Apply**: Select this option to save the changes you've made.
+- **Reset Filter**: Select this option to discard the changes you've made.
+
+The **Policy list** displays a list of existing roles/policies and the following information about each role/policy.
+
+- **Policy Name**: The name of the roles/policies available to you.
+- **Policy Type**: **Custom**, **System**, or **Permissions Management Only**
+- **Actions**
+ - Select **Clone** to create a duplicate copy of the role/policy.
+ - Select **Modify** to change the existing role/policy.
+ - Select **Delete** to delete the role/policy.
+
+Other options available to you:
+- **Search**: Select this option to search for a specific role/policy.
+- **Reload**: Select this option to refresh the displayed list of roles/policies.
+- **Export CSV**: Select this option to export the displayed list of roles/policies as a comma-separated values (CSV) file (see the sketch after this list).
+
+ When the file is successfully exported, a message appears: **Exported Successfully.**
+
+ - Check your email for a message from the Permissions Management Customer Success Team. This email contains a link to:
+ - The **Role Policy Details** report in CSV format.
+ - The **Reports** dashboard where you can configure how and when you can automatically receive reports.
+- **Create Role/Policy**: Select this option to create a new role/policy. For more information, see [Create a role/policy](how-to-create-role-policy.md).
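+
+After you export the **Role Policy Details** report, you can work with it like any CSV file. The following is a minimal sketch; the file name and the column names ('Policy Name', 'Policy Type') are assumptions that may differ in your export.
+
+```powershell
+# Minimal sketch: list the custom roles/policies from an exported report.
+# File name and column headers are assumptions; adjust to match your export.
+Import-Csv -Path '.\RolePolicyDetails.csv' |
+    Where-Object { $_.'Policy Type' -eq 'Custom' } |
+    Select-Object 'Policy Name', 'Policy Type'
+```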
++
+## Add filters to permissions
+
+The **Permissions** subtab provides the following settings that you can use to add filters to your permissions.
+
+- **Authorization System Type**: Displays a dropdown with authorization system types you can access, AWS, Azure, and GCP.
+- **Authorization System**: Displays a list of authorization systems accounts you can access.
+- **Search For**: A dropdown from which you can select **Group**, **User**, or **Role**.
+- **User Status**: A dropdown from which you can select **Any**, **Active**, or **Inactive**.
+- **Permission Creep Index (PCI)**: A dropdown from which you can select a PCI rating of **Any**, **High**, **Medium**, or **Low**.
+- **Task Usage**: A dropdown from which you can select **Any**, **Granted**, **Used**, or **Unused**.
+- **Enter a Username**: A dropdown from which you can select a username.
+- **Enter a Group Name**: A dropdown from which you can select a group name.
+- **Apply**: Select this option to save the changes you've made and run the filter.
+- **Reset Filter**: Select this option to discard the changes you've made.
+- **Export CSV**: Select this option to export the displayed list of roles/policies as a comma-separated values (CSV) file.
+
+ When the file is successfully exported, a message appears: **Exported Successfully.**
+
+ - Check your email for a message from the Permissions Management Customer Success Team. This email contains a link to:
+ - The **Role Policy Details** report in CSV format.
+ - The **Reports** dashboard where you can configure how and when you can automatically receive reports.
++
+## Create templates for roles/policies
+
+Use the **Role/Policy Template** subtab to create a template for roles/policies.
+
+1. Select:
+ - **Authorization System Type**: Displays a dropdown with authorization system types you can access, AWS, Azure, and GCP.
+ - **Create Template**: Select this option to create a template.
+
+1. In the **Details** page, make the required selections:
+ - **Authorization System Type**: Select the authorization system types you want, **AWS**, **Azure**, or **GCP**.
+ - **Template Name**: Enter a name for your template, and then select **Next**.
+
+1. In the **Statements** page, complete the **Tasks**, **Resources**, **Request Conditions**, and **Effect** sections, and then select **Save** to save your role/policy template. (A rough illustration of how these elements fit together follows.)
+
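+The **Statements** page mirrors the shape of a cloud provider's policy statement. As a rough illustration only (this is an AWS-style statement, not the Permissions Management template schema), the same four elements combine like this:
+
+```powershell
+# Illustrative only: an AWS-style policy statement built as a PowerShell
+# hashtable, showing how tasks (actions), resources, request conditions,
+# and effect fit together. All values are placeholders.
+$statement = @{
+    Effect    = 'Allow'
+    Action    = @('s3:GetObject')                     # tasks
+    Resource  = @('arn:aws:s3:::example-bucket/*')    # resources
+    Condition = @{ IpAddress = @{ 'aws:SourceIp' = '203.0.113.0/24' } }
+}
+$statement | ConvertTo-Json -Depth 4
+```
+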
+Other options available to you:
+- **Search**: Select this option to search for a specific role/policy.
+- **Reload**: Select this option to refresh the displayed list of roles/policies.
+
+## View requests for permission
+
+Use the **Requests** tab to view a list of **Pending**, **Approved**, and **Processed** requests for permissions your team members have made.
+
+- Select:
+ - **Authorization System Type**: Displays a dropdown with authorization system types you can access, AWS, Azure, and GCP.
+ - **Authorization System**: Displays a list of authorization systems accounts you can access.
+
+Other options available to you:
+
+- **Reload**: Select this option to refresh the displayed list of requests.
+- **Search**: Select this option to search for a specific request.
+- **Columns**: Select one or more of the following to view more information about the request:
+ - **Submitted By**
+ - **On Behalf Of**
+ - **Authorization System**
+ - **Tasks/Scope/Policies**
+ - **Request Date**
+ - **Schedule**
+ - **Submitted**
+ - **Reset to Default**: Select this option to discard your settings.
+
+### View pending requests
+
+The **Pending** table displays the following information:
+
+- **Summary**: A summary of the request.
+- **Submitted By**: The name of the user who submitted the request.
+- **On Behalf Of**: The name of the user on whose behalf the request was made.
+- **Authorization System**: The authorization system the user selected.
+- **Task/Scope/Policies**: The type of task/scope/policy selected.
+- **Request Date**: The date when the request was made.
+- **Submitted**: The period since the request was made.
+- The ellipses **(...)** menu - Select the ellipses, and then select **Details**, **Approve**, or **Reject**.
+- Select an option:
+ - **Reload**: Select this option to refresh the displayed list of requests.
+ - **Search**: Select this option to search for a specific request.
+ - **Columns**: From the dropdown, select the columns you want to display.
+
+**To return to the previous view:**
+
+- Select the up arrow.
+
+### View approved requests
+
+The **Approved** table displays information about the requests that have been approved.
+
+### View processed requests
+
+The **Processed** table displays information about the requests that have been processed.
+
+## View requests for permission for your approval
+
+Use the **My Requests** subtab to view a list of **Pending**, **Approved**, and **Processed** requests for permissions your team members have made and you must approve or reject.
+
+- Select:
+ - **Authorization System Type**: Displays a dropdown with authorization system types you can access, AWS, Azure, and GCP.
+ - **Authorization System**: Displays a list of authorization systems accounts you can access.
+
+Other options available to you:
+
+- **Reload**: Select this option to refresh the displayed list of requests.
+- **Search**: Select this option to search for a specific request.
+- **Columns**: Select one or more of the following to view more information about the request:
+ - **On Behalf Of**
+ - **Authorization System**
+ - **Tasks/Scope/Policies**
+ - **Request Date**
+ - **Schedule**
+ - **Reset to Default**: Select this option to discard your settings.
+- **New Request**: Select this option to create a new request for permissions. For more information, see [Create or approve a request for permissions](how-to-create-approve-privilege-request.md).
+
+### View pending requests
+
+The **Pending** table displays the following information:
+
+- **Summary**: A summary of the request.
+- **Submitted By**: The name of the user who submitted the request.
+- **On Behalf Of**: The name of the user on whose behalf the request was made.
+- **Authorization System**: The authorization system the user selected.
+- **Task/Scope/Policies**: The type of task/scope/policy selected.
+- **Request Date**: The date when the request was made.
+- **Submitted**: The period since the request was made.
+- The ellipses **(...)** menu - Select the ellipses, and then select **Details**, **Approve**, or **Reject**.
+- Select an option:
+ - **Reload**: Select this option to refresh the displayed list of requests.
+ - **Search**: Select this option to search for a specific request.
+ - **Columns**: From the dropdown, select the columns you want to display.
++
+### View approved requests
+
+The **Approved** table displays information about the requests that have been approved.
+
+### View processed requests
+
+The **Processed** table displays information about the requests that have been processed.
+
+## Make setting selections for requests and auto-approval
+
+The **Settings** subtab provides the following settings that you can use to make setting selections to **Request Role/Policy Filters**, **Request Settings**, and **Auto-Approve** requests.
+
+- **Authorization System Type**: Displays a dropdown with authorization system types you can access, AWS, Azure, and GCP.
+- **Authorization System**: Displays a list of authorization systems accounts you can access.
+- **Reload**: Select this option to refresh the displayed list of role/policy filters.
+- **Create Filter**: Select this option to create a new filter.
+
+## Next steps
++
+- For information on how to view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](ui-remediation.md).
+- For information on how to create a role/policy, see [Create a role/policy](how-to-create-role-policy.md).
+- For information on how to clone a role/policy, see [Clone a role/policy](how-to-clone-role-policy.md).
+- For information on how to delete a role/policy, see [Delete a role/policy](how-to-delete-role-policy.md).
+- For information on how to modify a role/policy, see [Modify a role/policy](how-to-modify-role-policy.md).
+- To view information about roles/policies, see [View information about roles/policies](how-to-view-role-policy.md).
+- For information on how to attach and detach permissions for AWS identities, see [Attach and detach policies for AWS identities](how-to-attach-detach-permissions.md).
+- For information on how to revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities, see [Revoke high-risk and unused tasks or assign read-only status for Azure and GCP identities](how-to-revoke-task-readonly-status.md)
+- For information on how to create or approve a request for permissions, see [Create or approve a request for permissions](how-to-create-approve-privilege-request.md).
active-directory Ui Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-tasks.md
+
+ Title: View information about active and completed tasks in Permissions Management
+description: How to view information about active and completed tasks in the Activities pane in Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View information about active and completed tasks
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to use the **Permissions Management Tasks** pane in Permissions Management.
+
+## Display active and completed tasks
+
+1. In the Permissions Management home page, select **Tasks** (the timer icon).
+
+ The **Permissions Management Tasks** pane appears on the right of the Permissions Management home page. It has two tabs:
+ - **Active**: Displays a list of active tasks, a description of each task, and when the task was started.
+
+ If there are no active tasks, the following message displays: **There are no active tasks**.
+ - **Completed**: Displays a list of completed tasks, a description of each task, when the task was started and ended, and whether the task **Failed** or **Succeeded**.
+
+ If there are no completed activities, the following message displays: **There are no recently completed tasks**.
+1. To close the **Permissions Management Tasks** pane, click outside the pane.
+
+## Next steps
+
+- For information on how to create a role/policy in the **Remediation** dashboard, see [Create a role/policy](how-to-create-role-policy.md).
active-directory Ui Triggers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-triggers.md
+
+ Title: View information about activity triggers in Permissions Management
+description: How to view information about activity triggers in the Activity triggers dashboard in Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View information about activity triggers
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to use the **Activity triggers** dashboard in Permissions Management to view information about activity alerts and triggers.
+
+## Display the Activity triggers dashboard
+
+- In the Permissions Management home page, select **Activity triggers** (the bell icon).
+
+ The **Activity triggers** dashboard has four tabs:
+
+ - **Activity**
+ - **Rule-Based Anomaly**
+ - **Statistical Anomaly**
+ - **Permission Analytics**
+
+ Each tab has two subtabs:
+
+ - **Alerts**
+ - **Alert Triggers**
+
+## View information about alerts
+
+The **Alerts** subtab in the **Activity**, **Rule-Based Anomaly**, **Statistical Anomaly**, and **Permission Analytics** tabs displays the following information:
+
+- **Alert Name**: Select **All** alert names or specific ones.
+- **Date**: Select **Last 24 hours**, **Last 2 Days**, **Last Week**, or **Custom Range.**
+
+ - If you select **Custom Range**, also enter **From** and **To** duration settings.
+- **Apply**: Select this option to activate your settings.
+- **Reset Filter**: Select this option to discard your settings.
+- **Reload**: Select this option to refresh the displayed information.
+- **Create Activity Trigger**: Select this option to [create a new alert trigger](how-to-create-alert-trigger.md).
+- The **Alerts** table displays a list of alerts with the following information:
+ - **Alerts**: The name of the alert.
+ - **# of users subscribed**: The number of users who have subscribed to the alert.
+ - **Created By**: The name of the user who created the alert.
+ - **Modified By**: The name of the user who modified the alert.
+
+The **Rule-Based Anomaly** tab and the **Statistical Anomaly** tab both have one more option:
+
+- **Columns**: Select the columns you want to display: **Task**, **Resource**, and **Identity**.
+ - To return to the system default settings, select **Reset to default**.
+
+## View information about alert triggers
+
+The **Alert Triggers** subtab in the **Activity**, **Rule-Based Anomaly**, **Statistical Anomaly**, and **Permission Analytics** tabs displays the following information:
+
+- **Status**: Select the alert status you want to display: **All**, **Activated**, or **Deactivated**.
+- **Apply**: Select this option to activate your settings.
+- **Reset Filter**: Select this option to discard your settings.
+- **Reload**: Select **Reload** to refresh the displayed information.
+- **Create Activity Trigger**: Select this option to [create a new alert trigger](how-to-create-alert-trigger.md).
+- The **Triggers** table displays a list of triggers with the following information:
+ - **Alerts**: The name of the alert.
+ - **# of users subscribed**: The number of users who have subscribed to the alert.
+ - **Created By**: The name of the user who created the alert.
+ - **Modified By**: The name of the user who modified the alert.
++++++
+## Next steps
+
+- For information on activity alerts and alert triggers, see [Create and view activity alerts and alert triggers](how-to-create-alert-trigger.md).
+- For information on rule-based anomalies and anomaly triggers, see [Create and view rule-based anomalies and anomaly triggers](product-rule-based-anomalies.md).
+- For information on finding outliers in identity's behavior, see [Create and view statistical anomalies and anomaly triggers](product-statistical-anomalies.md).
+- For information on permission analytics triggers, see [Create and view permission analytics triggers](product-permission-analytics.md).
active-directory Ui User Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-user-management.md
+
+ Title: Manage users and groups with the User management dashboard in Permissions Management
+description: How to manage users and groups in the User management dashboard in Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# Manage users and groups with the User management dashboard
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article describes how to use the Permissions Management **User management** dashboard to view and manage users and groups.
+
+**To display the User management dashboard**:
+
+- In the upper right of the Permissions Management home page, select **User** (your initials), and then select **User management**.
+
+ The **User Management** dashboard has two tabs:
+
+ - **Users**: Displays information about registered users.
+ - **Groups**: Displays information about groups.
+
+## Manage users
+
+Use the **Users** tab to display the following information about users:
+
+- **Name** and **Email Address**: The user's name and email address.
+- **Joined On**: The date the user registered on the system.
+- **Recent Activity**: The date the user last used their permissions to access the system.
+- The ellipses **(...)** menu: Select the ellipses, and then select **View Permissions** to open the **View User Permission** box.
+
+ - To view details about the user's permissions, select one of the following options:
+ - **Admin for all Authorization System Types** provides **View**, **Control**, and **Approve** permissions for all authorization system types.
+ - **Admin for selected Authorization System Types** provides **View**, **Control**, and **Approve** permissions for selected authorization system types.
+ - **Custom** provides **View**, **Control**, and **Approve** permissions for the authorization system types you select.
+
+You can also select the following options:
+
+- **Reload**: Select this option to refresh the information displayed in the **User** table.
+- **Search**: Enter a name or email address to search for a specific user.
+
+## Manage groups
+
+Use the **Groups** tab to display the following information about groups:
+
+- **Name**: Displays the group's name.
+- **Permissions**:
+ - The **Authorization Systems** and the type of permissions the user has been granted: **Admin for all Authorization System Types**, **Admin for selected Authorization System Types**, or **Custom**.
+ - Information about the **Viewer**, **Controller**, **Approver**, and **Requestor**.
+- **Modified By**: The email address of the user who modified the group.
+- **Modified On**: The date the user last modified the group.
+
+- The ellipses **(...)** menu: Select the ellipses to:
+
+ - **View Permissions**: Select this option to view details about the group's permissions, and then select one of the following options:
+ - **Admin for all Authorization System Types** provides **View**, **Control**, and **Approve** permissions for all authorization system types.
+ - **Admin for selected Authorization System Types** provides **View**, **Control**, and **Approve** permissions for selected authorization system types.
+ - **Custom** provides **View**, **Control**, and **Approve** permissions for specific authorization system types that you select.
+
+ - **Edit Permissions**: Select this option to modify the group's permissions.
+ - **Delete**: Select this option to delete the group's permissions.
+
+ The **Delete Permission** box asks you to confirm that you want to delete the group.
+ - Select **Delete** if you want to delete the group, **Cancel** to discard your changes.
++
+You can also select the following options:
+
+- **Reload**: Select this option to refresh the information displayed in the **Groups** table.
+- **Search**: Enter a name or email address to search for a specific user.
+- **Filters**: Select the authorization systems and accounts you want to display.
+- **Create Permission**: Create a group and set up its permissions. For more information, see [Create group-based permissions](how-to-create-group-based-permissions.md)
+++
+## Next steps
+
+- For information about how to view information about active and completed tasks, see [View information about active and completed tasks](ui-tasks.md).
+- For information about how to view personal and organization information, see [View personal and organization information](product-account-settings.md).
+- For information about how to select group-based permissions settings, see [Select group-based permissions settings](how-to-create-group-based-permissions.md).
active-directory Usage Analytics Access Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-access-keys.md
+
+ Title: View analytic information about access keys in Permissions Management
+description: How to view analytic information about access keys in Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View analytic information about access keys
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Analytics** dashboard in Permissions Management provides details about identities, resources, and tasks that you can use to make informed decisions about granting permissions and reducing the risk of unused permissions.
+
+- **Users**: Tracks assigned permissions and usage of various identities.
+- **Groups**: Tracks assigned permissions and usage of the group and the group members.
+- **Active Resources**: Tracks active resources (used in the last 90 days).
+- **Active Tasks**: Tracks active tasks (performed in the last 90 days).
+- **Access Keys**: Tracks the permission usage of access keys for a given user.
+- **Serverless Functions**: Tracks assigned permissions and usage of the serverless functions.
+
+This article describes how to view usage analytics about access keys.
+
+## Create a query to view access keys
+
+When you select **Access keys**, the **Analytics** dashboard provides a high-level overview of tasks used by various identities.
+
+1. On the main **Analytics** dashboard, select **Access Keys** from the drop-down list at the top of the screen.
+
+ The following components make up the **Access Keys** dashboard:
+
+ - **Authorization System Type**: Select the authorization system type you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
+ - **Authorization System**: Select from a **List** of accounts and **Folders**.
+ - **Key Status**: Select **All**, **Active**, or **Inactive**.
+ - **Key Activity State**: Select **All**, how long the access key has been used, or **Not Used**.
+ - **Key Age**: Select **All** or how long ago the access key was created.
+ - **Task Type**: Select **All** tasks, **High Risk Tasks** or, for a list of tasks where users have deleted data, select **Delete Tasks**.
+ - **Search**: Enter criteria to find specific tasks.
+1. Select **Apply** to display the criteria you've selected.
+
+ Select **Reset Filter** to discard your changes.
++
+## View the results of your query
+
+The **Access Keys** table displays the results of your query.
+
+- **Access Key ID**: Provides the ID for the access key.
+ - To view details about the access keys, select the down arrow to the left of the ID.
+- The **Owner** name.
+- The **Account** number.
+- The **Permission Creep Index (PCI)**: Provides the following information:
+ - **Index**: A numeric value assigned to the PCI.
+ - **Since**: How many days the PCI value has been at the displayed level.
+- **Tasks**: Displays the number of **Granted** and **Executed** tasks.
+- **Resources**: The number of resources used.
+- **Access Key Age**: How old the access key is, in days.
+- **Last Used**: How long ago the access key was last accessed.
+
+## Apply filters to your query
+
+There are many filter options within the **Access Keys** screen, including filters by **Authorization System**, **Key Status**, **Key Activity State**, **Key Age**, and **Task Type**.
+Filters can be applied in one or more categories, depending on the type of information you're looking for.
+
+### Apply filters by authorization system type
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset Filter** to discard your changes.
++
+### Apply filters by authorization system
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select accounts from a **List** of accounts and **Folders**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset Filter** to discard your changes.
+
+### Apply filters by key status
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Key Status** dropdown, select the type of key: **All**, **Active**, or **Inactive**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset Filter** to discard your changes.
+
+### Apply filters by key activity status
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Key Activity State** dropdown, select **All**, the duration for how long the access key has been used, or **Not Used**.
+
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset Filter** to discard your changes.
+
+### Apply filters by key age
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Key Age** dropdown, select **All** or how long ago the access key was created.
+
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset Filter** to discard your changes.
+
+### Apply filters by task type
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Task Type** dropdown, select **All** tasks, **High Risk Tasks** or, for a list of tasks where users have deleted data, select **Delete Tasks**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset Filter** to discard your changes.
+++
+## Export the results of your query
+
+- To view a report of the results of your query as a comma-separated values (CSV) file, select **Export**, and then select **CSV** or **CSV (Detailed)**.
+
+## Next steps
+
+- To view active tasks, see [View usage analytics about active tasks](usage-analytics-active-tasks.md).
+- To view assigned permissions and usage by users, see [View usage analytics about users](usage-analytics-users.md).
+- To view assigned permissions and usage of the group and the group members, see [View usage analytics about groups](usage-analytics-groups.md).
+- To view active resources, see [View usage analytics about active resources](usage-analytics-active-resources.md).
+- To view assigned permissions and usage of the serverless functions, see [View usage analytics about serverless functions](usage-analytics-serverless-functions.md).
active-directory Usage Analytics Active Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-active-resources.md
+
+ Title: View analytic information about active resources in Permissions Management
+description: How to view usage analytics about active resources in Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View analytic information about active resources
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Analytics** dashboard in Permissions Management collects detailed information about all identity types, then analyzes, reports on, and visualizes that data. System administrators can use the information to make informed decisions about granting permissions and reducing the risk of unused permissions for:
+
+- **Users**: Tracks assigned permissions and usage of various identities.
+- **Groups**: Tracks assigned permissions and usage of the group and the group members.
+- **Active Resources**: Tracks active resources (used in the last 90 days).
+- **Active Tasks**: Tracks active tasks (performed in the last 90 days).
+- **Access Keys**: Tracks the permission usage of access keys for a given user.
+- **Serverless Functions**: Tracks assigned permissions and usage of the serverless functions.
+
+This article describes how to view usage analytics about active resources.
+
+## Create a query to view active resources
+
+1. On the main **Analytics** dashboard, select **Active Resources** from the drop-down list at the top of the screen.
+
+    The dashboard lists only resources that are active. The following components make up the **Active Resources** dashboard:
+1. From the dropdowns, select:
+    - **Authorization System Type**: The authorization system type you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
+ - **Authorization System**: The **List** of accounts and **Folders** you want to include.
+ - **Tasks Type**: Select **All** tasks, **High Risk Tasks** or, for a list of tasks where users have deleted data, select **Delete Tasks**.
+ - **Service Resource Type**: The service resource type.
+ - **Search**: Enter criteria to find specific tasks.
+
+1. Select **Apply** to display the criteria you've selected.
+
+ Select **Reset Filter** to discard your changes.
++
+## View the results of your query
+
+The **Active Resources** table displays the results of your query:
+
+- **Resource Name**: Provides the name of the resource.
+  - To view details about the resource, select the down arrow.
+- **Account**: The name of the account.
+- **Resources Type**: The type of resources used, for example, **bucket** or **key**.
+- **Tasks**: Displays the number of **Granted** and **Executed** tasks.
+- **Number of Users**: The number of users with access to the resource and the number who have accessed it.
+- Select the ellipses **(...)** and select **Tags** to add a tag.
+
+## Add a tag to an active resource
+
+1. Select the ellipses **(...)** and select **Tags**.
+1. From the **Select a Tag** dropdown, select a tag.
+1. To create a custom tag, select **New Custom Tag**, add a tag name, and then select **Create**.
+1. In the **Value (Optional)** box, enter a value.
+1. Select the ellipses **(...)** to select **Advanced Save** options, and then select **Save**.
+1. To add the tag to the active resource, select **Add Tag**.
++
+## Apply filters to your query
+
+There are many filter options within the **Active Resources** screen, including filters by **Authorization System Type**, **Authorization System**, **Task Type**, and **Service Resource Type**.
+Filters can be applied in one or more categories depending on the type of information you're looking for.
+
+### Apply filters by authorization system type
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset Filter** to discard your changes.
++
+### Apply filters by authorization system
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset Filter** to discard your changes.
+
+### Apply filters by task type
+
+You can filter the results by the type of task.
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Task Type** dropdown, select the type of task: **All**, **High Risk Tasks**, or **Delete Tasks**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset Filter** to discard your changes.
++
+### Apply filters by service resource type
+
+You can filter the results by the type of service resource.
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Service Resource Type** dropdown, select the type of service resource.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset Filter** to discard your changes.
+
+## Export the results of your query
+
+- To view a report of the results of your query as a comma-separated values (CSV) file, select **Export**, and then select **CSV**.
++
+## Next steps
+
+- To track active tasks, see [View usage analytics about active tasks](usage-analytics-active-tasks.md).
+- To track assigned permissions and usage of users, see [View usage analytics about users](usage-analytics-users.md).
+- To track assigned permissions and usage of the group and the group members, see [View usage analytics about groups](usage-analytics-groups.md).
+- To track the permission usage of access keys for a given user, see [View usage analytics about access keys](usage-analytics-access-keys.md).
+- To track assigned permissions and usage of the serverless functions, see [View usage analytics about serverless functions](usage-analytics-serverless-functions.md).
active-directory Usage Analytics Active Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-active-tasks.md
+
+ Title: View analytic information about active tasks in Permissions Management
+description: How to view analytic information about active tasks in Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View analytic information about active tasks
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Analytics** dashboard in Permissions Management collects detailed information about all identity types, then analyzes, reports on, and visualizes that data. System administrators can use the information to make informed decisions about granting permissions and reducing the risk of unused permissions for:
+
+- **Users**: Tracks assigned permissions and usage of various identities.
+- **Groups**: Tracks assigned permissions and usage of the group and the group members.
+- **Active Resources**: Tracks active resources (used in the last 90 days).
+- **Active Tasks**: Tracks active tasks (performed in the last 90 days).
+- **Access Keys**: Tracks the permission usage of access keys for a given user.
+- **Serverless Functions**: Tracks assigned permissions and usage of the serverless functions.
+
+This article describes how to view usage analytics about active tasks.
+
+## Create a query to view active tasks
+
+When you select **Active Tasks**, the **Analytics** dashboard provides a high-level overview of tasks used by various identities.
+
+1. On the main **Analytics** dashboard, select **Active Tasks** from the drop-down list at the top of the screen.
+
+ The dashboard only lists tasks that are active. The following components make up the **Active Tasks** dashboard:
+
+    - **Authorization System Type**: Select the authorization system type you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
+    - **Authorization System**: Select from a **List** of accounts and **Folders**.
+    - **Tasks Type**: Select **All** tasks, **High Risk Tasks** or, for a list of tasks where users have deleted data, select **Delete Tasks**.
+ - **Search**: Enter criteria to find specific tasks.
+
+1. Select **Apply** to display the criteria you've selected.
+
+ Select **Reset Filter** to discard your changes.
++
+## View the results of your query
+
+The **Active Tasks** table displays the results of your query.
+
+- **Task Name**: Provides the name of the task.
+ - To view details about the task, select the down arrow in the table.
+
+ - A **Normal Task** icon displays to the left of the task name if the task is normal (that is, not risky).
+ - A **Deleted Task** icon displays to the left of the task name if the task involved deleting data.
+ - A **High-Risk Task** icon displays to the left of the task name if the task is high-risk.
+
+- **Performed on (resources)**: The number of resources on which the task was used.
+
+- **Number of Users**: Displays how many users performed the task. The counts are organized into the following columns:
+ - **With Access**: Displays the number of users that have access to the task but haven't accessed it.
+ - **Accessed**: Displays the number of users that have accessed the task.
++
+## Apply filters to your query
+
+There are many filter options within the **Active Tasks** screen, including filters by **Authorization System Type**, **Authorization System**, and **Task Type**.
+Filters can be applied in one, two, or all three categories depending on the type of information you're looking for.
+
+### Apply filters by authorization system type
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset Filter** to discard your changes.
+
+### Apply filters by authorization system
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select accounts from a **List** of accounts and **Folders**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset Filter** to discard your changes.
++
+### Apply filters by task type
+
+You can filter the results by the type of task.
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Task Type** dropdown, select the type of tasks: **All**, **High Risk Tasks**, or **Delete Tasks**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset Filter** to discard your changes.
++
+## Export the results of your query
+
+- To view a report of the results of your query as a comma-separated values (CSV) file, select **Export**, and then select **CSV**.
+
+## Next steps
+
+- To view assigned permissions and usage by users, see [View analytic information about users](usage-analytics-users.md).
+- To view assigned permissions and usage of the group and the group members, see [View analytic information about groups](usage-analytics-groups.md).
+- To view active resources, see [View analytic information about active resources](usage-analytics-active-resources.md).
+- To view the permission usage of access keys for a given user, see [View analytic information about access keys](usage-analytics-access-keys.md).
+- To view assigned permissions and usage of the serverless functions, see [View analytic information about serverless functions](usage-analytics-serverless-functions.md).
active-directory Usage Analytics Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-groups.md
+
+ Title: View analytic information about groups in Permissions Management
+description: How to view analytic information about groups in Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View analytic information about groups
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Analytics** dashboard in Permissions Management collects detailed information about all identity types, then analyzes, reports on, and visualizes that data. System administrators can use the information to make informed decisions about granting permissions and reducing the risk of unused permissions for:
+
+- **Users**: Tracks assigned permissions and usage of various identities.
+- **Groups**: Tracks assigned permissions and usage of the group and the group members.
+- **Active Resources**: Tracks active resources (used in the last 90 days).
+- **Active Tasks**: Tracks active tasks (performed in the last 90 days).
+- **Access Keys**: Tracks the permission usage of access keys for a given user.
+- **Serverless Functions**: Tracks assigned permissions and usage of the serverless functions.
+
+This article describes how to view usage analytics about groups.
+
+## Create a query to view groups
+
+When you select **Groups**, the **Usage Analytics** dashboard provides a high-level overview of groups.
+
+1. On the main **Analytics** dashboard, select **Groups** from the drop-down list at the top of the screen.
+
+ The following components make up the **Groups** dashboard:
+
+    - **Authorization System Type**: Select the authorization system type you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
+ - **Authorization System**: Select from a **List** of accounts and **Folders**.
+ - **Group Type**: Select **All**, **ED**, or **Local**.
+ - **Group Activity Status**: Select **All**, **Active**, or **Inactive**.
+    - **Tasks Type**: Select **All**, **High Risk Tasks**, or **Delete Tasks**.
+    - **Search**: Enter a group name to find a specific group.
+1. To display the criteria you've selected, select **Apply**.
+    Select **Reset Filter** to discard your changes.
++
+## View the results of your query
+
+The **Groups** table displays the results of your query:
+
+- **Group Name**: Provides the name of the group.
+ - To view details about the group, select the down arrow.
+- A **Group Type** icon displays to the left of the group name to describe the type of group (**ED** or **Local**).
+- The **Domain/Account** name.
+- The **Permission Creep Index (PCI)**: Provides the following information:
+ - **Index**: A numeric value assigned to the PCI.
+ - **Since**: How many days the PCI value has been at the displayed level.
+- **Tasks**: Displays the number of **Granted** and **Executed** tasks.
+- **Resources**: The number of resources used.
+- **Users**: The number of users who accessed the group.
+- Select the ellipses **(...)** and select **Tags** to add a tag.
+
+## Add a tag to a group
+
+1. Select the ellipses **(...)** and select **Tags**.
+1. From the **Select a Tag** dropdown, select a tag.
+1. To create a custom tag, select **New Custom Tag**, add a tag name, and then select **Create**.
+1. In the **Value (Optional)** box, enter a value.
+1. Select the ellipses **(...)** to select **Advanced Save** options, and then select **Save**.
+1. To add the tag to the group, select **Add Tag**.
+
+## View detailed information about a group
+
+1. Select the down arrow to the left of the **Group Name**.
+
+    A list of **Tasks**, organized by **Unused** and **Used**, is displayed.
+
+1. Select the arrow to the left of the task name to view details about the task.
+1. Select **Information** (**i**) to view when the task was last used.
+1. From the **Tasks** dropdown, select **All Tasks**, **High Risk Tasks**, or **Delete Tasks**.
+1. The pane on the right displays a list of **Users**, **Policies** (for **AWS**) or **Roles** (for **GCP** or **Azure**), and **Tags**.
+
+## Apply filters to your query
+
+There are many filter options within the **Groups** screen, including filters by **Authorization System Type**, **Authorization System**, **Group Type**, **Group Activity Status**, and **Tasks Type**.
+Filters can be applied in one or more categories depending on the type of information you're looking for.
+
+### Apply filters by authorization system type
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset Filter** to discard your changes.
+
+### Apply filters by authorization system
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select accounts from a **List** of accounts and **Folders**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset Filter** to discard your changes.
++
+### Apply filters by group type
+
+You can filter the results by the type of group.
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Group Type** dropdown, select the type of group: **All**, **ED**, or **Local**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset Filter** to discard your changes.
+
+### Apply filters by group activity status
+
+You can filter the results by the group's activity status.
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Group Activity Status** dropdown, select the status: **All**, **Active**, or **Inactive**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset Filter** to discard your changes.
+
+### Apply filters by tasks type
+
+You can filter the results by the type of task.
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Tasks Type** dropdown, select the type of task: **All**, **High Risk Tasks**, or **Delete Tasks**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset Filter** to discard your changes.
++
+## Export the results of your query
+
+- To view a report of the results of your query as a comma-separated values (CSV) file, select **Export**, and then select **CSV**.
+- To view a list of members of the groups in your query, select **Export**, and then select **Memberships**.
+++
+## Next steps
+
+- To view active tasks, see [View analytic information about active tasks](usage-analytics-active-tasks.md).
+- To view assigned permissions and usage by users, see [View analytic information about users](usage-analytics-users.md).
+- To view active resources, see [View analytic information about active resources](usage-analytics-active-resources.md).
+- To view the permission usage of access keys for a given user, see [View analytic information about access keys](usage-analytics-access-keys.md).
+- To view assigned permissions and usage of the serverless functions, see [View analytic information about serverless functions](usage-analytics-serverless-functions.md).
active-directory Usage Analytics Home https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-home.md
+
+ Title: View analytic information with the Analytics dashboard in Permissions Management
+description: How to use the Analytics dashboard in Permissions Management to view details about users, groups, active resources, active tasks, access keys, and serverless functions.
+++++++ Last updated : 02/23/2022+++
+# View analytic information with the Analytics dashboard
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+This article provides a brief overview of the Analytics dashboard in Permissions Management, and the type of analytic information it provides for Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
+
+## Display the Analytics dashboard
+
+- From the Permissions Management home page, select the **Analytics** tab.
+
+ The **Analytics** dashboard displays detailed information about:
+
+ - **Users**: Tracks assigned permissions and usage by users. For more information, see [View analytic information about users](usage-analytics-users.md).
+
+ - **Groups**: Tracks assigned permissions and usage of the group and the group members. For more information, see [View analytic information about groups](usage-analytics-groups.md).
+
+ - **Active Resources**: Tracks resources that have been used in the last 90 days. For more information, see [View analytic information about active resources](usage-analytics-active-resources.md).
+
+ - **Active Tasks**: Tracks tasks that have been performed in the last 90 days. For more information, see [View analytic information about active tasks](usage-analytics-active-tasks.md).
+
+ - **Access Keys**: Tracks the permission usage of access keys for a given user. For more information, see [View analytic information about access keys](usage-analytics-access-keys.md).
+
+ - **Serverless Functions**: Tracks assigned permissions and usage of the serverless functions for AWS only. For more information, see [View analytic information about serverless functions](usage-analytics-serverless-functions.md).
+
+ System administrators can use this information to make decisions about granting permissions and reducing risk on unused permissions.
+++
+## Next steps
+
+- To view active tasks, see [View analytic information about active tasks](usage-analytics-active-tasks.md).
+- To view assigned permissions and usage by users, see [View analytic information about users](usage-analytics-users.md).
+- To view assigned permissions and usage of the group and the group members, see [View analytic information about groups](usage-analytics-groups.md).
+- To view active resources, see [View analytic information about active resources](usage-analytics-active-resources.md).
+- To view the permission usage of access keys for a given user, see [View analytic information about access keys](usage-analytics-access-keys.md).
+- To view assigned permissions and usage of the serverless functions, see [View analytic information about serverless functions](usage-analytics-serverless-functions.md).
active-directory Usage Analytics Serverless Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-serverless-functions.md
+
+ Title: View analytic information about serverless functions in Permissions Management
+description: How to view analytic information about serverless functions in Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View analytic information about serverless functions
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Analytics** dashboard in Permissions Management collects detailed information about all identity types, then analyzes, reports on, and visualizes that data. System administrators can use the information to make informed decisions about granting permissions and reducing the risk of unused permissions for:
+
+- **Users**: Tracks assigned permissions and usage of various identities.
+- **Groups**: Tracks assigned permissions and usage of the group and the group members.
+- **Active Resources**: Tracks active resources (used in the last 90 days).
+- **Active Tasks**: Tracks active tasks (performed in the last 90 days).
+- **Access Keys**: Tracks the permission usage of access keys for a given user.
+- **Serverless Functions**: Tracks assigned permissions and usage of the serverless functions.
+
+This article describes how to view usage analytics about serverless functions.
+
+## Create a query to view serverless functions
+
+When you select **Serverless Functions**, the **Analytics** dashboard provides a high-level overview of tasks used by various identities.
+
+1. On the main **Analytics** dashboard, select **Serverless Functions** from the dropdown list at the top of the screen.
+
+ The following components make up the **Serverless Functions** dashboard:
+
+    - **Authorization System Type**: Select the authorization system type you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
+ - **Authorization System**: Select from a **List** of accounts and **Folders**.
+ - **Search**: Enter criteria to find specific tasks.
+1. Select **Apply** to display the criteria you've selected.
+
+ Select **Reset Filter** to discard your changes.
++
+## View the results of your query
+
+The **Serverless Functions** table displays the results of your query.
+
+- **Function Name**: Provides the name of the serverless function.
+ - To view details about a serverless function, select the down arrow to the left of the function name.
+- A **Function Type** icon displays to the left of the function name to describe the type of serverless function, for example **Lambda function**.
+- The **Permission Creep Index (PCI)**: Provides the following information:
+ - **Index**: A numeric value assigned to the PCI.
+ - **Since**: How many days the PCI value has been at the displayed level.
+- **Tasks**: Displays the number of **Granted** and **Executed** tasks.
+- **Resources**: The number of resources used.
+- **Last Activity On**: The date the function was last accessed.
+- Select the ellipses **(...)**, and then select **Tags** to add a tag.
+
+## Add a tag to a serverless function
+
+1. Select the ellipses **(...)** and select **Tags**.
+1. From the **Select a Tag** dropdown, select a tag.
+1. To create a custom tag, select **New Custom Tag**, add a tag name, and then select **Create**.
+1. In the **Value (Optional)** box, enter a value.
+1. Select the ellipses **(...)** to select **Advanced Save** options, and then select **Save**.
+1. To add the tag to the serverless function, select **Add Tag**.
+
+## View detailed information about a serverless function
+
+1. Select the down arrow to the left of the function name to display the following:
+
+ - A list of **Tasks** organized by **Used** and **Unused**.
+ - **Versions**, if a version is available.
+
+1. Select the arrow to the left of the task name to view details about the task.
+1. Select **Information** (**i**) to view when the task was last used.
+1. From the **Tasks** dropdown, select **All Tasks**, **High Risk Tasks**, or **Delete Tasks**.
++
+## Apply filters to your query
+
+You can filter the **Serverless Functions** results by **Authorization System Type** and **Authorization System**.
+
+### Apply filters by authorization system type
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset Filter** to discard your changes.
++
+### Apply filters by authorization system
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select accounts from a **List** of accounts and **Folders**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset Filter** to discard your changes.
+++
+## Next steps
+
+- To view active tasks, see [View analytic information about active tasks](usage-analytics-active-tasks.md).
+- To view assigned permissions and usage by users, see [View analytic information about users](usage-analytics-users.md).
+- To view assigned permissions and usage of the group and the group members, see [View analytic information about groups](usage-analytics-groups.md).
+- To view active resources, see [View analytic information about active resources](usage-analytics-active-resources.md).
+- To view the permission usage of access keys for a given user, see [View analytic information about access keys](usage-analytics-access-keys.md).
active-directory Usage Analytics Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-users.md
+
+ Title: View analytic information about users in Permissions Management
+description: How to view analytic information about users in Permissions Management.
+++++++ Last updated : 02/23/2022+++
+# View analytic information about users
+
+> [!IMPORTANT]
+> Microsoft Entra Permissions Management is currently in PREVIEW.
+> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+The **Analytics** dashboard in Permissions Management collects detailed information about all identity types, then analyzes, reports on, and visualizes that data. System administrators can use the information to make informed decisions about granting permissions and reducing the risk of unused permissions for:
+
+- **Users**: Tracks assigned permissions and usage of various identities.
+- **Groups**: Tracks assigned permissions and usage of the group and the group members.
+- **Active Resources**: Tracks active resources (used in the last 90 days).
+- **Active Tasks**: Tracks active tasks (performed in the last 90 days).
+- **Access Keys**: Tracks the permission usage of access keys for a given user.
+- **Serverless Functions**: Tracks assigned permissions and usage of the serverless functions.
+
+This article describes how to view usage analytics about users.
+
+## Create a query to view users
+
+When you select **Users**, the **Analytics** dashboard provides a high-level overview of tasks used by various identities.
+
+1. On the main **Analytics** dashboard, select **Users** from the drop-down list at the top of the screen.
+
+ The following components make up the **Users** dashboard:
+
+    - **Authorization System Type**: Select the authorization system type you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
+    - **Authorization System**: Select from a **List** of accounts and **Folders**.
+    - **Identity Type**: Select **All** identity types, **User**, **Role/App/Service a/c**, or **Resource**.
+ - **Search**: Enter criteria to find specific tasks.
+1. Select **Apply** to display the criteria you've selected.
+
+    Select **Reset Filter** to discard your changes.
++
+## View the results of your query
+
+The **Identities** table displays the results of your query.
+
+- **Name**: Provides the name of the identity.
+  - To view details about the identity, select the down arrow.
+- The **Domain/Account** name.
+- The **Permission Creep Index (PCI)**: Provides the following information:
+ - **Index**: A numeric value assigned to the PCI.
+ - **Since**: How many days the PCI value has been at the displayed level.
+- **Tasks**: Displays the number of **Granted** and **Executed** tasks.
+- **Resources**: The number of resources used.
+- **User Groups**: The number of groups that the identity belongs to.
+- **Last Activity On**: The date the identity was last active.
+- The ellipses **(...)**: Select **Tags** to add a tag.
+
+ If you're using AWS, another selection is available from the ellipses menu: **Auto Remediate**. You can use this option to remediate your results automatically.
+
+## Add a tag to a user
+
+1. Select the ellipses **(...)** and select **Tags**.
+1. From the **Select a Tag** dropdown, select a tag.
+1. To create a custom tag, select **New Custom Tag**, add a tag name, and then select **Create**.
+1. In the **Value (Optional)** box, enter a value.
+1. Select the ellipses **(...)** to select **Advanced Save** options, and then select **Save**.
+1. To add the tag to the user, select **Add Tag**.
+
+## Set the auto-remediate option (AWS only)
+
+- Select the ellipses **(...)** and select **Auto Remediate**.
+
+ A message displays to confirm that your remediation settings are automatically updated.
+
+## Apply filters to your query
+
+There are many filter options within the **Users** screen, including filters by **Authorization System Type**, **Authorization System**, **Identity Type**, **Identity Subtype**, **Identity State**, and **Task Type**.
+Filters can be applied in one or more categories depending on the type of information you're looking for.
+
+### Apply filters by authorization system type
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset Filter** to discard your changes.
++
+### Apply filters by authorization system
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select accounts from a **List** of accounts and **Folders**.
+1. Select **Apply** to run your query and display the information you selected.
+
+    Select **Reset Filter** to discard your changes.
+
+### Apply filters by identity type
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Identity Type** dropdown, select the type of identity: **All**, **User**, **Role/App/Service a/c**, or **Resource**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset Filter** to discard your changes.
+
+### Apply filters by identity subtype
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Identity Subtype** dropdown, select the identity subtype: **All**, **ED**, **Local**, or **Cross Account**.
+1. Select **Apply** to run your query and display the information you selected.
+
+    Select **Reset Filter** to discard your changes.
+
+### Apply filters by identity state
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Identity State** dropdown, select the identity state: **All**, **Active**, or **Inactive**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset Filter** to discard your changes.
++
+### Apply filters by identity filters
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the identity filters, select **Risky** or **Incl. in PCI Calculation Only**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset Filter** to discard your changes.
++
+### Apply filters by task type
+
+You can filter the results by the type of task.
+
+1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
+1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
+1. From the **Task Type** dropdown, select the type of task: **All** or **High Risk Tasks**.
+1. Select **Apply** to run your query and display the information you selected.
+
+ Select **Reset Filter** to discard your changes.
++
+## Export the results of your query
+
+- To export a report of the results of your query as a comma-separated values (CSV) file, select **Export**, and then select **CSV**.
+- To export the data in a detailed comma-separated values (CSV) file format, select **Export** and then select **CSV (Detailed)**.
+- To export a report of user permissions, select **Export** and then select **Permissions**.
++
+## Next steps
+
+- To view active tasks, see [View analytic information about active tasks](usage-analytics-active-tasks.md).
+- To view assigned permissions and usage of the group and the group members, see [View analytic information about groups](usage-analytics-groups.md).
+- To view active resources, see [View analytic information about active resources](usage-analytics-active-resources.md).
+- To view the permission usage of access keys for a given user, see [View analytic information about access keys](usage-analytics-access-keys.md).
+- To view assigned permissions and usage of the serverless functions, see [View analytic information about serverless functions](usage-analytics-serverless-functions.md).
active-directory Msal Js Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-js-sso.md
# Single sign-on with MSAL.js
-Single sign-on (SSO) provides a more seamless experience by reducing the number of times your users are asked for their credentials. Users enter their credentials once, and the established session can be reused by other applications on the device without further prompting.
+Single sign-on (SSO) provides a more seamless experience by reducing the number of times your users are asked for their credentials. Users enter their credentials once, and the established session can be reused by other applications on the device without further prompting.
-Azure Active Directory (Azure AD) enables SSO by setting a session cookie when a user first authenticates. MSAL.js allows use of the session cookie for SSO between the browser tabs opened for one or several applications.
+Azure Active Directory (Azure AD) enables SSO by setting a session cookie when a user authenticates for the first time. MSAL.js can use this session cookie to provide SSO between browser tabs opened for one or several applications.
-## SSO between browser tabs
+## SSO between browser tabs for the same app
-When a user has your application open in several tabs and signs in on one of them, they're signed into the same app open on the other tabs without being prompted. MSAL.js caches the ID token for the user in the browser `localStorage` and will sign the user in to the application on the other open tabs.
-
-By default, MSAL.js uses `sessionStorage`, which doesn't allow the session to be shared between tabs. To get SSO between tabs, make sure to set the `cacheLocation` in MSAL.js to `localStorage` as shown below.
+When a user has your application open in several tabs and signs in on one of them, they can be signed into the same app open on the other tabs without being prompted. To do so, set the *cacheLocation* in the MSAL.js configuration object to `localStorage`, as shown below.
```javascript
const config = {
  auth: {
-    clientId: "abcd-ef12-gh34-ikkl-ashdjhlhsdg",
+    clientId: "1111-2222-3333-4444-55555555",
  },
  cache: {
    cacheLocation: "localStorage",
  },
};

const msalInstance = new msal.PublicClientApplication(config);
```
-## SSO between apps
-
-When a user authenticates, a session cookie is set on the Azure AD domain in the browser. MSAL.js relies on this session cookie to provide SSO for the user between different applications. MSAL.js also caches the ID tokens and access tokens of the user in the browser storage per application domain. As a result, the SSO behavior varies for different cases:
-
-### Applications on the same domain
-
-When applications are hosted on the same domain, the user can sign into an app once and then get authenticated to the other apps without a prompt. MSAL.js uses the tokens cached for the user on the domain to provide SSO.
-
-### Applications on different domain
-
-When applications are hosted on different domains, the tokens cached on domain A cannot be accessed by MSAL.js in domain B.
-
-When a user signed in on domain A navigates to an application on domain B, they're typically redirected or prompted to sign in. Because Azure AD still has the user's session cookie, it signs in the user without prompting for credentials.
+## SSO between different apps
-If the user has multiple user accounts in a session with Azure AD, the user is prompted to pick an account to sign in with.
+When a user authenticates, a session cookie is set on the Azure AD domain in the browser. MSAL.js relies on this session cookie to provide SSO for the user between different applications. MSAL.js also caches the ID tokens and access tokens of the user in the browser storage per application domain.
-### Automatic account selection
+MSAL.js offers the `ssoSilent` method to sign in the user and obtain tokens without interaction. However, if the user has multiple accounts in a session with Azure AD, the user is prompted to pick an account to sign in with. As such, there are two ways to achieve SSO using the `ssoSilent` method.
-When a user is signed in concurrently to multiple Azure AD accounts on the same device, you might find you have the need to bypass the account selection prompt.
+### With user hint
-**Using a session ID**
+To improve performance and ensure that the authorization server looks for the correct account session, you can pass one of the following options in the request object of the `ssoSilent` method to obtain the token silently:
-Use the session ID (SID) in silent authentication requests you make with `acquireTokenSilent` in MSAL.js.
+- Session ID `sid` (which can be retrieved from `idTokenClaims` of an `account` object)
+- `login_hint` (which can be retrieved from the `account` object username property or the `upn` claim in the ID token)
+- `account` (which can be retrieved by using one of the [account methods](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/login-user.md#account-apis)); the sketch below shows where each of these values lives on a cached account object
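+
+As an illustration (this sketch isn't part of the original article, and assumes an app with at least one cached account), the hint values can be read from a cached MSAL.js account object like this:
+
+```javascript
+// Read SSO hint values from a cached MSAL.js account object.
+const accounts = msalInstance.getAllAccounts();
+
+if (accounts.length > 0) {
+  const account = accounts[0];
+
+  // 'sid' is present only if you've configured it as an optional claim.
+  const sid = account.idTokenClaims && account.idTokenClaims.sid;
+
+  // The username property is a UPN-style value usable as a login hint.
+  const loginHint = account.username;
+
+  // Any one of sid, loginHint, or the account object itself can seed ssoSilent.
+}
+```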
-To use a SID, add `sid` as an [optional claim](active-directory-optional-claims.md) to your app's ID tokens. The `sid` claim allows an application to identify a user's Azure AD session independent of their account name or username. To learn how to add optional claims like `sid`, see [Provide optional claims to your app](active-directory-optional-claims.md).
+#### Using a session ID
-The SID is bound to the session cookie and won't cross browser contexts. You can use the SID only with `acquireTokenSilent`.
+To use a session ID, add `sid` as an [optional claim](active-directory-optional-claims.md) to your app's ID tokens. The `sid` claim allows an application to identify a user's Azure AD session independent of their account name or username. To learn how to add optional claims like `sid`, see [Provide optional claims to your app](active-directory-optional-claims.md). Use the session ID (SID) in silent authentication requests you make with `ssoSilent` in MSAL.js.
```javascript
-var request = {
+const request = {
scopes: ["user.read"], sid: sid, };
- msalInstance.acquireTokenSilent(request)
- .then(function (response) {
- const token = response.accessToken;
- })
- .catch(function (error) {
- //handle error
- });
+try {
+  const loginResponse = await msalInstance.ssoSilent(request);
+} catch (err) {
+ if (err instanceof InteractionRequiredAuthError) {
+ const loginResponse = await msalInstance.loginPopup(request).catch(error => {
+ // handle error
+ });
+ } else {
+ // handle error
+ }
+}
```
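+
+For reference, here's a hypothetical sketch of the corresponding app-manifest entry (not taken from the original article; the manifest you edit in the Azure portal may carry additional fields):
+
+```json
+"optionalClaims": {
+    "idToken": [
+        {
+            "name": "sid",
+            "source": null,
+            "essential": false,
+            "additionalProperties": []
+        }
+    ]
+}
+```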
-**Using a login hint**
+#### Using a login hint
To bypass the account selection prompt typically shown during interactive authentication requests (or for silent requests when you haven't configured the `sid` optional claim), provide a `loginHint`. In multi-tenant applications, also include a `domain_hint`.

```javascript
-var request = {
+const request = {
scopes: ["user.read"], loginHint: preferred_username, extraQueryParameters: { domain_hint: "organizations" }, };
- msalInstance.loginRedirect(request);
+try {
+ const loginResponse = await msalInstance.ssoSilent(request);
+} catch (err) {
+ if (err instanceof InteractionRequiredAuthError) {
+ const loginResponse = await msalInstance.loginPopup(request).catch(error => {
+ // handle error
+ });
+ } else {
+ // handle error
+ }
+}
```

Get the values for `loginHint` and `domain_hint` from the user's **ID token**:
For more information about login hint and domain hint, see [Microsoft identity platform and OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md).
-## SSO without MSAL.js login
+#### Using an account object
-By design, MSAL.js requires that a login method is called to establish a user context before getting tokens for APIs. Since login methods are interactive, the user sees a prompt.
+If you know the user account information, you can also retrieve the user account by using the `getAccountByUsername()` or `getAccountByHomeId()` methods:
-There are certain cases in which applications have access to the authenticated user's context or ID token through authentication initiated in another application and want to use SSO to acquire tokens without first signing in through MSAL.js.
+```javascript
+const username = "test@contoso.com";
+const myAccount = msalInstance.getAccountByUsername(username);
+
+const request = {
+ scopes: ["User.Read"],
+ account: myAccount
+};
-An example: A user is signed in to Microsoft account in a browser that hosts another JavaScript application running as an add-on or plugin, which requires a Microsoft account sign-in.
+try {
+ const loginResponse = await msalInstance.ssoSilent(request);
+} catch (err) {
+ if (err instanceof InteractionRequiredAuthError) {
+ const loginResponse = await msalInstance.loginPopup(request).catch(error => {
+ // handle error
+ });
+ } else {
+ // handle error
+ }
+}
+```
-The SSO experience in this scenario can be achieved as follows:
+### Without user hint
-Pass the `sid` if available (or `login_hint` and optionally `domain_hint`) as request parameters to the MSAL.js `acquireTokenSilent` call as follows:
+You can attempt to use the `ssoSilent` method without passing any `account`, `sid` or `login_hint` as shown in the code below:
```javascript
-var request = {
- scopes: ["user.read"],
- loginHint: preferred_username,
- extraQueryParameters: { domain_hint: "organizations" },
+const request = {
+ scopes: ["User.Read"]
};
-msalInstance.acquireTokenSilent(request)
- .then(function (response) {
- const token = response.accessToken;
- })
- .catch(function (error) {
- //handle error
- });
+try {
+ const loginResponse = await msalInstance.ssoSilent(request);
+} catch (err) {
+ if (err instanceof InteractionRequiredAuthError) {
+ const loginResponse = await msalInstance.loginPopup(request).catch(error => {
+ // handle error
+ });
+ } else {
+ // handle error
+ }
+}
```
+However, silent sign-in is likely to fail if multiple users, or multiple accounts for the same user, share a single browser session. In the case of multiple accounts, you may see the following error:
+
+```txt
+InteractionRequiredAuthError: interaction_required: AADSTS16000: Either multiple user identities are available for the current request or selected account is not supported for the scenario.
+```
+
+The error indicates that the server couldn't determine which account to sign into, and will require either one of the parameters above (`account`, `login_hint`, `sid`) or an interactive sign-in to choose the account.
+
+## Considerations when using `ssoSilent`
+
+### Redirect URI (reply URL)
+
+For better performance and to help avoid issues, set the `redirectUri` to a blank page or other page that doesn't use MSAL.
+
+- If your application uses only popup and silent methods, set the `redirectUri` on the `PublicClientApplication` configuration object.
+- If your application also uses redirect methods, set the `redirectUri` on a per-request basis, as in the sketch below.
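+
+A minimal sketch of the per-request approach (illustrative, not from the original article; the `blank.html` page name is an assumption -- use any page in your app that doesn't load MSAL):
+
+```javascript
+// Point silent requests at a page that doesn't initialize MSAL.
+const silentRequest = {
+  scopes: ["User.Read"],
+  loginHint: "user@contoso.com",                    // hypothetical account hint
+  redirectUri: "http://localhost:3000/blank.html",  // blank page, no MSAL
+};
+
+try {
+  const result = await msalInstance.ssoSilent(silentRequest);
+} catch (err) {
+  // fall back to an interactive method if silent SSO fails
+}
+```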
+
+### Third-party cookies
+
+`ssoSilent` attempts to open a hidden iframe and reuse an existing session with Azure AD. This won't work in browsers that block third-party cookies, such as Safari, and will lead to an interaction error:
+
+```txt
+InteractionRequiredAuthError: login_required: AADSTS50058: A silent sign-in request was sent but no user is signed in. The cookies used to represent the user's session were not sent in the request to Azure AD
+```
+
+To resolve the error, the application must fall back to an interactive authentication request by using `loginPopup()` or `loginRedirect()`.
+
+Additionally, the request object is required when using the **silent** methods. If you already have the user's sign-in information, you can pass either the `loginHint` or `sid` optional parameters to sign in a specific account.
## SSO in ADAL.js to MSAL.js update

MSAL.js brings feature parity with ADAL.js for Azure AD authentication scenarios. To make the migration from ADAL.js to MSAL.js easy and to avoid prompting your users to sign in again, the library reads the ID token representing the user's session in the ADAL.js cache, and seamlessly signs in the user in MSAL.js.
To take advantage of the SSO behavior when updating from ADAL.js, you'll need to
// In ADAL.js
window.config = {
- clientId: "g075edef-0efa-453b-997b-de1337c29185",
+ clientId: "1111-2222-3333-4444-55555555",
cacheLocation: "localStorage", };
var authContext = new AuthenticationContext(config);
// In latest MSAL.js version
const config = {
  auth: {
- clientId: "abcd-ef12-gh34-ikkl-ashdjhlhsdg",
+ clientId: "1111-2222-3333-4444-55555555",
  },
  cache: {
    cacheLocation: "localStorage",
Once the `cacheLocation` is configured, MSAL.js can read the cached state of the
For more information about SSO, see:

-- [Single Sign-On SAML protocol](single-sign-on-saml-protocol.md)
+- [Single Sign-on SAML protocol](single-sign-on-saml-protocol.md)
+- [Optional token claims](active-directory-optional-claims.md)
- [Configurable token lifetimes](active-directory-configurable-token-lifetimes.md)
active-directory Scenario Spa Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-sign-in.md
Before you can get tokens to access APIs in your application, you need an authen
You can also optionally pass the scopes of the APIs for which you need the user to consent at the time of sign-in.

> [!NOTE]
-> If your application already has access to an authenticated user context or ID token, you can skip the login step and directly acquire tokens. For details, see [SSO without MSAL.js login](msal-js-sso.md#sso-without-msaljs-login).
+> If your application already has access to an authenticated user context or ID token, you can skip the login step and directly acquire tokens. For details, see [SSO with user hint](msal-js-sso.md#with-user-hint).
## Choosing between a pop-up or redirect experience
active-directory Scenario Web App Sign User App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-app-configuration.md
In the Azure portal, the reply URIs that you register on the **Authentication**
# [Node.js](#tab/nodejs)
-Here, the configuration parameters reside in `index.js`
+Here, the configuration parameters reside in *.env* as environment variables:
-```javascript
-const REDIRECT_URI = "http://localhost:3000/redirect";
+These parameters are used to create a configuration object in the *authConfig.js* file, which will eventually be used to initialize MSAL Node:
-const config = {
- auth: {
- clientId: "Enter_the_Application_Id_Here",
- authority: "https://login.microsoftonline.com/Enter_the_Tenant_Info_Here/",
- clientSecret: "Enter_the_Client_Secret_Here"
- },
- system: {
- loggerOptions: {
- loggerCallback(loglevel, message, containsPii) {
- console.log(message);
- },
- piiLoggingEnabled: false,
- logLevel: msal.LogLevel.Verbose,
- }
- }
-};
-```
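The *.env* and *authConfig.js* includes aren't reproduced in this digest; a minimal sketch of such a configuration object, with illustrative environment variable names, might look like:

```javascript
// authConfig.js -- a minimal sketch; variable names are illustrative.
require('dotenv').config();
const msal = require('@azure/msal-node');

const msalConfig = {
    auth: {
        clientId: process.env.CLIENT_ID,
        authority: process.env.CLOUD_INSTANCE + process.env.TENANT_ID,
        clientSecret: process.env.CLIENT_SECRET
    },
    system: {
        loggerOptions: {
            loggerCallback(loglevel, message, containsPii) {
                console.log(message);
            },
            piiLoggingEnabled: false,
            logLevel: msal.LogLevel.Verbose
        }
    }
};

module.exports = { msalConfig };
```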
-In the Azure portal, the reply URIs that you register on the Authentication page for your application need to match the redirectUri instances that the application defines (`http://localhost:3000/redirect`).
+In the Azure portal, the reply URIs that you register on the Authentication page for your application need to match the redirectUri instances that the application defines (`http://localhost:3000/auth/redirect`).
> [!NOTE] > This quickstart proposes to store the client secret in the configuration file for simplicity. In your production app, you'd want to use other ways to store your secret, such as a key vault or an environment variable.
For details about the authorization code flow that this method triggers, see the
# [Node.js](#tab/nodejs)
-```javascript
-const msal = require('@azure/msal-node');
+The Node sample uses the Express framework. MSAL is initialized in the *auth* route handler:
-// Create msal application object
-const cca = new msal.ConfidentialClientApplication(config);
-```
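The handler itself isn't shown here; a minimal sketch of the initialization, assuming the *authConfig.js* sketch above, might be:

```javascript
// A minimal sketch: create the MSAL Node confidential client application.
const msal = require('@azure/msal-node');
const { msalConfig } = require('../authConfig');

const msalInstance = new msal.ConfidentialClientApplication(msalConfig);
```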
# [Python](#tab/python)
active-directory Scenario Web App Sign User App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-app-registration.md
By default, the sample uses:
1. When the **Register an application page** appears, enter your application's registration information: 1. Enter a **Name** for your application, for example `node-webapp`. Users of your app might see this name, and you can change it later.
- 1. Change **Supported account types** to **Accounts in any organizational directory and personal Microsoft accounts (e.g. Skype, Xbox, Outlook.com)**.
- 1. In the **Redirect URI (optional)** section, select **Web** in the combo box and enter the following redirect URI: `http://localhost:3000/redirect`.
+ 1. Change **Supported account types** to **Accounts in this organizational directory only**.
+ 1. In the **Redirect URI (optional)** section, select **Web** in the combo box and enter the following redirect URI: `http://localhost:3000/auth/redirect`.
1. Select **Register** to create the application. 1. On the app's **Overview** page, find the **Application (client) ID** value and record it for later. You'll need it to configure the configuration file for this project. 1. Under **Manage**, select **Certificates & secrets**.
active-directory Scenario Web App Sign User Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-sign-in.md
else
# [Java](#tab/java)
-In our Java quickstart, the sign-in button is located in the [main/resources/templates/https://docsupdatetracker.net/index.html](https://github.com/Azure-Samples/ms-identity-java-webapp/blob/master/msal-java-webapp-sample/src/main/resources/templates/https://docsupdatetracker.net/index.html) file.
+In the Java quickstart, the sign-in button is located in the [main/resources/templates/https://docsupdatetracker.net/index.html](https://github.com/Azure-Samples/ms-identity-java-webapp/blob/master/msal-java-webapp-sample/src/main/resources/templates/https://docsupdatetracker.net/index.html) file.
```html <!DOCTYPE html>
In our Java quickstart, the sign-in button is located in the [main/resources/tem
# [Node.js](#tab/nodejs)
-In the Node.js quickstart, there's no sign-in button. The code-behind automatically prompts the user for sign-in when it's reaching the root of the web app.
+In the Node.js quickstart, the code for the sign-in button is located in the *index.hbs* template file.
-```javascript
-app.get('/', (req, res) => {
- // authentication logic
-});
-```
+
+This template is served via the main (index) route of the app:
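The template include isn't shown in this digest; a minimal sketch of the sign-in link, assuming an illustrative `isAuthenticated` view variable, might be:

```html
<!-- index.hbs: a minimal sketch; markup and variable names are illustrative. -->
<h1>{{title}}</h1>
{{#if isAuthenticated}}
    <a href="/signout">Sign out</a>
{{else}}
    <a href="/auth/signin">Sign in</a>
{{/if}}
```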
+ # [Python](#tab/python)
public class AuthPageController {
# [Node.js](#tab/nodejs)
-Unlike other platforms, here the MSAL Node takes care of letting the user sign in from the login page.
-
-```javascript
-
-// 1st leg of auth code flow: acquire a code
-app.get('/', (req, res) => {
- const authCodeUrlParameters = {
- scopes: ["user.read"],
- redirectUri: REDIRECT_URI,
- };
-
- // get url to sign user in and consent to scopes needed for application
- pca.getAuthCodeUrl(authCodeUrlParameters).then((response) => {
- res.redirect(response);
- }).catch((error) => console.log(JSON.stringify(error)));
-});
-
-// 2nd leg of auth code flow: exchange code for token
-app.get('/redirect', (req, res) => {
- const tokenRequest = {
- code: req.query.code,
- scopes: ["user.read"],
- redirectUri: REDIRECT_URI,
- };
-
- pca.acquireTokenByCode(tokenRequest).then((response) => {
- console.log("\nResponse: \n:", response);
- res.sendStatus(200);
- }).catch((error) => {
- console.log(error);
- res.status(500).send(error);
- });
-});
-```
+When the user selects the **Sign in** link, which triggers the `/auth/signin` route, the sign-in controller takes over to authenticate the user with the Microsoft identity platform.
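The controller isn't reproduced here; a minimal sketch of the first leg of the authorization code flow, assuming an Express router and the `msalInstance` created earlier, might be:

```javascript
// A minimal sketch of the /auth/signin handler (first leg of the
// authorization code flow). Scopes and redirect URI are illustrative.
router.get('/signin', (req, res) => {
    const authCodeUrlParameters = {
        scopes: ["user.read"],
        redirectUri: "http://localhost:3000/auth/redirect"
    };

    // Build the authorization URL and send the user to Azure AD.
    msalInstance.getAuthCodeUrl(authCodeUrlParameters)
        .then((authUrl) => res.redirect(authUrl))
        .catch((error) => console.log(error));
});
```

The redirect handler would then exchange the returned code for tokens with `acquireTokenByCode`, following the same shape as the earlier quickstart code.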
+ # [Python](#tab/python)
In our Java quickstart, the sign-out button is located in the main/resources/tem
# [Node.js](#tab/nodejs)
-This sample application does not implement sign-out.
# [Python](#tab/python)
In Java, sign-out is handled by calling the Microsoft identity platform `logout`
# [Node.js](#tab/nodejs)
-This sample application does not implement sign-out.
+When the user selects the **Sign out** button, the app triggers the `/signout` route, which destroys the session and redirects the browser to the Microsoft identity platform sign-out endpoint.
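A minimal sketch of such a route, assuming **express-session** and an illustrative `TENANT_ID` environment variable, might be:

```javascript
// A minimal sketch of the /signout handler. The logout URL is the
// Microsoft identity platform v2.0 sign-out endpoint.
router.get('/signout', (req, res) => {
    const logoutUri = `https://login.microsoftonline.com/${process.env.TENANT_ID}` +
        '/oauth2/v2.0/logout?post_logout_redirect_uri=http://localhost:3000';

    // Destroy the local session, then sign the user out of Azure AD.
    req.session.destroy(() => {
        res.redirect(logoutUri);
    });
});
```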
+ # [Python](#tab/python)
In the Java quickstart, the post-logout redirect URI just displays the index.htm
# [Node.js](#tab/nodejs)
-This sample application does not implement sign-out.
+In the Node quickstart, the post-logout redirect URI is used to redirect the browser back to the sample home page after the user completes the logout process with the Microsoft identity platform.
# [Python](#tab/python)
If you want to learn more about sign-out, read the protocol documentation that's
## Next steps Move on to the next article in this scenario,
-[Move to production](scenario-web-app-sign-user-production.md).
+[Move to production](scenario-web-app-sign-user-production.md).
active-directory Tutorial V2 Nodejs Webapp Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-webapp-msal.md
Last updated 02/17/2021
-# Tutorial: Sign in users in a Node.js & Express web app
+# Tutorial: Sign in users and acquire a token for Microsoft Graph in a Node.js & Express web app
-In this tutorial, you build a web app that signs-in users. The web app you build uses the [Microsoft Authentication Library (MSAL) for Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node).
+In this tutorial, you build a web app that signs in users and acquires access tokens for calling Microsoft Graph. The web app you build uses the [Microsoft Authentication Library (MSAL) for Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node).
Follow the steps in this tutorial to:
First, complete the steps in [Register an application with the Microsoft identit
Use the following settings for your app registration: - Name: `ExpressWebApp` (suggested)-- Supported account types: **Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)**
+- Supported account types: **Accounts in this organizational directory only**
- Platform type: **Web**-- Redirect URI: `http://localhost:3000/redirect`
+- Redirect URI: `http://localhost:3000/auth/redirect`
- Client secret: `*********` (record this value for use in a later step - it's shown only once) ## Create the project
-Create a folder to host your application, for example *ExpressWebApp*.
+Use the [Express application generator tool](https://expressjs.com/en/starter/generator.html) to create an application skeleton.
-1. First, change to your project directory in your terminal and then run the following `npm` commands:
+1. First, install the [express-generator](https://www.npmjs.com/package/express-generator) package:
```console
- npm init -y
- npm install --save express
+ npm install -g express-generator
```
-2. Next, create file named *index.js* and add the following code:
-
-```JavaScript
- const express = require("express");
- const msal = require('@azure/msal-node');
-
- const SERVER_PORT = process.env.PORT || 3000;
-
- // Create Express App and Routes
- const app = express();
-
- app.listen(SERVER_PORT, () => console.log(`Msal Node Auth Code Sample app listening on port ${SERVER_PORT}!`))
+2. Then, create an application skeleton as follows:
+
+```console
+ express --view=hbs /ExpressWebApp && cd /ExpressWebApp
+ npm install
```
-You now have a simple web server running on port 3000. The file and folder structure of your project should look similar to the following:
+You now have a simple Express web app. The file and folder structure of your project should look similar to the following:
```
ExpressWebApp/
-├── index.js
+├── bin/
+|    └── www
+├── public/
+|    ├── images/
+|    ├── javascripts/
+|    └── stylesheets/
+|         └── style.css
+├── routes/
+|    ├── index.js
+|    └── users.js
+├── views/
+|    ├── error.hbs
+|    ├── index.hbs
+|    └── layout.hbs
+├── app.js
└── package.json
```

## Install the auth library
-Locate the root of your project directory in a terminal and install the MSAL Node package via NPM.
+Locate the root of your project directory in a terminal and install the MSAL Node package via npm.
```console npm install --save @azure/msal-node ```
-## Add app registration details
+## Install other dependencies
+
+The web app sample in this tutorial uses the [express-session](https://www.npmjs.com/package/express-session) package for session management, the [dotenv](https://www.npmjs.com/package/dotenv) package for reading environment parameters during development, and [axios](https://www.npmjs.com/package/axios) for making network calls to the Microsoft Graph API. Install these via npm:
-In the *index.js* file you've created earlier, add the following code:
-
-```JavaScript
- // Before running the sample, you will need to replace the values in the config,
- // including the clientSecret
- const config = {
- auth: {
- clientId: "Enter_the_Application_Id",
- authority: "Enter_the_Cloud_Instance_Id_Here/Enter_the_Tenant_Id_here",
- clientSecret: "Enter_the_Client_secret"
- },
-     system: {
-         loggerOptions: {
-             loggerCallback(loglevel, message, containsPii) {
-                 console.log(message);
-             },
-          piiLoggingEnabled: false,
-          logLevel: msal.LogLevel.Verbose,
-         }
-     }
- };
+```console
+ npm install --save express-session dotenv axios
```
+## Add app registration details
+
+1. Create an *.env* file in the root of your project folder. Then add the following code:
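The file's contents aren't reproduced in this digest; a minimal sketch, mapping illustrative variable names to the placeholders described below, might look like:

```txt
CLOUD_INSTANCE=Enter_the_Cloud_Instance_Id_Here
TENANT_ID=Enter_the_Tenant_Info_here
CLIENT_ID=Enter_the_Application_Id_Here
CLIENT_SECRET=Enter_the_Client_secret
REDIRECT_URI=http://localhost:3000/auth/redirect
GRAPH_API_ENDPOINT=Enter_the_Graph_Endpoint_Here
EXPRESS_SESSION_SECRET=Enter_the_Express_Session_Secret_Here
```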
+
+Fill in these details with the values you obtain from the Azure app registration portal:
+
-- `Enter_the_Tenant_Id_here` should be one of the following:
+- `Enter_the_Cloud_Instance_Id_Here`: The Azure cloud instance in which your application is registered.
+  - For the main (or *global*) Azure cloud, enter `https://login.microsoftonline.com/` (include the trailing forward-slash).
+  - For **national** clouds (for example, China), you can find appropriate values in [National clouds](authentication-national-cloud.md).
+- `Enter_the_Tenant_Info_here` should be one of the following:
  - If your application supports *accounts in this organizational directory*, replace this value with the **Tenant ID** or **Tenant name**. For example, `contoso.microsoft.com`.
  - If your application supports *accounts in any organizational directory*, replace this value with `organizations`.
  - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with `common`.
  - To restrict support to *personal Microsoft accounts only*, replace this value with `consumers`.
- `Enter_the_Application_Id_Here`: The **Application (client) ID** of the application you registered.
-- `Enter_the_Cloud_Instance_Id_Here`: The Azure cloud instance in which your application is registered.
-  - For the main (or *global*) Azure cloud, enter `https://login.microsoftonline.com`.
-  - For **national** clouds (for example, China), you can find appropriate values in [National clouds](authentication-national-cloud.md).
- `Enter_the_Client_secret`: Replace this value with the client secret you created earlier. To generate a new key, use **Certificates & secrets** in the app registration settings in the Azure portal.

> [!WARNING]
> Any plaintext secret in source code poses an increased security risk. This article uses a plaintext client secret for simplicity only. Use [certificate credentials](active-directory-certificate-credentials.md) instead of client secrets in your confidential client applications, especially those apps you intend to deploy to production.
-## Add code for user login
-
-In the *index.js* file you've created earlier, add the following code:
-
-```JavaScript
- // Create msal application object
- const cca = new msal.ConfidentialClientApplication(config);
-
- app.get('/', (req, res) => {
- const authCodeUrlParameters = {
- scopes: ["user.read"],
- redirectUri: "http://localhost:3000/redirect",
- };
-
- // get url to sign user in and consent to scopes needed for application
- cca.getAuthCodeUrl(authCodeUrlParameters).then((response) => {
- res.redirect(response);
- }).catch((error) => console.log(JSON.stringify(error)));
- });
-
- app.get('/redirect', (req, res) => {
- const tokenRequest = {
- code: req.query.code,
- scopes: ["user.read"],
- redirectUri: "http://localhost:3000/redirect",
- };
-
- cca.acquireTokenByCode(tokenRequest).then((response) => {
- console.log("\nResponse: \n:", response);
- res.sendStatus(200);
- }).catch((error) => {
- console.log(error);
- res.status(500).send(error);
- });
- });
-```
+- `Enter_the_Graph_Endpoint_Here`: The Microsoft Graph API cloud instance that your app will call. For the main (global) Microsoft Graph API service, enter `https://graph.microsoft.com/` (include the trailing forward-slash).
+- `Enter_the_Express_Session_Secret_Here`: The secret used to sign the Express session cookie. Replace this string with a random string of characters, such as your client secret.
++
+2. Next, create a file named *authConfig.js* in the root of your project for reading in these parameters. Once created, add the following code there:
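The include isn't shown here; a minimal sketch that reads the illustrative *.env* variables from above might be:

```javascript
// authConfig.js -- a minimal sketch; variable names are illustrative.
require('dotenv').config();

const msalConfig = {
    auth: {
        clientId: process.env.CLIENT_ID,
        authority: process.env.CLOUD_INSTANCE + process.env.TENANT_ID,
        clientSecret: process.env.CLIENT_SECRET
    }
};

const REDIRECT_URI = process.env.REDIRECT_URI;
const GRAPH_ME_ENDPOINT = process.env.GRAPH_API_ENDPOINT + 'v1.0/me';

module.exports = { msalConfig, REDIRECT_URI, GRAPH_ME_ENDPOINT };
```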
++
+## Add code for user login and token acquisition
++
+2. Next, update the *index.js* route by replacing the existing code with the following:
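The replacement code isn't shown; a minimal sketch of an *index.js* route that exposes sign-in state to the view might be (the session properties are illustrative assumptions):

```javascript
// routes/index.js -- a minimal sketch; the actual sample may differ.
var express = require('express');
var router = express.Router();

router.get('/', function (req, res) {
    res.render('index', {
        title: 'MSAL Node & Express Web App',
        // isAuthenticated and account are assumed to be set by the auth routes.
        isAuthenticated: req.session.isAuthenticated,
        username: req.session.account ? req.session.account.username : ''
    });
});

module.exports = router;
```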
++
+3. Finally, update the *users.js* route by replacing the existing code with the following:
-## Test sign in
+
+## Add code for calling the Microsoft Graph API
+
+Create a file named **fetch.js** in the root of your project and add the following code:
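The file isn't reproduced here; a minimal sketch of a helper that calls a web API with a bearer token via **axios** might be:

```javascript
// fetch.js -- a minimal sketch; the sample's actual helper may differ.
var axios = require('axios');

/**
 * Calls a web API endpoint with the given access token.
 * @param {string} endpoint - for example, the Graph /me endpoint
 * @param {string} accessToken - token acquired via MSAL Node
 */
async function fetch(endpoint, accessToken) {
    const options = {
        headers: {
            Authorization: `Bearer ${accessToken}`
        }
    };

    const response = await axios.get(endpoint, options);
    return response.data;
}

module.exports = fetch;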
++
+## Add views for displaying data
+
+1. In the *views* folder, update the *index.hbs* file by replacing the existing code with the following:
++
+2. Still in the same folder, create another file named *id.hbs* for displaying the contents of the user's ID token:
++
+3. Finally, create another file named *profile.hbs* for displaying the result of the call made to Microsoft Graph:
++
+## Register routers and add state management
+
+In the *app.js* file in the root of the project folder, register the routes you have created earlier and add session support for tracking authentication state using the **express-session** package. Replace the existing code there with the following:
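The full file isn't shown; a minimal sketch of the session and router wiring, with illustrative router names, might be:

```javascript
// app.js -- a minimal sketch of session and router registration.
var express = require('express');
var session = require('express-session');

var indexRouter = require('./routes/index');
var usersRouter = require('./routes/users');
// The auth router name is an assumption for illustration.
var authRouter = require('./routes/auth');

var app = express();

// Track authentication state in a server-side session cookie.
app.use(session({
    secret: process.env.EXPRESS_SESSION_SECRET,
    resave: false,
    saveUninitialized: false,
    cookie: { httpOnly: true, secure: false } // set secure: true behind HTTPS
}));

app.use('/', indexRouter);
app.use('/users', usersRouter);
app.use('/auth', authRouter);

module.exports = app;
```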
++
+## Test sign in and call Microsoft Graph
You've completed creation of the application and are now ready to test the app's functionality. 1. Start the Node.js console app by running the following command from within the root of your project folder: ```console
- node index.js
+ npm start
```
-2. Open a browser window and navigate to `http://localhost:3000`. You should see a sign-in screen:
+2. Open a browser window and navigate to `http://localhost:3000`. You should see a welcome page:
++
+3. Select the **Sign in** link. You should see the Azure AD sign-in screen:
:::image type="content" source="media/tutorial-v2-nodejs-webapp-msal/sign-in-screen.png" alt-text="Azure AD sign-in screen displaying":::
-3. Once you enter your credentials, you should see a consent screen asking you to approve the permissions for the app.
+4. Once you enter your credentials, you should see a consent screen asking you to approve the permissions for the app.
:::image type="content" source="media/tutorial-v2-nodejs-webapp-msal/consent-screen.png" alt-text="Azure AD consent screen displaying":::
+5. Once you consent, you should be redirected back to the application home page.
++
+6. Select the **View ID Token** link to display the contents of the signed-in user's ID token.
++
+7. Go back to the home page, and select the **Acquire an access token and call the Microsoft Graph API** link. Once you do, you should see the response from the Microsoft Graph /me endpoint for the signed-in user.
++
+8. Go back to the home page, and select the **Sign out** link. You should see the Azure AD sign-out screen.
++ ## How the application works
-In this tutorial, you initialized an MSAL Node [ConfidentialClientApplication](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/initialize-confidential-client-application.md) object by passing it a configuration object (*msalConfig*) that contains parameters obtained from your Azure AD app registration on Azure portal. The web app you created uses the [OAuth 2.0 Authorization code grant flow](./v2-oauth2-auth-code-flow.md) to sign-in users and obtain ID and access tokens.
+In this tutorial, you instantiated an MSAL Node [ConfidentialClientApplication](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/initialize-confidential-client-application.md) object by passing it a configuration object (*msalConfig*) that contains parameters obtained from your Azure AD app registration on the Azure portal. The web app you created uses the [OpenID Connect protocol](./v2-protocols-oidc.md) to sign in users and the [OAuth 2.0 authorization code grant flow](./v2-oauth2-auth-code-flow.md) to obtain access tokens.
## Next steps
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
The AADLoginForWindows extension must install successfully in order for the VM t
1. Ensure the required endpoints are accessible from the VM using PowerShell:
- - `curl https://login.microsoftonline.com/ -D -`
- - `curl https://login.microsoftonline.com/<TenantID>/ -D -`
- - `curl https://enterpriseregistration.windows.net/ -D -`
- - `curl https://device.login.microsoftonline.com/ -D -`
- - `curl https://pas.windows.net/ -D -`
+ - `curl.exe https://login.microsoftonline.com/ -D -`
+ - `curl.exe https://login.microsoftonline.com/<TenantID>/ -D -`
+ - `curl.exe https://enterpriseregistration.windows.net/ -D -`
+ - `curl.exe https://device.login.microsoftonline.com/ -D -`
+ - `curl.exe https://pas.windows.net/ -D -`
> [!NOTE]
- > Replace `<TenantID>` with the Azure AD Tenant ID that is associated with the Azure subscription.<br/> `enterpriseregistration.windows.net` and `pas.windows.net` should return 404 Not Found, which is expected behavior.
+ > Replace `<TenantID>` with the Azure AD Tenant ID that is associated with the Azure subscription.<br/> `login.microsoftonline.com/<TenantID>`, `enterpriseregistration.windows.net`, and `pas.windows.net` should return 404 Not Found, which is expected behavior.
1. The Device State can be viewed by running `dsregcmd /status`. The goal is for Device State to show as `AzureAdJoined : YES`.
active-directory Hybrid Azuread Join Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/hybrid-azuread-join-control.md
Use the following example to create a Group Policy Object (GPO) to deploy a regi
### Configure AD FS settings
-If you're using AD FS, you first need to configure client-side SCP using the instructions mentioned earlier by linking the GPO to your AD FS servers. The SCP object defines the source of authority for device objects. It can be on-premises or Azure AD. When client-side SCP is configured for AD FS, the source for device objects is established as Azure AD.
+If your Azure AD is federated with AD FS, you first need to configure client-side SCP using the instructions mentioned earlier by linking the GPO to your AD FS servers. The SCP object defines the source of authority for device objects. It can be on-premises or Azure AD. When client-side SCP is configured for AD FS, the source for device objects is established as Azure AD.
> [!NOTE]
-> If you failed to configure client-side SCP on your AD FS servers, the source for device identities would be considered as on-premises. ADFS will then start deleting device objects from on-premises directory after the stipulated period defined in the ADFS Device Registration's attribute "MaximumInactiveDays". ADFS Device Registration objects can be found using the [Get-AdfsDeviceRegistration cmdlet](/powershell/module/adfs/get-adfsdeviceregistration).
+> If you failed to configure client-side SCP on your AD FS servers, the source for device identities would be considered as on-premises. AD FS will then start deleting device objects from on-premises directory after the stipulated period defined in the AD FS Device Registration's attribute "MaximumInactiveDays". AD FS Device Registration objects can be found using the [Get-AdfsDeviceRegistration cmdlet](/powershell/module/adfs/get-adfsdeviceregistration).
## Supporting down-level devices
active-directory Licensing Groups Assign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-groups-assign.md
Previously updated : 12/02/2020 Last updated : 05/26/2022
In this example, the Azure AD organization contains a security group called **HR
> [!NOTE] > Some Microsoft services are not available in all locations. Before a license can be assigned to a user, the administrator has to specify the Usage location property on the user. >
-> For group license assignment, any users without a usage location specified inherit the location of the directory. If you have users in multiple locations, we recommend that you always set usage location as part of your user creation flow in Azure AD (e.g. via AAD Connect configuration) - that ensures the result of license assignment is always correct and users do not receive services in locations that are not allowed.
+> For group license assignment, any users without a usage location specified inherit the location of the directory. If you have users in multiple locations, we recommend that you always set usage location as part of your user creation flow in Azure AD. For example, configure Azure AD Connect to set the usage location. This recommendation makes sure the result of license assignment is always correct and users do not receive services in locations that are not allowed.
## Step 1: Assign the required licenses
In this example, the Azure AD organization contains a security group called **HR
1. Select a user or group, and then use the **Select** button at the bottom of the page to confirm your selection.
+ >[!NOTE]
+ >When assigning licenses to a group with service plans that have dependencies on other service plans, both plans must be assigned together in the same group; otherwise, the service plan with the dependency will be disabled.
+ 1. On the **Assign license** page, click **Assignment options**, which displays all service plans included in the two products that we selected previously. Find **Yammer Enterprise** and turn it **Off** to disable that service from the product license. Confirm by clicking **OK** at the bottom of **License options**. ![select service plans for licenses](./media/licensing-groups-assign/assignment-options.png)
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/whats-new-docs.md
Title: "What's new in Azure Active Directory External Identities" description: "New and updated documentation for the Azure Active Directory External Identities." Previously updated : 05/02/2022 Last updated : 06/01/2022
Welcome to what's new in Azure Active Directory External Identities documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the External Identities service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md). +
+## May 2022
+
+### New articles
+
+- [Configure Microsoft cloud settings for B2B collaboration (Preview)](cross-cloud-settings.md)
+
+### Updated articles
+
+- [Configure Microsoft cloud settings for B2B collaboration (Preview)](cross-cloud-settings.md)
+- [Overview: Cross-tenant access with Azure AD External Identities (Preview)](cross-tenant-access-overview.md)
+- [Example: Configure SAML/WS-Fed based identity provider federation with AD FS](direct-federation-adfs.md)
+- [Federation with SAML/WS-Fed identity providers for guest users](direct-federation.md)
+- [External Identities documentation](index.yml)
+- [Quickstart: Add a guest user and send an invitation](b2b-quickstart-add-guest-users-portal.md)
+- [B2B collaboration overview](what-is-b2b.md)
+- [Leave an organization as a B2B collaboration user](leave-the-organization.md)
+- [Configure external collaboration settings](external-collaboration-settings-configure.md)
+- [B2B direct connect overview (Preview)](b2b-direct-connect-overview.md)
+- [Azure Active Directory External Identities: What's new](whats-new-docs.md)
+- [Configure cross-tenant access settings for B2B collaboration (Preview)](cross-tenant-access-settings-b2b-collaboration.md)
+- [Configure cross-tenant access settings for B2B direct connect (Preview)](cross-tenant-access-settings-b2b-direct-connect.md)
+- [Azure AD B2B in government and national clouds](b2b-government-national-clouds.md)
+- [External Identities in Azure Active Directory](external-identities-overview.md)
+- [Troubleshooting Azure Active Directory B2B collaboration](troubleshoot.md)
+ ## April 2022 ### Updated articles
Welcome to what's new in Azure Active Directory External Identities documentatio
- [Leave an organization as a B2B collaboration user](leave-the-organization.md) - [Configure external collaboration settings](external-collaboration-settings-configure.md) - [Reset redemption status for a guest user (Preview)](reset-redemption-status.md)-
-## February 2022
-
-### Updated articles
--- [Add Google as an identity provider for B2B guest users](google-federation.md)-- [External Identities in Azure Active Directory](external-identities-overview.md)-- [Overview: Cross-tenant access with Azure AD External Identities (Preview)](cross-tenant-access-overview.md)-- [B2B collaboration overview](what-is-b2b.md)-- [Federation with SAML/WS-Fed identity providers for guest users (preview)](direct-federation.md)-- [Quickstart: Add a guest user with PowerShell](b2b-quickstart-invite-powershell.md)-- [Tutorial: Bulk invite Azure AD B2B collaboration users](tutorial-bulk-invite.md)-- [Azure Active Directory B2B best practices](b2b-fundamentals.md)-- [Azure Active Directory B2B collaboration FAQs](faq.yml)-- [Email one-time passcode authentication](one-time-passcode.md)-- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)-- [Troubleshooting Azure Active Directory B2B collaboration](troubleshoot.md)-- [Properties of an Azure Active Directory B2B collaboration user](user-properties.md)-- [Authentication and Conditional Access for External Identities](authentication-conditional-access.md)
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
The What's new in Azure Active Directory? release notes provide information abou
+## November 2021
+
+### Tenant enablement of combined security information registration for Azure Active Directory
+
+**Type:** Plan for change
+**Service category:** MFA
+**Product capability:** Identity Security & Protection
+
+We previously announced in April 2020 that a new combined registration experience, enabling users to register authentication methods for SSPR and multi-factor authentication at the same time, was generally available for existing customers to opt in to. Any Azure AD tenant created after August 2020 automatically has the default experience set to combined registration. Starting in 2022, Microsoft will be enabling the combined MFA and SSPR registration experience for existing tenants.
+
++
+### Windows users will see prompts more often when switching user accounts
+
+**Type:** Fixed
+**Service category:** Authentications (Logins)
+**Product capability:** User Authentication
+
+A problematic interaction between Windows and a local Active Directory Federation Services (ADFS) instance can result in users attempting to sign into another account, but being silently signed into their existing account instead, with no warning. For federated IdPs such as ADFS that support the [prompt=login](/windows-server/identity/ad-fs/operations/ad-fs-prompt-login) pattern, Azure AD will now trigger a fresh login at ADFS when a user is directed to ADFS with a login hint. This ensures that the user is signed into the account they requested, rather than being silently signed into the account they're already signed in with.
+
+For more information, see the [change notice](../develop/reference-breaking-changes.md).
+
++
+### Public preview - Conditional Access Overview Dashboard
+
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** Monitoring & Reporting
+
+The new Conditional Access overview dashboard enables all tenants to see insights about the impact of their Conditional Access policies without requiring an Azure Monitor subscription. This built-in dashboard provides tutorials to deploy policies, a summary of the policies in your tenant, a snapshot of your policy coverage, and security recommendations. [Learn more](../conditional-access/overview.md).
+
++
+### Public preview - SSPR writeback is now available for disconnected forests using Azure AD Connect cloud sync
+
+**Type:** New feature
+**Service category:** Azure AD Connect Cloud Sync
+**Product capability:** Identity Lifecycle Management
+
+The Public Preview feature for Azure AD Connect Cloud Sync password writeback provides customers the capability to write back a user's password changes in the cloud to the on-premises directory in real time using the lightweight Azure AD cloud provisioning agent. [Learn more](../authentication/tutorial-enable-cloud-sync-sspr-writeback.md).
+++
+### Public preview - Conditional Access for workload identities
+
+**Type:** New feature
+**Service category:** Conditional Access for workload identities
+**Product capability:** Identity Security & Protection
+
+Previously, Conditional Access policies applied only to users when they access apps and services like SharePoint online or the Azure portal. This preview adds support for Conditional Access policies applied to service principals owned by the organization. You can block service principals from accessing resources from outside trusted-named locations or Azure Virtual Networks. [Learn more](../conditional-access/workload-identity.md).
+++
+### Public preview - Extra attributes available as claims
+
+**Type:** Changed feature
+**Service category:** Enterprise Apps
+**Product capability:** SSO
+
+Several user attributes have been added to the list of attributes available to map to claims to bring attributes available in claims more in line with what is available on the user object in Microsoft Graph. New attributes include mobilePhone and ProxyAddresses. [Learn more](../develop/reference-claims-mapping-policy-type.md#table-3-valid-id-values-per-source).
+
++
+### Public preview - "Session Lifetime Policies Applied" property in the sign-in logs
+
+**Type:** New feature
+**Service category:** Authentications (Logins)
+**Product capability:** Identity Security & Protection
+
+We have recently added another property to the sign-in logs called "Session Lifetime Policies Applied". This property lists all the session lifetime policies that applied to the sign-in, for example: sign-in frequency, remember multi-factor authentication, and configurable token lifetime. [Learn more](../reports-monitoring/concept-sign-ins.md#authentication-details).
+
++
+### Public preview - Enriched reviews on access packages in entitlement management
+
+**Type:** New feature
+**Service category:** User Access Management
+**Product capability:** Entitlement Management
+
+Entitlement Management's enriched review experience allows even more flexibility on access packages reviews. Admins can now choose what happens to access if the reviewers don't respond, provide helper information to reviewers, or decide whether a justification is necessary. [Learn more](../governance/entitlement-management-access-reviews-create.md).
+
++
+### General availability - randomString and redact provisioning functions
+
+**Type:** New feature
+**Service category:** Provisioning
+**Product capability:** Outbound to SaaS Applications
+
+
+The Azure AD Provisioning service now supports two new functions, randomString() and redact():
+- randomString - generate a string based on the length and characters you would like to include or exclude in your string.
+- redact - remove the value of the attribute from the audit and provisioning logs. [Learn more](../app-provisioning/functions-for-customizing-application-data.md#randomstring).
+++
+### General availability - Now access review creators can select users and groups to receive notification on completion of reviews
+
+**Type:** New feature
+**Service category:** Access Reviews
+**Product capability:** Identity Governance
+
+Now access review creators can select users and groups to receive notification on completion of reviews. [Learn more](../governance/create-access-review.md).
+
+
+
+### General availability - Azure AD users can now view and report suspicious sign-ins and manage their accounts within Microsoft Authenticator
+
+**Type:** New feature
+**Service category:** Microsoft Authenticator App
+**Product capability:** Identity Security & Protection
+
+This feature allows Azure AD users to manage their work or school accounts within the Microsoft Authenticator app. The management features will allow users to view sign-in history and sign-in activity. Users can also report any suspicious or unfamiliar activity, change their Azure AD account passwords, and update the account's security information.
+
+For more information on how to use this feature, visit [View and search your recent sign-in activity from the My Sign-ins page](../user-help/my-account-portal-sign-ins-page.md).
+++
+### General availability - New Microsoft Authenticator app icon
+
+**Type:** New feature
+**Service category:** Microsoft Authenticator App
+**Product capability:** Identity Security & Protection
+
+New updates have been made to the Microsoft Authenticator app icon. To learn more about these updates, see the [Microsoft Authenticator app](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/microsoft-authenticator-app-easier-ways-to-add-or-manage/ba-p/2464408) blog post.
+++
+### General availability - Azure AD single Sign-on and device-based Conditional Access support in Firefox on Windows 10/11
+
+**Type:** New feature
+**Service category:** Authentications (Logins)
+**Product capability:** SSO
+
+We now support native single sign-on (SSO) support and device-based Conditional Access to Firefox browser on Windows 10 and Windows Server 2019 starting in Firefox version 91. [Learn more](../conditional-access/require-managed-devices.md#prerequisites).
+
++
+### New provisioning connectors in the Azure AD Application Gallery - November 2021
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+
+- [Appaegis Isolation Access Cloud](../saas-apps/appaegis-isolation-access-cloud-provisioning-tutorial.md)
+- [BenQ IAM](../saas-apps/benq-iam-provisioning-tutorial.md)
+- [BIC Cloud Design](../saas-apps/bic-cloud-design-provisioning-tutorial.md)
+- [Chaos](../saas-apps/chaos-provisioning-tutorial.md)
+- [directprint.io](../saas-apps/directprint-io-provisioning-tutorial.md)
+- [Documo](../saas-apps/documo-provisioning-tutorial.md)
+- [Facebook Work Accounts](../saas-apps/facebook-work-accounts-provisioning-tutorial.md)
+- [introDus Pre and Onboarding Platform](../saas-apps/introdus-pre-and-onboarding-platform-provisioning-tutorial.md)
+- [Kisi Physical Security](../saas-apps/kisi-physical-security-provisioning-tutorial.md)
+- [Klaxoon](../saas-apps/klaxoon-provisioning-tutorial.md)
+- [Klaxoon SAML](../saas-apps/klaxoon-saml-provisioning-tutorial.md)
+- [MX3 Diagnostics](../saas-apps/mx3-diagnostics-connector-provisioning-tutorial.md)
+- [Netpresenter](../saas-apps/netpresenter-provisioning-tutorial.md)
+- [Peripass](../saas-apps/peripass-provisioning-tutorial.md)
+- [Real Links](../saas-apps/real-links-provisioning-tutorial.md)
+- [Sentry](../saas-apps/sentry-provisioning-tutorial.md)
+- [Teamgo](../saas-apps/teamgo-provisioning-tutorial.md)
+- [Zero](../saas-apps/zero-provisioning-tutorial.md)
+
+For more information about how to better secure your organization by using automated user account provisioning, see [Automate user provisioning to SaaS applications with Azure AD](../manage-apps/user-provisioning.md).
+
++
+### New Federated Apps available in Azure AD Application gallery - November 2021
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+
+In November 2021, we added the following 32 new applications in our App gallery with Federation support:
+
+[Tide - Connector](https://gallery.ctinsuretech-tide.com/), [Virtual Risk Manager - USA](../saas-apps/virtual-risk-manager-usa-tutorial.md), [Xorlia Policy Management](https://app.xoralia.com/), [WorkPatterns](https://app.workpatterns.com/oauth2/login?data_source_type=office_365_account_calendar_workspace_sync&utm_source=azure_sso), [GHAE](../saas-apps/ghae-tutorial.md), [Nodetrax Project](../saas-apps/nodetrax-project-tutorial.md), [Touchstone Benchmarking](https://app.touchstonebenchmarking.com/), [SURFsecureID - Azure MFA](../saas-apps/surfsecureid-azure-mfa-tutorial.md), [AiDEA](https://truebluecorp.com/en/prodotti/aidea-en/),[R and D Tax Credit
+
+You can also find the documentation of all the applications [here](../saas-apps/tutorial-list.md).
+
+For listing your application in the Azure AD app gallery, read the details [here](../manage-apps/v2-howto-app-gallery-listing.md).
+++
+### Updated "switch organizations" user experience in My Account.
+
+**Type:** Changed feature
+**Service category:** My Profile/Account
+**Product capability:** End User Experiences
+
+Updated "switch organizations" user interface in My Account. This visually improves the UI and provides the end-user with clear instructions. Added a manage organizations link to blade per customer feedback. [Learn more](https://support.microsoft.com/account-billing/switch-organizations-in-your-work-or-school-account-portals-c54c32c9-2f62-4fad-8c23-2825ed49d146).
+
++ ## October 2021 ### Limits on the number of configured API permissions for an application registration will be enforced starting in October 2021
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
Azure AD receives improvements on an ongoing basis. To stay up to date with the
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Active Directory](whats-new-archive.md).
+## May 2022
+
+### General Availability: Tenant-based service outage notifications
+
+**Type:** Plan for change
+**Service category:** Other
+**Product capability:** Platform
+
+
+Azure Service Health will soon support service outage notifications to tenant admins for Azure Active Directory issues. These outages will also appear on the Azure AD admin portal overview page with appropriate links to Azure Service Health. Outage events can be viewed by built-in tenant administrator roles. We will continue to send outage notifications to subscriptions within a tenant for a transition period. More information will be available when this capability is released. The expected release is June 2022.
+
++++
+### New Federated Apps available in Azure AD Application gallery - May 2022
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+
++
+In May 2022, we've added the following 25 new applications in our App gallery with Federation support:
+
+[UserZoom](../saas-apps/userzoom-tutorial.md), [AMX Mobile](https://www.amxsolutions.co.uk/), [i-Sight](../saas-apps/isight-tutorial.md), [Method InSight](https://digital.methodrecycling.com/), [Chronus SAML](../saas-apps/chronus-saml-tutorial.md), [Attendant Console for Microsoft Teams](https://attendant.anywhere365.io/), [Skopenow](../saas-apps/skopenow-tutorial.md), [Fidelity PlanViewer](../saas-apps/fidelity-planviewer-tutorial.md), [Lyve Cloud](../saas-apps/lyve-cloud-tutorial.md), [Framer](../saas-apps/framer-tutorial.md), [Authomize](../saas-apps/authomize-tutorial.md), [gamba!](../saas-apps/gamba-tutorial.md), [Datto File Protection Single Sign On](../saas-apps/datto-file-protection-tutorial.md), [LONEALERT](https://portal.lonealert.co.uk/auth/azure/saml/signin), [Payfactors](https://pf.payfactors.com/client/auth/login), [deBroome Brand Portal](../saas-apps/debroome-brand-portal-tutorial.md), [TeamSlide](../saas-apps/teamslide-tutorial.md), [Sensera Systems](https://sitecloud.senserasystems.com/), [YEAP](https://prismaonline.propay.be/logon/login.aspx), [Monaca Education](https://monaca.education/j), [OpenForms](https://login.openforms.com/Login).
+
+You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial.
+
+For listing your application in the Azure AD app gallery, please read the details here: https://aka.ms/AzureADAppRequest.
+++
+
++
+
+
### General Availability – My Apps users can make apps from URLs (add sites)
+
+**Type:** New feature
+**Service category:** My Apps
+**Product capability:** End User Experiences
+
+
+When editing a collection using the My Apps portal, users can now add their own sites, in addition to adding apps that have been assigned to them by an admin. To add a site, users must provide a name and URL. For more information on how to use this feature, see: [Customize app collections in the My Apps portal](https://support.microsoft.com/account-billing/customize-app-collections-in-the-my-apps-portal-2dae6b8a-d8b0-4a16-9a5d-71ed4d6a6c1d).
+
++
+
+
+### Public preview - New provisioning connectors in the Azure AD Application Gallery - May 2022
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+
+You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+
+- [Alinto Protect](../saas-apps/alinto-protect-provisioning-tutorial.md)
+- [Blinq](../saas-apps/blinq-provisioning-tutorial.md)
+- [Cerby](../saas-apps/cerby-provisioning-tutorial.md)
+
+For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
+
++
+
+
+### Public Preview: Confirm safe and compromised in signIns API beta
+
+**Type:** New feature
+**Service category:** Identity Protection
+**Product capability:** Identity Security & Protection
+
+
+The signIns Microsoft Graph API now supports confirming safe and compromised on risky sign-ins. This public preview functionality is available at the beta endpoint. For more information, please check out the Microsoft Graph documentation: [signIn: confirmSafe - Microsoft Graph beta | Microsoft Docs](/graph/api/signin-confirmsafe?view=graph-rest-beta&preserve-view=true)
+
++
+
+
+### Public Preview of Microsoft cloud settings for Azure AD B2B
+
+**Type:** New feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+**Clouds impacted:** China;Public (M365,GCC);US Gov (GCC-H, DoD)
+
+
+Microsoft cloud settings let you collaborate with organizations from different Microsoft Azure clouds. With Microsoft cloud settings, you can establish mutual B2B collaboration between the following clouds:
+
+- Microsoft Azure global cloud and Microsoft Azure Government
+- Microsoft Azure global cloud and Microsoft Azure China 21Vianet
+
+To learn more about Microsoft cloud settings for B2B collaboration, see: [Cross-tenant access overview - Azure AD | Microsoft Docs](../external-identities/cross-tenant-access-overview.md#microsoft-cloud-settings).
+
++
+
+
+### General Availability of SAML and WS-Fed federation in External Identities
+
+**Type:** Changed feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+**Clouds impacted:** Public (M365,GCC);US Gov (GCC-H, DoD)
+
+
+When setting up federation with a partner's IdP, new guest users from that domain can use their own IdP-managed organizational account to sign in to your Azure AD tenant and start collaborating with you. There's no need for the guest user to create a separate Azure AD account. To learn more about federating with SAML or WS-Fed identity providers in External Identities, see: [Federation with a SAML/WS-Fed identity provider (IdP) for B2B - Azure AD | Microsoft Docs](../external-identities/direct-federation.md).
+
++
+
+
+### Public Preview - Create Group in Administrative Unit
+
+**Type:** Changed feature
+**Service category:** Directory Management
+**Product capability:** Access Control
+**Clouds impacted:** China;Public (M365,GCC);US Gov (GCC-H, DoD)
+
+
+Groups Administrators assigned over the scope of an administrative unit can now create groups within the administrative unit. This enables scoped group administrators to create groups that they can manage directly, without needing to elevate to Global Administrator or Privileged Role Administrator. For more information, see: [Administrative units in Azure Active Directory](../roles/administrative-units.md).
+
++
+
+
+### Public Preview - Dynamic administrative unit support for onPremisesDistinguishedName property
+
+**Type:** Changed feature
+**Service category:** Directory Management
+**Product capability:** AuthZ/Access Delegation
+**Clouds impacted:** Public (M365,GCC)
+
+
+The public preview of dynamic administrative units now supports the **onPremisesDistinguishedName** property for users. This makes it possible to create dynamic rules which incorporate the organizational unit of the user from on-premises AD. For more information, see: [Manage users or devices for an administrative unit with dynamic membership rules (Preview)](../roles/admin-units-members-dynamic.md).
+
++
+
+
+### General Availability - Improvements to Azure AD Smart Lockout
+
+**Type:** Changed feature
+**Service category:** Other
+**Product capability:** User Management
+**Clouds impacted:** China;Public (M365,GCC);US Gov (GCC-H, DoD);US Nat;US Sec
+
+
+Smart Lockout now synchronizes the lockout state across Azure AD data centers, so the total number of failed sign-in attempts allowed before an account is locked out will match the configured lockout threshold. For more information, see: [Protect user accounts from attacks with Azure Active Directory smart lockout](../authentication/howto-password-smart-lockout.md).
+
++
+
++ ## April 2022 ### General Availability - Microsoft Defender for Endpoint Signal in Identity Protection
We highly recommend enabling this new protection when using Azure AD Multi-Facto
**Service category:** Enterprise Apps **Product capability:** Third Party Integration
-In April 2022 we added the following 24 new applications in our App gallery with Federation support
+In April 2022 we added the following 24 new applications in our App gallery with Federation support:
[X-1FBO](https://www.x1fbo.com/), [select Armor](https://app.clickarmor.c) You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial.
We're no longer publishing sign-in logs with the following error codes because
-## November 2021
-
-### Tenant enablement of combined security information registration for Azure Active Directory
-
-**Type:** Plan for change
-**Service category:** MFA
-**Product capability:** Identity Security & Protection
-
-We previously announced in April 2020, a new combined registration experience enabling users to register authentication methods for SSPR and multi-factor authentication at the same time was generally available for existing customer to opt in. Any Azure AD tenants created after August 2020 automatically have the default experience set to combined registration. Starting 2022, Microsoft will be enabling the MF).
-
--
-### Windows users will see prompts more often when switching user accounts
-
-**Type:** Fixed
-**Service category:** Authentications (Logins)
-**Product capability:** User Authentication
-
-A problematic interaction between Windows and a local Active Directory Federation Services (ADFS) instance can result in users attempting to sign into another account, but be silently signed into their existing account instead, with no warning. For federated IdPs such as ADFS, that support the [prompt=login](/windows-server/identity/ad-fs/operations/ad-fs-prompt-login) pattern, Azure AD will now trigger a fresh login at ADFS when a user is directed to ADFS with a login hint. This ensures that the user is signed into the account they requested, rather than being silently signed into the account they're already signed in with.
-
-For more information, see the [change notice](../develop/reference-breaking-changes.md).
-
--
-### Public preview - Conditional Access Overview Dashboard
-
-**Type:** New feature
-**Service category:** Conditional Access
-**Product capability:** Monitoring & Reporting
-
-The new Conditional Access overview dashboard enables all tenants to see insights about the impact of their Conditional Access policies without requiring an Azure Monitor subscription. This built-in dashboard provides tutorials to deploy policies, a summary of the policies in your tenant, a snapshot of your policy coverage, and security recommendations. [Learn more](../conditional-access/overview.md).
-
--
-### Public preview - SSPR writeback is now available for disconnected forests using Azure AD Connect cloud sync
-
-**Type:** New feature
-**Service category:** Azure AD Connect Cloud Sync
-**Product capability:** Identity Lifecycle Management
-
-The Public Preview feature for Azure AD Connect Cloud Sync Password writeback provides customers the capability to writeback a user's password changes in the cloud to the on-premises directory in real time using the lightweight Azure AD cloud provisioning agent.[Learn more](../authentication/tutorial-enable-cloud-sync-sspr-writeback.md).
---
-### Public preview - Conditional Access for workload identities
-
-**Type:** New feature
-**Service category:** Conditional Access for workload identities
-**Product capability:** Identity Security & Protection
-
-Previously, Conditional Access policies applied only to users when they access apps and services like SharePoint online or the Azure portal. This preview adds support for Conditional Access policies applied to service principals owned by the organization. You can block service principals from accessing resources from outside trusted-named locations or Azure Virtual Networks. [Learn more](../conditional-access/workload-identity.md).
---
-### Public preview - Extra attributes available as claims
-
-**Type:** Changed feature
-**Service category:** Enterprise Apps
-**Product capability:** SSO
-
-Several user attributes have been added to the list of attributes available to map to claims to bring attributes available in claims more in line with what is available on the user object in Microsoft Graph. New attributes include mobilePhone and ProxyAddresses. [Learn more](../develop/reference-claims-mapping-policy-type.md#table-3-valid-id-values-per-source).
-
--
-### Public preview - "Session Lifetime Policies Applied" property in the sign-in logs
-
-**Type:** New feature
-**Service category:** Authentications (Logins)
-**Product capability:** Identity Security & Protection
-
-We have recently added other property to the sign-in logs called "Session Lifetime Policies Applied". This property will list all the session lifetime policies that applied to the sign-in for example, Sign-in frequency, Remember multi-factor authentication and Configurable token lifetime. [Learn more](../reports-monitoring/concept-sign-ins.md#authentication-details).
-
--
-### Public preview - Enriched reviews on access packages in entitlement management
-**Type:** New feature
-**Service category:** User Access Management
-**Product capability:** Entitlement Management
-
-Entitlement Management's enriched review experience allows even more flexibility on access packages reviews. Admins can now choose what happens to access if the reviewers don't respond, provide helper information to reviewers, or decide whether a justification is necessary. [Learn more](../governance/entitlement-management-access-reviews-create.md).
-
--
-### General availability - randomString and redact provisioning functions
-
-**Type:** New feature
-**Service category:** Provisioning
-**Product capability:** Outbound to SaaS Applications
-
-
-The Azure AD Provisioning service now supports two new functions, randomString() and Redact():
-- randomString - generate a string based on the length and characters you would like to include or exclude in your string.-- redact - remove the value of the attribute from the audit and provisioning logs. [Learn more](../app-provisioning/functions-for-customizing-application-data.md#randomstring).---
-### General availability - Now access review creators can select users and groups to receive notification on completion of reviews
-
-**Type:** New feature
-**Service category:** Access Reviews
-**Product capability:** Identity Governance
-
-Now access review creators can select users and groups to receive notification on completion of reviews. [Learn more](../governance/create-access-review.md).
-
-
-
-### General availability - Azure AD users can now view and report suspicious sign-ins and manage their accounts within Microsoft Authenticator
-
-**Type:** New feature
-**Service category:** Microsoft Authenticator App
-**Product capability:** Identity Security & Protection
-
-This feature allows Azure AD users to manage their work or school accounts within the Microsoft Authenticator app. The management features will allow users to view sign-in history and sign-in activity. Users can also report any suspicious or unfamiliar activity, change their Azure AD account passwords, and update the account's security information.
-
-For more information on how to use this feature visit [View and search your recent sign-in activity from the My Sign-ins page](../user-help/my-account-portal-sign-ins-page.md).
---
-### General availability - New Microsoft Authenticator app icon
-
-**Type:** New feature
-**Service category:** Microsoft Authenticator App
-**Product capability:** Identity Security & Protection
-
-New updates have been made to the Microsoft Authenticator app icon. To learn more about these updates, see the [Microsoft Authenticator app](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/microsoft-authenticator-app-easier-ways-to-add-or-manage/ba-p/2464408) blog post.
---
-### General availability - Azure AD single Sign-on and device-based Conditional Access support in Firefox on Windows 10/11
-
-**Type:** New feature
-**Service category:** Authentications (Logins)
-**Product capability:** SSO
-
-We now support native single sign-on (SSO) and device-based Conditional Access in the Firefox browser on Windows 10 and Windows Server 2019, starting with Firefox version 91. [Learn more](../conditional-access/require-managed-devices.md#prerequisites).
-
--
-### New provisioning connectors in the Azure AD Application Gallery - November 2021
-
-**Type:** New feature
-**Service category:** App Provisioning
-**Product capability:** 3rd Party Integration
-
-You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
-
-- [Appaegis Isolation Access Cloud](../saas-apps/appaegis-isolation-access-cloud-provisioning-tutorial.md)
-- [BenQ IAM](../saas-apps/benq-iam-provisioning-tutorial.md)
-- [BIC Cloud Design](../saas-apps/bic-cloud-design-provisioning-tutorial.md)
-- [Chaos](../saas-apps/chaos-provisioning-tutorial.md)
-- [directprint.io](../saas-apps/directprint-io-provisioning-tutorial.md)
-- [Documo](../saas-apps/documo-provisioning-tutorial.md)
-- [Facebook Work Accounts](../saas-apps/facebook-work-accounts-provisioning-tutorial.md)
-- [introDus Pre and Onboarding Platform](../saas-apps/introdus-pre-and-onboarding-platform-provisioning-tutorial.md)
-- [Kisi Physical Security](../saas-apps/kisi-physical-security-provisioning-tutorial.md)
-- [Klaxoon](../saas-apps/klaxoon-provisioning-tutorial.md)
-- [Klaxoon SAML](../saas-apps/klaxoon-saml-provisioning-tutorial.md)
-- [MX3 Diagnostics](../saas-apps/mx3-diagnostics-connector-provisioning-tutorial.md)
-- [Netpresenter](../saas-apps/netpresenter-provisioning-tutorial.md)
-- [Peripass](../saas-apps/peripass-provisioning-tutorial.md)
-- [Real Links](../saas-apps/real-links-provisioning-tutorial.md)
-- [Sentry](../saas-apps/sentry-provisioning-tutorial.md)
-- [Teamgo](../saas-apps/teamgo-provisioning-tutorial.md)
-- [Zero](../saas-apps/zero-provisioning-tutorial.md)
-
-For more information about how to better secure your organization by using automated user account provisioning, see [Automate user provisioning to SaaS applications with Azure AD](../manage-apps/user-provisioning.md).
-
--
-### New Federated Apps available in Azure AD Application gallery - November 2021
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
-
-In November 2021, we added the following 32 new applications to our App gallery with Federation support:
-
-[Tide - Connector](https://gallery.ctinsuretech-tide.com/), [Virtual Risk Manager - USA](../saas-apps/virtual-risk-manager-usa-tutorial.md), [Xorlia Policy Management](https://app.xoralia.com/), [WorkPatterns](https://app.workpatterns.com/oauth2/login?data_source_type=office_365_account_calendar_workspace_sync&utm_source=azure_sso), [GHAE](../saas-apps/ghae-tutorial.md), [Nodetrax Project](../saas-apps/nodetrax-project-tutorial.md), [Touchstone Benchmarking](https://app.touchstonebenchmarking.com/), [SURFsecureID - Azure MFA](../saas-apps/surfsecureid-azure-mfa-tutorial.md), [AiDEA](https://truebluecorp.com/en/prodotti/aidea-en/),[R and D Tax Credit
-
-You can also find the documentation of all the applications [here](../saas-apps/tutorial-list.md).
-
-For listing your application in the Azure AD app gallery, read the details [here](../manage-apps/v2-howto-app-gallery-listing.md).
---
-### Updated "switch organizations" user experience in My Account.
-
-**Type:** Changed feature
-**Service category:** My Profile/Account
-**Product capability:** End User Experiences
-
-Updated "switch organizations" user interface in My Account. This visually improves the UI and provides the end-user with clear instructions. Added a manage organizations link to blade per customer feedback. [Learn more](https://support.microsoft.com/account-billing/switch-organizations-in-your-work-or-school-account-portals-c54c32c9-2f62-4fad-8c23-2825ed49d146).
-
-
-
active-directory Howto Identity Protection Remediate Unblock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-remediate-unblock.md
If a password reset isn't an option for you, you can choose to dismiss user risk
When you select **Dismiss user risk**, all events are closed and the affected user is no longer at risk. However, because this method doesn't have an impact on the existing password, it doesn't bring the related identity back into a safe state.
+To **Dismiss user risk**, search for and select **Azure AD Risky users**, select the affected user, and select **Dismiss user(s) risk**.
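+If you prefer to script this step, risky users can also be dismissed through the Microsoft Graph API. A minimal sketch (the GUID is a placeholder, and it assumes an access token with the IdentityRiskyUser.ReadWrite.All permission in the `ACCESS_TOKEN` variable):

```bash
# Dismiss the risk on one or more users via Microsoft Graph.
curl -X POST "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers/dismiss" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "userIds": [ "00000000-0000-0000-0000-000000000000" ] }'
```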
+
### Close individual risk detections manually

You can close individual risk detections manually. By closing risk detections manually, you can lower the user risk level. Typically, risk detections are closed manually in response to a related investigation. For example, when talking to a user reveals that an active risk detection isn't required anymore.
active-directory Admin Consent Workflow Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/admin-consent-workflow-faq.md
- Previously updated : 11/17/2021+ Last updated : 05/27/2022
active-directory App Management Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/app-management-videos.md
+
+ Title: Application management videos
+description: A list of videos about app registrations, enterprise apps, consent and permissions, and app ownership and assignment in Azure AD
++++++++ Last updated : 05/31/2022++++
+# Application management videos
+
+Learn about the key concepts of application management, such as app registrations vs. enterprise apps, the consent and permissions framework, and app ownership and user assignment.
+
+## App registrations and Enterprise apps
+
+Learn about the different use cases and personas involved in App Registrations and Enterprise Apps and how developers and admins interact with each option to manage applications in Azure AD.
+___
+
+ :::column:::
+ [What is the difference between app registrations and enterprise apps?](https://www.youtube.com/watch?v=JeahL9ZtGfQ&list=PLlrxD0HtieHiBPIyUWkqVzoMrgfwKi4dY&index=4&t=2s)(2:01)
+ :::column-end:::
+ :::column:::
+ >[!Video https://www.youtube.com/embed/JeahL9ZtGfQ]
+ :::column-end:::
+++
+## Consent and permissions for admins
+
+Learn about the options available for managing consent to applications in a tenant. Learn about delegated permissions and how to revoke previously consented permissions to mitigate risks posed by malicious applications.
+___
+
+ :::column:::
+ 1 - [How do I turn on the admin consent workflow?](https://www.youtube.com/watch?v=19v7WSt9HwU&list=PLlrxD0HtieHiBPIyUWkqVzoMrgfwKi4dY&index=4)(1:04)
+ :::column-end:::
+ :::column:::
+ >[!Video https://www.youtube.com/embed/19v7WSt9HwU]
+ :::column-end:::
+ :::column:::
+ 2 - [How do I grant admin consent in the Azure AD portal](https://www.youtube.com/watch?v=LSYcelwdhHI&list=PLlrxD0HtieHiBPIyUWkqVzoMrgfwKi4dY&index=5)(1:19)
+ :::column-end:::
+ :::column:::
+ >[!Video https://www.youtube.com/embed/LSYcelwdhHI]
+ :::column-end:::
+ :::column:::
+ 3 - [How do delegated permissions work](https://www.youtube.com/watch?v=URTrOXCyH1s&list=PLlrxD0HtieHiBPIyUWkqVzoMrgfwKi4dY&index=7)(1:21)
+ :::column-end:::
+ :::column:::
+ >[!Video https://www.youtube.com/embed/URTrOXCyH1s]
+ :::column-end:::
+ :::column:::
+ 4 - [How do I revoke permissions I've previously consented to for an app](https://www.youtube.com/watch?v=A88uh7ICNJU&list=PLlrxD0HtieHiBPIyUWkqVzoMrgfwKi4dY&index=6)(1:34)
+ :::column-end:::
+ :::column:::
+ >[!Video https://www.youtube.com/embed/A88uh7ICNJU]
+ :::column-end:::
++
+## Assigning owners and users to an enterprise app
+Learn about who can assign owners to service principals, how to assign these owners, permissions that owners have, and what to do when an owner leaves the organization.
+Learn how to assign users and groups to an enterprise application, and how and why an enterprise app may show up in a tenant.
+___
+
+ :::column:::
+ 1 - [How can you ensure healthy ownership to manage your Azure AD app ecosystem?](https://www.youtube.com/watch?v=akOrP3mP4UQ&list=PLlrxD0HtieHiBPIyUWkqVzoMrgfwKi4dY&index=1)(2:13)
+ :::column-end:::
+ :::column:::
+ >[!Video https://www.youtube.com/embed/akOrP3mP4UQ]
+ :::column-end:::
+ :::column:::
+ 2 - [How do I manage who can access the applications in my tenant](https://www.youtube.com/watch?v=IVRI9mSPDBA&list=PLlrxD0HtieHiBPIyUWkqVzoMrgfwKi4dY&index=2)(1:48)
+ :::column-end:::
+ :::column:::
+ >[!Video https://www.youtube.com/embed/IVRI9mSPDBA]
+ :::column-end:::
+ :::column:::
+ 3 - [Why is this app in my tenant?](https://www.youtube.com/watch?v=NhbcVt5xOVI&list=PLlrxD0HtieHiBPIyUWkqVzoMrgfwKi4dY&index=8)(1:36)
+ :::column-end:::
+ :::column:::
+ >[!Video https://www.youtube.com/embed/NhbcVt5xOVI]
+ :::column-end:::
+ :::column:::
+
+ :::column-end:::
+ :::column:::
+
+ :::column-end:::
active-directory Configure Admin Consent Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-admin-consent-workflow.md
Previously updated : 03/22/2021 Last updated : 05/27/2022
active-directory Debug Saml Sso Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/debug-saml-sso-issues.md
Previously updated : 02/18/2019 Last updated : 05/27/2022 # Debug SAML-based single sign-on to applications
Learn how to find and fix [single sign-on](what-is-single-sign-on.md) issues for
## Before you begin
-We recommend installing the [My Apps Secure Sign-in Extension](https://support.microsoft.com/account-billing/troubleshoot-problems-with-the-my-apps-portal-d228da80-fcb7-479c-b960-a1e2535cbdff#im-having-trouble-installing-the-my-apps-secure-sign-in-extension). This browser extension makes it easy to gather the SAML request and SAML response information that you need to resolving issues with single sign-on. In case you cannot install the extension, this article shows you how to resolve issues both with and without the extension installed.
+We recommend installing the [My Apps Secure Sign-in Extension](https://support.microsoft.com/account-billing/troubleshoot-problems-with-the-my-apps-portal-d228da80-fcb7-479c-b960-a1e2535cbdff#im-having-trouble-installing-the-my-apps-secure-sign-in-extension). This browser extension makes it easy to gather the SAML request and SAML response information that you need to resolve issues with single sign-on. In case you can't install the extension, this article shows you how to resolve issues both with and without the extension installed.
To download and install the My Apps Secure Sign-in Extension, use one of the following links.
To test SAML-based single sign-on between Azure AD and a target application:
![Screenshot showing the test SAML SSO page](./media/debug-saml-sso-issues/test-single-sign-on.png)
-If you are successfully signed in, the test has passed. In this case, Azure AD issued a SAML response token to the application. The application used the SAML token to successfully sign you in.
+If you're successfully signed in, the test has passed. In this case, Azure AD issued a SAML response token to the application. The application used the SAML token to successfully sign you in.
If you have an error on the company sign-in page or the application's page, use one of the next sections to resolve the error.
To debug this error, you need the error message and the SAML request. The My App
1. When an error occurs, the extension redirects you back to the Azure AD **Test single sign-on** blade.
1. On the **Test single sign-on** blade, select **Download the SAML request**.
1. You should see specific resolution guidance based on the error and the values in the SAML request.
-1. You will see a **Fix it** button to automatically update the configuration in Azure AD to resolve the issue. If you don't see this button, then the sign-in issue is not due to a misconfiguration on Azure AD.
+1. You'll see a **Fix it** button to automatically update the configuration in Azure AD to resolve the issue. If you don't see this button, then the sign-in issue isn't due to a misconfiguration on Azure AD.
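If you can't use the extension, you can decode a downloaded or captured SAML request yourself. A minimal sketch, assuming a redirect-binding request (URL-encoded, base64-encoded, raw-DEFLATE-compressed) stored in the shell variable `SAML_REQUEST`:

```bash
# Decode a redirect-binding SAMLRequest: URL-decode, base64-decode, then inflate.
python3 -c '
import base64, sys, urllib.parse, zlib
raw = urllib.parse.unquote(sys.argv[1])
print(zlib.decompress(base64.b64decode(raw), -15).decode())
' "$SAML_REQUEST"
```

POST-binding requests are plain base64, so the inflate step isn't needed there.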
If no resolution is provided for the sign-in error, we suggest that you use the feedback textbox to inform us.
If no resolution is provided for the sign-in error, we suggest that you use the
- A statement identifying the root cause of the problem.
1. Go back to Azure AD and find the **Test single sign-on** blade.
1. In the text box above **Get resolution guidance**, paste the error message.
-1. Click **Get resolution guidance** to display steps for resolving the issue. The guidance might require information from the SAML request or SAML response. If you're not using the My Apps Secure Sign-in Extension, you might need a tool such as [Fiddler](https://www.telerik.com/fiddler) to retrieve the SAML request and response.
-1. Verify that the destination in the SAML request corresponds to the SAML Single Sign-On Service URL obtained from Azure AD.
-1. Verify the issuer in the SAML request is the same identifier you have configured for the application in Azure AD. Azure AD uses the issuer to find an application in your directory.
+1. Select **Get resolution guidance** to display steps for resolving the issue. The guidance might require information from the SAML request or SAML response. If you're not using the My Apps Secure Sign-in Extension, you might need a tool such as [Fiddler](https://www.telerik.com/fiddler) to retrieve the SAML request and response.
+1. Verify that the destination in the SAML request corresponds to the SAML Single Sign-on Service URL obtained from Azure AD.
+1. Verify the issuer in the SAML request is the same identifier you've configured for the application in Azure AD. Azure AD uses the issuer to find an application in your directory.
1. Verify AssertionConsumerServiceURL is where the application expects to receive the SAML token from Azure AD. You can configure this value in Azure AD, but it's not mandatory if it's part of the SAML request.

## Resolve a sign-in error on the application page
-You might sign in successfully and then see an error on the application's page. This occurs when Azure AD issued a token to the application, but the application does not accept the response.
+You might sign in successfully and then see an error on the application's page. This occurs when Azure AD issued a token to the application, but the application doesn't accept the response.
To resolve the error, follow these steps, or watch this [short video about how to use Azure AD to troubleshoot SAML SSO](https://www.youtube.com/watch?v=poQCJK0WPUk&list=PLLasX02E8BPBm1xNMRdvP6GtA6otQUqp0&index=8):

1. If the application is in the Azure AD Gallery, verify that you've followed all the steps for integrating the application with Azure AD. To find the integration instructions for your application, see the [list of SaaS application integration tutorials](../saas-apps/tutorial-list.md).
1. Retrieve the SAML response.
- - If the My Apps Secure Sign-in extension is installed, from the **Test single sign-on** blade, click **download the SAML response**.
- - If the extension is not installed, use a tool such as [Fiddler](https://www.telerik.com/fiddler) to retrieve the SAML response.
+ - If the My Apps Secure Sign-in extension is installed, from the **Test single sign-on** blade, select **download the SAML response**.
+ - If the extension isn't installed, use a tool such as [Fiddler](https://www.telerik.com/fiddler) to retrieve the SAML response.
1. Notice these elements in the SAML response token:
   - User unique identifier of NameID value and format
   - Claims issued in the token
To resolve the error, follow these steps, or watch this [short video about how t
For more information on the SAML response, see [Single Sign-on SAML protocol](../develop/single-sign-on-saml-protocol.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json).
-1. Now that you have reviewed the SAML response, see [Error on an application's page after signing in](application-sign-in-problem-application-error.md) for guidance on how to resolve the problem.
+1. Now that you've reviewed the SAML response, see [Error on an application's page after signing in](application-sign-in-problem-application-error.md) for guidance on how to resolve the problem.
1. If you're still not able to sign in successfully, you can ask the application vendor what is missing from the SAML response.

## Next steps
active-directory Howto Saml Token Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/howto-saml-token-encryption.md
Previously updated : 03/13/2020 Last updated : 05/27/2022
When you configure a keyCredential using Graph, PowerShell, or in the applicatio
1. From the Azure portal, go to **Azure Active Directory > App registrations**.
-1. Select **All apps** from the dropdown to show all apps, and then select the enterprise application that you want to configure.
+1. Select the **All apps** tab to show all apps, and then select the application that you want to configure.
1. In the application's page, select **Manifest** to edit the [application manifest](../develop/reference-app-manifest.md).
active-directory Review Admin Consent Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/review-admin-consent-requests.md
Previously updated : 03/22/2021 Last updated : 05/27/2022
# Review admin consent requests
-In this article, you learn how to review and take action on admin consent requests. To review and act on consent requests, you must be designated as a reviewer. As a reviewer, you only see admin consent requests that were created after you were designated as a reviewer.
+In this article, you learn how to review and take action on admin consent requests. To review and act on consent requests, you must be designated as a reviewer. As a reviewer, you can view all admin consent requests, but you can only act on requests that were created after you were designated as a reviewer.
## Prerequisites
To review the admin consent requests and take action:
1. In the filter search box, type and select **Azure Active Directory**.
1. From the navigation menu, select **Enterprise applications**.
1. Under **Activity**, select **Admin consent requests**.
-1. Select the application that is being requested.
-1. Review details about the request:
+1. Select the **My Pending** tab to view and act on the pending requests.
+1. Select the application that is being requested from the list.
+1. Review details about the request:
+ - To view the application details, select the **App details** tab.
   - To see who is requesting access and why, select the **Requested by** tab.
   - To see what permissions are being requested by the application, select **Review permissions and consent**.
+ :::image type="content" source="media/configure-admin-consent-workflow/review-consent-requests.png" alt-text="Screenshot of the admin consent requests in the portal.":::
+
1. Evaluate the request and take the appropriate action:
   - **Approve the request**. To approve a request, grant admin consent to the application. Once a request is approved, all requestors are notified that they have been granted access. Approving a request allows all users in your tenant to access the application unless otherwise restricted with user assignment.
- - **Deny the request**. To deny a request, you must provide a justification that will be provided to all requestors. Once a request is denied, all requestors are notified that they have been denied access to the application. Denying a request won't prevent users from requesting admin consent to the app again in the future.
+ - **Deny the request**. To deny a request, you must provide a justification that will be provided to all requestors. Once a request is denied, all requestors are notified that they have been denied access to the application. Denying a request won't prevent users from requesting admin consent to the application again in the future.
- **Block the request**. To block a request, you must provide a justification that will be provided to all requestors. Once a request is blocked, all requestors are notified they've been denied access to the application. Blocking a request creates a service principal object for the application in your tenant in a disabled state. Users won't be able to request admin consent to the application in the future.+
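Reviewers who prefer automation can also list pending requests through Microsoft Graph. A hedged sketch (assumes an access token with the ConsentRequest.Read.All permission in the `ACCESS_TOKEN` variable):

```bash
# List admin consent requests in the tenant via Microsoft Graph.
curl -H "Authorization: Bearer $ACCESS_TOKEN" \
  "https://graph.microsoft.com/v1.0/identityGovernance/appConsent/appConsentRequests"
```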
+## Next steps
+- [Review permissions granted to apps](manage-application-permissions.md)
+- [Grant tenant-wide admin consent](grant-admin-consent.md)
active-directory Tutorial Manage Certificates For Federated Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tutorial-manage-certificates-for-federated-single-sign-on.md
Previously updated : 03/31/2022 Last updated : 05/27/2022
# Tutorial: Manage certificates for federated single sign-on
-In this article, we cover common questions and information related to certificates that Azure Active Directory (Azure AD) creates to establish federated single sign-on (SSO) to your software as a service (SaaS) applications. Add applications from the Azure AD app gallery or by using a non-gallery application template. Configure the application by using the federated SSO option.
+In this article, we cover common questions and information related to certificates that Azure Active Directory (Azure AD) creates to establish federated single sign-on (SSO) to your software as a service (SaaS) applications. Add applications from the Azure AD application gallery or by using a non-gallery application template. Configure the application by using the federated SSO option.
This tutorial is relevant only to apps that are configured to use Azure AD SSO through [Security Assertion Markup Language](https://wikipedia.org/wiki/Security_Assertion_Markup_Language) (SAML) federation.
+Using the information in this tutorial, an administrator of the application learns how to:
+
+> [!div class="checklist"]
+> * Generate certificates for gallery and non-gallery applications
+> * Customize the expiration dates for certificates
+> * Add email notification address for certificate expiration dates
+> * Renew certificates
+
+## Prerequisites
+
+- An Azure account with an active subscription. If you don't already have one, [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- One of the following roles: Global Administrator, Privileged Role Administrator, Cloud Application Administrator, or Application Administrator.
+- An enterprise application that has been configured in your Azure AD tenant.
++ ## Auto-generated certificate for gallery and non-gallery applications When you add a new application from the gallery and configure a SAML-based sign-on (by selecting **Single sign-on** > **SAML** from the application overview page), Azure AD generates a certificate for the application that is valid for three years. To download the active certificate as a security certificate (**.cer**) file, return to that page (**SAML-based sign-on**) and select a download link in the **SAML Signing Certificate** heading. You can choose between the raw (binary) certificate or the Base64 (base 64-encoded text) certificate. For gallery applications, this section might also show a link to download the certificate as federation metadata XML (an **.xml** file), depending on the requirement of the application.
Next, download the new certificate in the correct format, upload it to the appli
1. When you want to roll over to the new certificate, go back to the **SAML Signing Certificate** page, and in the newly saved certificate row, select the ellipsis (**...**) and select **Make certificate active**. The status of the new certificate changes to **Active**, and the previously active certificate changes to a status of **Inactive**. 1. Continue following the application's SAML sign-on configuration instructions that you displayed earlier, so that you can upload the SAML signing certificate in the correct encoding format.
-If your application doesn't have any validation for the certificate's expiration, and the certificate matches in both Azure Active Directory and your application, your app is still accessible despite having an expired certificate. Ensure your application can validate the certificate's expiration date.
+If your application doesn't have any validation for the certificate's expiration, and the certificate matches in both Azure Active Directory and your application, your application is still accessible despite having an expired certificate. Ensure your application can validate the certificate's expiration date.
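One quick way to check a downloaded certificate's expiration yourself is with OpenSSL (the file name here is a placeholder for the .cer file downloaded from the **SAML-based sign-on** page; add `-inform der` if you downloaded the raw binary certificate):

```bash
# Print the certificate's expiration date.
openssl x509 -in signing-cert.cer -noout -enddate

# Exit nonzero if the certificate expires within the next 30 days (2592000 seconds).
openssl x509 -in signing-cert.cer -noout -checkend 2592000
```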
## Add email notification addresses for certificate expiration
If a certificate is about to expire, you can renew it using a procedure that res
1. In the newly saved certificate row, select the ellipsis (**...**) and then select **Make certificate active**. 1. Skip the next two steps.
-1. If the app can only handle one certificate at a time, pick a downtime interval to perform the next step. (Otherwise, if the application doesnΓÇÖt automatically pick up the new certificate but can handle more than one signing certificate, you can perform the next step anytime.)
-1. Before the old certificate expires, follow the instructions in the [Upload and activate a certificate](#upload-and-activate-a-certificate) section earlier. If your application certificate isn't updated after a new certificate is updated in Azure Active Directory, authentication on your app may fail.
+1. If the application can only handle one certificate at a time, pick a downtime interval to perform the next step. (Otherwise, if the application doesn't automatically pick up the new certificate but can handle more than one signing certificate, you can perform the next step anytime.)
+1. Before the old certificate expires, follow the instructions in the [Upload and activate a certificate](#upload-and-activate-a-certificate) section earlier. If your application certificate isn't updated after a new certificate is updated in Azure Active Directory, authentication on your application may fail.
1. Sign in to the application to make sure that the certificate works correctly.
-If your application doesn't validate the certificate expiration configured in Azure Active Directory, and the certificate matches in both Azure Active Directory and your application, your app is still accessible despite having an expired certificate. Ensure your application can validate certificate expiration.
+If your application doesn't validate the certificate expiration configured in Azure Active Directory, and the certificate matches in both Azure Active Directory and your application, your application is still accessible despite having an expired certificate. Ensure your application can validate certificate expiration.
## Related articles

-- [Tutorials for integrating SaaS applications with Azure Active Directory](../saas-apps/tutorial-list.md)
- [Application management with Azure Active Directory](what-is-application-management.md)
- [Single sign-on to applications in Azure Active Directory](what-is-single-sign-on.md)
- [Debug SAML-based single sign-on to applications in Azure Active Directory](./debug-saml-sso-issues.md)
active-directory Security Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/security-planning.md
Attackers might try to target privileged accounts so that they can disrupt the i
* Impersonation attacks * Credential theft attacks such as keystroke logging, Pass-the-Hash, and Pass-The-Ticket
-By deploying privileged access workstations, you can reduce the risk that administrators enter their credentials in a desktop environment that hasn't been hardened. For more information, see [Privileged Access Workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/).
+By deploying privileged access workstations, you can reduce the risk that administrators enter their credentials in a desktop environment that hasn't been hardened. For more information, see [Privileged Access Workstations](/security/compass/overview).
#### Review National Institute of Standards and Technology recommendations for handling incidents
active-directory Github Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/github-provisioning-tutorial.md
For more information, see [Assign a user or group to an enterprise app](../manag
## Configuring user provisioning to GitHub
-This section guides you through connecting your Azure AD to GitHub's SCIM provisioning API to automate provisioning of GitHub organization membership. This integration, which leverages an [OAuth app](https://docs.github.com/en/free-pro-team@latest/github/authenticating-to-github/authorizing-oauth-apps#oauth-apps-and-organizations), automatically adds, manages, and removes members' access to a GitHub Enterprise Cloud organization based on user and group assignment in Azure AD. When users are [provisioned to a GitHub organization via SCIM](https://docs.github.com/en/free-pro-team@latest/rest/reference/scim#provision-and-invite-a-scim-user), an email invitation is sent to the user's email address.
+This section guides you through connecting your Azure AD to GitHub's SCIM provisioning API to automate provisioning of GitHub organization membership. This integration, which leverages an [OAuth app](https://docs.github.com/en/free-pro-team@latest/github/authenticating-to-github/authorizing-oauth-apps#oauth-apps-and-organizations), automatically adds, manages, and removes members' access to a GitHub Enterprise Cloud organization based on user and group assignment in Azure AD. When users are [provisioned to a GitHub organization via SCIM](https://docs.github.com/en/rest/enterprise-admin/scim), an email invitation is sent to the user's email address.
### Configure automatic user account provisioning to GitHub in Azure AD
active-directory Credential Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/credential-design.md
# How to customize your verifiable credentials (preview) + Verifiable credentials are made up of two components, the rules and display files. The rules file determines what the user needs to provide before they receive a verifiable credential. The display file controls the branding of the credential and styling of the claims. In this guide, we will explain how to modify both files to meet the requirements of your organization. > [!IMPORTANT]
active-directory Decentralized Identifier Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/decentralized-identifier-overview.md
# Introduction to Azure Active Directory Verifiable Credentials (preview) + > [!IMPORTANT] > Azure Active Directory Verifiable Credentials is currently in public preview. > This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
The scenario we use to explain how VCs work involves:
Today, Alice provides a username and password to log onto WoodgroveΓÇÖs networked environment. Woodgrove is deploying a verifiable credential solution to provide a more manageable way for Alice to prove that she is an employee of Woodgrove. Proseware accepts verifiable credentials issued by Woodgrove as proof of employment to offer corporate discounts as part of their corporate discount program.
-Alice requests Woodgrove Inc for a proof of employment verifiable credential. Woodgrove Inc attests Alice's identiy and issues a signed verfiable credential that Alice can accept and store in her digital wallet application. Alice can now present this verifiable credential as a proof of employement on the Proseware site. After a succesfull presentation of the credential, Prosware offers discount to Alice and the transaction is logged in Alice's wallet application so that she can track where and to whom she has presented her proof of employment verifiable credential.
+Alice requests a proof of employment verifiable credential from Woodgrove Inc. Woodgrove Inc attests Alice's identity and issues a signed verifiable credential that Alice can accept and store in her digital wallet application. Alice can now present this verifiable credential as proof of employment on the Proseware site. After a successful presentation of the credential, Proseware offers a discount to Alice, and the transaction is logged in Alice's wallet application so that she can track where and to whom she has presented her proof of employment verifiable credential.
![microsoft-did-overview](media/decentralized-identifier-overview/did-overview.png)
There are three primary actors in the verifiable credential solution. In the fol
- **Step 1**, the **user** requests a verifiable credential from an issuer. - **Step 2**, the **issuer** of the credential attests that the proof the user provided is accurate and creates a verifiable credential signed with their DID and the userΓÇÖs DID is the subject.-- **In Step 3**, the user signs a verifiable presentation (VP) with their DID and sends to the **verifier.** The verifier then validates of the credential by matching with the public key placed in the DPKI.
+- **In Step 3**, the user signs a verifiable presentation (VP) with their DID and sends it to the **verifier**. The verifier then validates the credential by matching it with the public key placed in the DPKI.
The roles in this scenario are:
active-directory Get Started Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/get-started-request-api.md
# Request Service REST API (preview) + Azure Active Directory (Azure AD) Verifiable Credentials includes the Request Service REST API. This API allows you to issue and verify credentials. This article shows you how to start using the Request Service REST API. > [!IMPORTANT]
active-directory How To Create A Free Developer Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-create-a-free-developer-account.md
# How to create a free Azure Active Directory developer tenant + > [!IMPORTANT] > Azure Active Directory Verifiable Credentials is currently in public preview. > This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
active-directory How To Dnsbind https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-dnsbind.md
# Link your domain to your Decentralized Identifier (DID) (preview) + > [!IMPORTANT] > Azure Active Directory Verifiable Credentials is currently in public preview. > This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
active-directory How To Issuer Revoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-issuer-revoke.md
# Revoke a previously issued verifiable credential (preview) + As part of the process of working with verifiable credentials (VCs), you not only have to issue credentials, but sometimes you also have to revoke them. In this article we go over the **Status** property part of the VC specification and take a closer look at the revocation process, why we may want to revoke credentials and some data and privacy implications. > [!IMPORTANT]
active-directory How To Opt Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-opt-out.md
# Opt out of the verifiable credentials (preview) + In this article: - The reason why you may need to opt out.
active-directory Introduction To Verifiable Credentials Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/introduction-to-verifiable-credentials-architecture.md
# Azure AD Verifiable Credentials architecture overview (preview) + > [!IMPORTANT] > Azure Active Directory Verifiable Credentials is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
active-directory Issuance Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/issuance-request-api.md
# Request Service REST API issuance specification (preview) + Azure Active Directory (Azure AD) Verifiable Credentials includes the Request Service REST API. This API allows you to issue and verify a credential. This article specifies the Request Service REST API for an issuance request. ## HTTP request
active-directory Issuer Openid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/issuer-openid.md
# Issuer service communication examples (preview) + The Azure AD Verifiable Credential service can issue verifiable credentials by retrieving claims from an ID token generated by your organization's OpenID compliant identity provider. This article instructs you on how to set up your identity provider so Authenticator can communicate with it and retrieve the correct ID Token to pass to the issuing service. > [!IMPORTANT]
active-directory Plan Issuance Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/plan-issuance-solution.md
# Plan your Azure Active Directory Verifiable Credentials issuance solution (preview) + >[!IMPORTANT] > Azure Active Directory Verifiable Credentials is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
active-directory Plan Verification Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/plan-verification-solution.md
# Plan your Azure Active Directory Verifiable Credentials verification solution (preview) + >[!IMPORTANT] > Azure Active Directory Verifiable Credentials is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
active-directory Presentation Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/presentation-request-api.md
# Request Service REST API presentation specification (preview) + Azure Active Directory (Azure AD) Verifiable Credentials includes the Request Service REST API. This API allows you to issue and verify a credential. This article specifies the Request Service REST API for a presentation request. The presentation request asks the user to present a verifiable credential, and then verify the credential. ## HTTP request
POST https://beta.did.msidentity.com/v1.0/contoso.onmicrosoft.com/verifiablecred
Content-Type: application/json Authorization: Bearer <token>
-{
-    "includeQRCode": true,
- "callback":ΓÇ»{
-    "url": "https://www.contoso.com/api/verifier/presentationCallbac",
-    "state": "11111111-2222-2222-2222-333333333333",
-      "headers": {
-        "api-key": "an-api-key-can-go-here"
-      }
-    },
+{
+    "includeQRCode": true,
+    "callback": {
+    "url": "https://www.contoso.com/api/verifier/presentationCallback",
+    "state": "11111111-2222-2222-2222-333333333333",
+      "headers": {
+        "api-key": "an-api-key-can-go-here"
+      }
+    },
    ...
-}
-```
+}
+```
The following permission is required to call the Request Service REST API. For more information, see [Grant permissions to get access tokens](verifiable-credentials-configure-tenant.md#grant-permissions-to-get-access-tokens).
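As a sketch of the same call from the command line (the tenant in the URL is the documentation's sample tenant, and the payload file and token variable are placeholders):

```bash
# Send a presentation request to the Request Service REST API.
curl -X POST "https://beta.did.msidentity.com/v1.0/contoso.onmicrosoft.com/verifiablecredentials/request" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d @presentation-request-payload.json
```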
The presentation request payload contains information about your verifiable cred
} ```
-The payload contains the following properties.
+The payload contains the following properties.
|Parameter |Type | Description | ||||
The Request Service REST API generates several events to the callback endpoint.
If successful, this method returns a response code (*HTTP 201 Created*), and a collection of event objects in the response body. The following JSON demonstrates a successful response: ```json
-{
+{
    "requestId": "e4ef27ca-eb8c-4b63-823b-3b95140eac11",
    "url": "openid://vc/?request_uri=https://beta.did.msidentity.com/v1.0/87654321-0000-0000-0000-000000000000/verifiablecredentials/request/e4ef27ca-eb8c-4b63-823b-3b95140eac11",
    "expiry": 1633017751,
    "qrCode": "data:image/png;base64,iVBORw0KGgoA<SNIP>"
-}
+}
``` The response contains the following properties:
The response contains the following properties:
## Callback events
-The callback endpoint is called when a user scans the QR code, uses the deep link the authenticator app, or finishes the presentation process.
+The callback endpoint is called when a user scans the QR code, uses the deep link in the authenticator app, or finishes the presentation process.
|Property |Type |Description | ||||
The callback endpoint is called when a user scans the QR code, uses the deep lin
| `code` |string |The code returned when the request was retrieved by the authenticator app. Possible values: <ul><li>`request_retrieved`: The user scanned the QR code or selected the link that starts the presentation flow.</li><li>`presentation_verified`: The verifiable credential validation completed successfully.</li></ul> | | `state` |string| Returns the state value that you passed in the original payload. | | `subject`|string | The verifiable credential user DID.|
-| `issuers`| array |Returns an array of verifiable credentials requested. For each verifiable credential, it provides: </li><li>The verifiable credential type(s).</li><li>The issuer's DID</li><li>The claims retrieved.</li><li>The verifiable credential issuerΓÇÖs domain. </li><li>The verifiable credential issuerΓÇÖs domain validation status. </li></ul> |
+| `issuers`| array |Returns an array of verifiable credentials requested. For each verifiable credential, it provides: <ul><li>The verifiable credential type(s).</li><li>The issuer's DID.</li><li>The claims retrieved.</li><li>The verifiable credential issuer's domain.</li><li>The verifiable credential issuer's domain validation status.</li></ul> |
| `receipt`| string | Optional. The receipt contains the original payload sent from the wallet to the Verifiable Credentials service. The receipt should be used for troubleshooting/debugging only. The format of the receipt isn't fixed and can change based on the wallet and version used.|
-{
-    "requestId":"aef2133ba45886ce2c38974339ba1057",
-    "code":"request_retrieved",
+{
+    "requestId":"aef2133ba45886ce2c38974339ba1057",
+    "code":"request_retrieved",
    "state":"Wy0ThUz1gSasAjS1"
-}
+}
``` The following example demonstrates a callback payload after the verifiable credential presentation has successfully completed:
active-directory Verifiable Credentials Configure Issuer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-issuer.md
Last updated 05/03/2022
+ # Issue Azure AD Verifiable Credentials from an application (preview) + In this tutorial, you run a sample application from your local computer that connects to your Azure Active Directory (Azure AD) tenant. Using the application, you're going to issue and verify a verified credential expert card. In this article, you learn how to:
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
Last updated 05/06/2022
# Configure your tenant for Azure AD Verifiable Credentials (preview) + Azure Active Directory (Azure AD) Verifiable Credentials safeguards your organization with an identity solution that's seamless and decentralized. The service allows you to issue and verify credentials. For issuers, Azure AD provides a service that they can customize and use to issue their own verifiable credentials. For verifiers, the service provides a free REST API that makes it easy to request and accept verifiable credentials in your apps and services. In this tutorial, you learn how to configure your Azure AD tenant so it can use the verifiable credentials service.
active-directory Verifiable Credentials Configure Verifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-verifier.md
Previously updated : 10/08/2021 Last updated : 05/18/2022 # Customer intent: As an enterprise, we want to enable customers to manage information about themselves by using verifiable credentials. # Configure Azure AD Verifiable Credentials verifier (preview) + In [Issue Azure AD Verifiable Credentials from an application (preview)](verifiable-credentials-configure-issuer.md), you learn how to issue and verify credentials by using the same Azure Active Directory (Azure AD) tenant. In this tutorial, you go over the steps needed to present and verify your first verifiable credential: a verified credential expert card. As a verifier, you unlock privileges to subjects that possess verified credential expert cards. In this tutorial, you run a sample application from your local computer that asks you to present a verified credential expert card, and then verifies it.
active-directory Verifiable Credentials Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-faq.md
# Frequently Asked Questions (FAQ) (preview) + This page contains commonly asked questions about Verifiable Credentials and Decentralized Identity. Questions are organized into the following sections. - [Vocabulary and basics](#the-basics)
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md
# What's new in Azure Active Directory Verifiable Credentials (preview) + This article lists the latest features, improvements, and changes in the Azure Active Directory (Azure AD) Verifiable Credentials service. ## May 2022
aks Azure Ad Integration Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-ad-integration-cli.md
For best practices on identity and resource control, see [Best practices for aut
[kubernetes-webhook]:https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply [kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[complete-script]: https://github.com/Azure-Samples/azure-cli-samples/tree/master/aks/azure-ad-integration/azure-ad-integration.sh
<!-- LINKS - internal --> [az-aks-create]: /cli/azure/aks#az_aks_create
aks Csi Secrets Store Nginx Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-nginx-tls.md
Previously updated : 10/19/2021 Last updated : 05/26/2022
This article walks you through the process of securing an NGINX Ingress Controll
Importing the ingress TLS certificate to the cluster can be accomplished using one of two methods:

-- **Application** - The application deployment manifest declares and mounts the provider volume. Only when the application is deployed is the certificate made available in the cluster, and when the application is removed the secret is removed as well. This scenario fits development teams who are responsible for the application's security infrastructure and their integration with the cluster.
-- **Ingress Controller** - The ingress deployment is modified to declare and mount the provider volume. The secret is imported when ingress pods are created. The application's pods have no access to the TLS certificate. This scenario fits scenarios where one team (i.e. IT) manages and provisions infrastructure and networking components (including HTTPS TLS certificates) and other teams manage application lifecycle. In this case, ingress is specific to a single namespace/workload and is deployed in the same namespace as the application.
+- **Application** - The application deployment manifest declares and mounts the provider volume. The certificate is made available in the cluster only when the application is deployed, and when the application is removed, the secret is removed as well. This scenario fits development teams who are responsible for the application's security infrastructure and their integration with the cluster.
+- **Ingress Controller** - The ingress deployment is modified to declare and mount the provider volume. The secret is imported when ingress pods are created. The application's pods have no access to the TLS certificate. This fits scenarios where one team (for example, IT) manages and creates infrastructure and networking components (including HTTPS TLS certificates) and other teams manage application lifecycle. In this case, ingress is specific to a single namespace/workload and is deployed in the same namespace as the application.
## Prerequisites
Importing the ingress TLS certificate to the cluster can be accomplished using o
## Generate a TLS certificate ```bash
-export CERT_NAME=ingresscert
+export CERT_NAME=aks-ingress-cert
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
- -out ingress-tls.crt \
- -keyout ingress-tls.key \
- -subj "/CN=demo.test.com/O=ingress-tls"
+ -out aks-ingress-tls.crt \
+ -keyout aks-ingress-tls.key \
+ -subj "/CN=demo.azure.com/O=aks-ingress-tls"
``` ### Import the certificate to AKV ```bash export AKV_NAME="[YOUR AKV NAME]"
-openssl pkcs12 -export -in ingress-tls.crt -inkey ingress-tls.key -out $CERT_NAME.pfx
+openssl pkcs12 -export -in aks-ingress-tls.crt -inkey aks-ingress-tls.key -out $CERT_NAME.pfx
# skip Password prompt ```
az keyvault certificate import --vault-name $AKV_NAME -n $CERT_NAME -f $CERT_NAM
First, create a new namespace: ```bash
-export NAMESPACE=ingress-test
+export NAMESPACE=ingress-basic
``` ```azurecli-interactive
-kubectl create ns $NAMESPACE
+kubectl create namespace $NAMESPACE
``` Select a [method to provide an access identity][csi-ss-identity-access] and configure your SecretProviderClass YAML accordingly. Additionally:
Select a [method to provide an access identity][csi-ss-identity-access] and conf
- Be sure to use `objectType=secret`, which is the only way to obtain the private key and the certificate from AKV. - Set `kubernetes.io/tls` as the `type` in your `secretObjects` section.
-See the following for an example of what your SecretProviderClass might look like:
+See the following example of what your SecretProviderClass might look like:
```yml apiVersion: secrets-store.csi.x-k8s.io/v1
spec:
key: tls.crt parameters: usePodIdentity: "false"
+ useVMManagedIdentity: "true"
+ userAssignedIdentityID: <client id>
keyvaultName: $AKV_NAME # the name of the AKV instance objects: | array:
The application's deployment will reference the Secrets Store CSI Driver's Azu
helm install ingress-nginx/ingress-nginx --generate-name \ --namespace $NAMESPACE \ --set controller.replicaCount=2 \
- --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.nodeSelector."kubernetes\.io/os"=linux \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
- --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux
+ --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux
``` #### Bind certificate to ingress controller
The ingress controller's deployment will reference the Secrets Store CSI Drive
helm install ingress-nginx/ingress-nginx --generate-name \ --namespace $NAMESPACE \ --set controller.replicaCount=2 \
- --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
- --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.nodeSelector."kubernetes\.io/os"=linux \
+ --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \ --set controller.podLabels.aadpodidbinding=$AAD_POD_IDENTITY_NAME \ -f - <<EOF
Again, depending on your scenario, the instructions will change slightly. Follow
### Deploy the application using an application reference
-Create a file named `deployment.yaml` with the following content:
+Create a file named `aks-helloworld-one.yaml` with the following content:
```yml
apiVersion: apps/v1
kind: Deployment
metadata:
- name: busybox-one
- labels:
- app: busybox-one
+ name: aks-helloworld-one
spec:
  replicas: 1
  selector:
    matchLabels:
- app: busybox-one
+ app: aks-helloworld-one
  template:
    metadata:
      labels:
- app: busybox-one
+ app: aks-helloworld-one
    spec:
      containers:
- - name: busybox
- image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
- command:
- - "/bin/sleep"
- - "10000"
+ - name: aks-helloworld-one
+ image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
+ ports:
+ - containerPort: 80
+ env:
+ - name: TITLE
+ value: "Welcome to Azure Kubernetes Service (AKS)"
        volumeMounts:
        - name: secrets-store-inline
          mountPath: "/mnt/secrets-store"
          readOnly: true
      volumes:
+ - name: secrets-store-inline
+ csi:
+ driver: secrets-store.csi.k8s.io
+ readOnly: true
+ volumeAttributes:
+ secretProviderClass: "azure-tls"
+
+apiVersion: v1
+kind: Service
+metadata:
+ name: aks-helloworld-one
+spec:
+ type: ClusterIP
+ ports:
+ - port: 80
+ selector:
+ app: aks-helloworld-one
+```
+
+Create a file named `aks-helloworld-two.yaml` with the following content:
+
+```yml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: aks-helloworld-two
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: aks-helloworld-two
+ template:
+ metadata:
+ labels:
+ app: aks-helloworld-two
+ spec:
+ containers:
+ - name: aks-helloworld-two
+ image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
+ ports:
+ - containerPort: 80
+ env:
+ - name: TITLE
+ value: "AKS Ingress Demo"
+ volumeMounts:
- name: secrets-store-inline
- csi:
- driver: secrets-store.csi.k8s.io
- readOnly: true
- volumeAttributes:
- secretProviderClass: "azure-tls"
+ mountPath: "/mnt/secrets-store"
+ readOnly: true
+ volumes:
+ - name: secrets-store-inline
+ csi:
+ driver: secrets-store.csi.k8s.io
+ readOnly: true
+ volumeAttributes:
+ secretProviderClass: "azure-tls"
---
apiVersion: v1
kind: Service
metadata:
- name: busybox-one
+ name: aks-helloworld-two
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
- app: busybox-one
+ app: aks-helloworld-two
```
-And apply it to your cluster:
+And apply them to your cluster:
```bash
-kubectl apply -f deployment.yaml -n $NAMESPACE
+kubectl apply -f aks-helloworld-one.yaml -n $NAMESPACE
+kubectl apply -f aks-helloworld-two.yaml -n $NAMESPACE
``` Verify the Kubernetes secret has been created:
ingress-tls-csi kubernetes.io/tls
### Deploy the application using an ingress controller reference
-Create a file named `deployment.yaml` with the following content:
+Create a file named `aks-helloworld-one.yaml` with the following content:
```yml
apiVersion: apps/v1
kind: Deployment
metadata:
- name: busybox-one
- labels:
- app: busybox-one
+ name: aks-helloworld-one
spec:
  replicas: 1
  selector:
    matchLabels:
- app: busybox-one
+ app: aks-helloworld-one
  template:
    metadata:
      labels:
- app: busybox-one
+ app: aks-helloworld-one
    spec:
      containers:
- - name: busybox
- image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
- command:
- - "/bin/sleep"
- - "10000"
+ - name: aks-helloworld-one
+ image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
+ ports:
+ - containerPort: 80
+ env:
+ - name: TITLE
+ value: "Welcome to Azure Kubernetes Service (AKS)"
---
apiVersion: v1
kind: Service
metadata:
- name: busybox-one
+ name: aks-helloworld-one
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
- app: busybox-one
+ app: aks-helloworld-one
```
-And apply it to your cluster:
+Create a file named `aks-helloworld-two.yaml` with the following content:
+
+```yml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: aks-helloworld-two
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: aks-helloworld-two
+ template:
+ metadata:
+ labels:
+ app: aks-helloworld-two
+ spec:
+ containers:
+ - name: aks-helloworld-two
+ image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
+ ports:
+ - containerPort: 80
+ env:
+ - name: TITLE
+ value: "AKS Ingress Demo"
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: aks-helloworld-two
+spec:
+ type: ClusterIP
+ ports:
+ - port: 80
+ selector:
+ app: aks-helloworld-two
+```
+
+And apply them to your cluster:
```bash
-kubectl apply -f deployment.yaml -n $NAMESPACE
+kubectl apply -f aks-helloworld-one.yaml -n $NAMESPACE
+kubectl apply -f aks-helloworld-two.yaml -n $NAMESPACE
``` ## Deploy an ingress resource referencing the secret
-Finally, we can deploy a Kubernetes ingress resource referencing our secret. Create a file name `ingress.yaml` with the following content:
+Finally, we can deploy a Kubernetes ingress resource referencing our secret. Create a file named `hello-world-ingress.yaml` with the following content:
```yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-tls
  annotations:
- kubernetes.io/ingress.class: nginx
- nginx.ingress.kubernetes.io/rewrite-target: /$1
+ nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
+ ingressClassName: nginx
  tls:
  - hosts:
- - demo.test.com
+ - demo.azure.com
    secretName: ingress-tls-csi
  rules:
- - host: demo.test.com
+ - host: demo.azure.com
    http:
      paths:
- - backend:
+ - path: /hello-world-one(/|$)(.*)
+ pathType: Prefix
+ backend:
service:
- name: busybox-one
+ name: aks-helloworld-one
            port:
              number: 80
- path: /(.*)
- - backend:
+ - path: /hello-world-two(/|$)(.*)
+ pathType: Prefix
+ backend:
service:
- name: busybox-two
+ name: aks-helloworld-two
+ port:
+ number: 80
+ - path: /(.*)
+ pathType: Prefix
+ backend:
+ service:
+ name: aks-helloworld-one
            port:
              number: 80
- path: /two(/|$)(.*)
```

Make note of the `tls` section referencing the secret we've created earlier, and apply the file to your cluster:

```bash
-kubectl apply -f ingress.yaml -n $NAMESPACE
+kubectl apply -f hello-world-ingress.yaml -n $NAMESPACE
```

## Obtain the external IP address of the ingress controller
kubectl apply -f ingress.yaml -n $NAMESPACE
Use `kubectl get service` to obtain the external IP address for the ingress controller.

```bash
- kubectl get service -l app=nginx-ingress --namespace $NAMESPACE
+kubectl get service --namespace $NAMESPACE --selector app.kubernetes.io/name=ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-nginx-ingress-1588032400-controller LoadBalancer 10.0.255.157 52.xx.xx.xx 80:31293/TCP,443:31265/TCP 19m
+nginx-ingress-1588032400-controller LoadBalancer 10.0.255.157 EXTERNAL_IP 80:31293/TCP,443:31265/TCP 19m
nginx-ingress-1588032400-default-backend ClusterIP      10.0.223.214   <none>        80/TCP                       19m
```
nginx-ingress-1588032400-default-backend ClusterIP 10.0.223.214 <none>
Use `curl` to verify your ingress has been properly configured with TLS. Be sure to use the external IP you've obtained from the previous step:

```bash
-curl -v -k --resolve demo.test.com:443:52.xx.xx.xx https://demo.test.com
+curl -v -k --resolve demo.azure.com:443:EXTERNAL_IP https://demo.azure.com
+```
-# You should see output similar to the following
-* subject: CN=demo.test.com; O=ingress-tls
-* start date: Oct 15 04:23:46 2021 GMT
-* expire date: Oct 15 04:23:46 2022 GMT
-* issuer: CN=demo.test.com; O=ingress-tls
+No additional path was provided with the address, so the ingress controller defaults to the */* route. The first demo application is returned, as shown in the following condensed example output:
+
+```console
+[...]
+<!DOCTYPE html>
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+ <link rel="stylesheet" type="text/css" href="/static/default.css">
+ <title>Welcome to Azure Kubernetes Service (AKS)</title>
+[...]
+```
+
+The *-v* parameter in our `curl` command outputs verbose information, including the TLS certificate received. Half-way through your curl output, you can verify that your own TLS certificate was used. The *-k* parameter continues loading the page even though we're using a self-signed certificate. The following example shows that the *issuer: CN=demo.azure.com; O=aks-ingress-tls* certificate was used:
+
+```
+[...]
+* Server certificate:
+* subject: CN=demo.azure.com; O=aks-ingress-tls
+* start date: Oct 22 22:13:54 2021 GMT
+* expire date: Oct 22 22:13:54 2022 GMT
+* issuer: CN=demo.azure.com; O=aks-ingress-tls
* SSL certificate verify result: self signed certificate (18), continuing anyway.
+[...]
+```
+
+Now add the */hello-world-two* path to the address, such as `https://demo.azure.com/hello-world-two`. The second demo application with the custom title is returned, as shown in the following condensed example output:
+
+```
+curl -v -k --resolve demo.azure.com:443:EXTERNAL_IP https://demo.azure.com/hello-world-two
+
+[...]
+<!DOCTYPE html>
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+ <link rel="stylesheet" type="text/css" href="/static/default.css">
+ <title>AKS Ingress Demo</title>
+[...]
```

<!-- LINKS INTERNAL -->
aks Quick Windows Container Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-cli.md
spec:
limits:
  cpu: 1
  memory: 800M
- requests:
- cpu: .1
- memory: 300M
ports:
  - containerPort: 80
selector:
aks Quick Windows Container Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-powershell.md
spec:
limits:
  cpu: 1
  memory: 800M
- requests:
- cpu: .1
- memory: 300M
ports:
  - containerPort: 80
selector:
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md
the link in the **Version** column to view the source on the
[!INCLUDE [azure-policy-reference-rp-aks-containerservice](../../includes/policy/reference/byrp/microsoft.containerservice.md)]
-### AKS Engine
-
## Next steps

- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
aks Release Tracker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/release-tracker.md
description: Learn how to determine which Azure regions have the weekly AKS rele
Last updated 05/24/2022
++
# AKS release tracker
+> [!NOTE]
+> The AKS release tracker is currently not accessible. When the feature is fully released, this article will be updated to include access instructions.
+
AKS releases weekly rounds of fixes and feature and component updates that affect all clusters and customers. However, these releases can take up to two weeks to roll out to all regions from the initial time of shipping due to Azure Safe Deployment Practices (SDP). It's important for customers to know when a particular AKS release reaches their region, and the AKS release tracker provides these details in real time by version and region.

## Why release tracker?
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-managed-identity.md
A successful cluster creation using your own kubelet managed identity contains t
},
```
-### Update an existing cluster using kubelet identity (Preview)
+### Update an existing cluster using kubelet identity
Update kubelet identity on an existing cluster with your existing identities.
-#### Install the `aks-preview` Azure CLI
-
-You also need the *aks-preview* Azure CLI extension version 0.5.64 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
+#### Make sure the CLI version is 2.37.0 or later
```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
+# Check the version of Azure CLI modules
+az version
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
+# Upgrade the version to make sure it is 2.37.0 or later
+az upgrade
```
-#### Updating your cluster with kubelet identity (Preview)
+#### Updating your cluster with kubelet identity
Now you can use the following command to update your cluster with your existing identities. Provide the control plane identity resource ID via `assign-identity` and the kubelet managed identity via `assign-kubelet-identity`:
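The excerpt above ends before the command itself. As a sketch of what such an update can look like (the resource names are placeholders, and the flags require Azure CLI 2.37.0 or later):

```azurecli
az aks update \
    --resource-group myResourceGroup \
    --name myManagedCluster \
    --enable-managed-identity \
    --assign-identity <control-plane-identity-resource-id> \
    --assign-kubelet-identity <kubelet-identity-resource-id>
```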
aks Use Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-network-policies.md
This article shows you how to install the network policy engine and create Kuber
You need the Azure CLI version 2.0.61 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-> [!TIP]
-> If you used the network policy feature during preview, we recommend that you [create a new cluster](#create-an-aks-cluster-and-enable-network-policy).
->
-> If you wish to continue using existing test clusters that used network policy during preview, upgrade your cluster to a new Kubernetes versions for the latest GA release and then deploy the following YAML manifest to fix the crashing metrics server and Kubernetes dashboard. This fix is only required for clusters that used the Calico network policy engine.
->
-> As a security best practice, [review the contents of this YAML manifest][calico-aks-cleanup] to understand what is deployed into the AKS cluster.
->
-> `kubectl delete -f https://raw.githubusercontent.com/Azure/aks-engine/master/docs/topics/calico-3.3.1-cleanup-after-upgrade.yaml`
-
## Overview of network policy

All pods in an AKS cluster can send and receive traffic without limitations, by default. To improve security, you can define rules that control the flow of traffic. Back-end applications are often only exposed to required front-end services, for example. Or, database components are only accessible to the application tiers that connect to them.
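To make that concrete, here's a minimal sketch of a network policy that admits traffic to back-end pods only from front-end pods. The namespace and labels are illustrative, not from this article:

```bash
# Illustrative only: allow ingress to app=backend pods solely from app=frontend pods
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
EOF
```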
api-management Api Management Howto Create Or Invite Developers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-create-or-invite-developers.md
When a developer is invited, an email is sent to the developer. This email is ge
Once the invitation is accepted, the account becomes active.
+The invitation link remains active for two days.
+
## <a name="block-developer"> </a>Deactivate or reactivate a developer account

By default, newly created or invited developer accounts are **Active**. To deactivate a developer account, click **Block**. To reactivate a blocked developer account, click **Activate**. A blocked developer account can't access the developer portal or call any APIs. To delete a user account, click **Delete**.
app-service Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python.md
Title: 'Quickstart: Deploy a Python (Django or Flask) web app to Azure'
description: Get started with Azure App Service by deploying your first Python app to Azure App Service.
Last updated 03/22/2022
--++
ms.devlang: python
automation Automation Hrw Run Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md
Follow the next steps to use a managed identity for Azure resources on a Hybrid
For instance, a runbook with `Get-AzVM` can return all the VMs in the subscription with no call to `Connect-AzAccount`, and the user would be able to access Azure resources without having to authenticate within that runbook. You can disable context autosave in Azure PowerShell, as detailed [here](/powershell/azure/context-persistence?view=azps-7.3.2#save-azure-contexts-across-powershell-sessions).
-### Use runbook authentication with Run As account
+### Use runbook authentication with Hybrid Worker Credentials
-Instead of having your runbook provide its own authentication to local resources, you can specify a Run As account for a Hybrid Runbook Worker group. To specify a Run As account, you must define a [credential asset](./shared-resources/credentials.md) that has access to local resources. These resources include certificate stores and all runbooks run under these credentials on a Hybrid Runbook Worker in the group.
+Instead of having your runbook provide its own authentication to local resources, you can specify Hybrid Worker Credentials for a Hybrid Runbook Worker group. To specify Hybrid Worker Credentials, you must define a [credential asset](./shared-resources/credentials.md) that has access to local resources. These resources include certificate stores, and all runbooks run under these credentials on a Hybrid Runbook Worker in the group.
- The user name for the credential must be in one of the following formats:
Instead of having your runbook provide its own authentication to local resources
- To use the PowerShell runbook **Export-RunAsCertificateToHybridWorker**, you need to install the Az modules for Azure Automation on the local machine.
-#### Use a credential asset to specify a Run As account
+#### Use a credential asset for a Hybrid Runbook Worker group
-Use the following procedure to specify a Run As account for a Hybrid Runbook Worker group:
+By default, Hybrid jobs run under the context of the System account. However, to run Hybrid jobs under a different credential asset, follow these steps:
1. Create a [credential asset](./shared-resources/credentials.md) with access to local resources.
1. Open the Automation account in the Azure portal.
1. Select **Hybrid Worker Groups**, and then select the specific group.
-1. Select **All settings**, followed by **Hybrid worker group settings**.
-1. Change the value of **Run As** from **Default** to **Custom**.
+1. Select **Settings**.
+1. Change the value of **Hybrid Worker credentials** from **Default** to **Custom**.
1. Select the credential and click **Save**.
+1. If the following permissions are not assigned for Custom users, jobs might get suspended.
+Use your discretion in assigning the elevated permissions corresponding to the following registry keys/folders:
+
+**Registry path**
+
+- HKLM\SYSTEM\CurrentControlSet\Services\EventLog (read) </br>
+- HKLM\SYSTEM\CurrentControlSet\Services\WinSock2\Parameters (full access) </br>
+- HKLM\SOFTWARE\Microsoft\Wbem\CIMOM (full access) </br>
+- HKLM\Software\Policies\Microsoft\SystemCertificates\Root (full access) </br>
+- HKLM\Software\Microsoft\SystemCertificates (full access) </br>
+- HKLM\Software\Microsoft\EnterpriseCertificates (full access) </br>
+- HKLM\software\Microsoft\HybridRunbookWorker (full access) </br>
+- HKLM\software\Microsoft\HybridRunbookWorkerV2 (full access) </br>
+- HKEY_CURRENT_USER\SOFTWARE\Policies\Microsoft\SystemCertificates\Disallowed (full access) </br>
+- HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup\PnpLockdownFiles (full access) </br>
+
+**Folders**
+- C:\ProgramData\AzureConnectedMachineAgent\Tokens (read) </br>
+- C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows\0.1.0.18\HybridWorkerPackage\HybridWorkerAgent (full access)
## <a name="runas-script"></a>Install Run As account certificate
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md
To create a hybrid worker group in the Azure portal, follow these steps:
1. From the **Basics** tab, in the **Name** text box, enter a name for your Hybrid worker group.
-1. For the **Use run as credential** option:
+1. For the **Use Hybrid Worker Credentials** option:
- - If you select **No**, the hybrid extension will be installed using the local system account.
- - If you select **Yes**, then from the drop-down list, select the credential asset.
+ - If you select **Default**, the hybrid extension will be installed using the local system account.
+ - If you select **Custom**, then from the drop-down list, select the credential asset.
1. Select **Next** to advance to the **Hybrid workers** tab. You can select Azure virtual machines or Azure Arc-enabled servers to be added to this Hybrid worker group. If you don't select any machines, an empty Hybrid worker group will be created. You can still add machines later.
To install and use Hybrid Worker extension using REST API, follow these steps. T
1. Get the automation account details using this API call.

   ```http
- GET https://westcentralus.management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/automationAccounts/{automationAccountName}?api-version=2021-06-22
+ GET https://westcentralus.management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/automationAccounts/HybridWorkerExtension?api-version=2021-06-22
```
To install and use Hybrid Worker extension using REST API, follow these steps. T
1. Install the Hybrid Worker Extension on Azure VM by using the following API call.

   ```http
- PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/extensions/{vmExtensionName}?api-version=2021-11-01
+ PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/extensions/HybridWorkerExtension?api-version=2021-11-01
```
azure-app-configuration Concept Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-soft-delete.md
Purge is the operation to permanently delete the stores in a soft deleted state,
## Purge protection

With Purge protection enabled, soft deleted stores can't be purged during the retention period. If disabled, the soft deleted store can be purged before the retention period expires. Once purge protection is enabled on a store, it can't be disabled.
-## Permissions to recover or purge store
+## Permissions to recover a deleted store
-A user has to have below permissions to recover or purge a soft-deleted app configuration store. The built-in Contributor and Owner roles already have the required permissions to recover and purge.
+- `Microsoft.AppConfiguration/configurationStores/write`
-- Permission to recover - `Microsoft.AppConfiguration/configurationStores/write`
+To recover a deleted App Configuration store, the `Microsoft.AppConfiguration/configurationStores/write` permission is needed. The built-in "Owner" and "Contributor" roles contain this permission by default. The permission can be assigned at the subscription or resource group scope.
-- Permission to purge - `Microsoft.AppConfiguration/configurationStores/action`
+## Permissions to read and purge deleted stores
+
+* Read: `Microsoft.AppConfiguration/locations/deletedConfigurationStores/read`
+* Purge: `Microsoft.AppConfiguration/locations/deletedConfigurationStores/purge/action`
+
+To list deleted App Configuration stores, or to get an individual store by name, the `Microsoft.AppConfiguration/locations/deletedConfigurationStores/read` permission is needed. To purge a deleted App Configuration store, the `Microsoft.AppConfiguration/locations/deletedConfigurationStores/purge/action` permission is needed. The built-in "Owner" and "Contributor" roles contain these permissions by default. Permissions for reading and purging deleted App Configuration stores must be assigned at the subscription level, because deleted configuration stores exist outside of individual resource groups.
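For example, with a sufficiently recent Azure CLI these operations can be run from the command line; a sketch with a placeholder store name (verify the commands are available in your CLI version):

```azurecli
# List soft-deleted App Configuration stores in the subscription
az appconfig list-deleted

# Recover a soft-deleted store
az appconfig recover --name MyAppConfigStore

# Permanently delete (purge) a soft-deleted store
az appconfig purge --name MyAppConfigStore
```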
## Billing implications
azure-app-configuration Howto Recover Deleted Stores In Azure App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-recover-deleted-stores-in-azure-app-configuration.md
To learn more about the concept of soft delete feature, see [Soft-Delete in Azur
* An Azure subscription - [create one for free](https://azure.microsoft.com/free/dotnet)
-* Refer to the [Soft-Delete in Azure App Configuration](./concept-soft-delete.md#permissions-to-recover-or-purge-store) for permissions requirements.
+* Refer to the [Soft-Delete in Azure App Configuration](./concept-soft-delete.md#permissions-to-recover-a-deleted-store) section for permissions requirements.
## Set retention policy and enable purge protection at store creation
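As a sketch of what that can look like from the Azure CLI (flag availability depends on your CLI version; the names here are placeholders):

```azurecli
az appconfig create \
    --name MyAppConfigStore \
    --resource-group MyResourceGroup \
    --location eastus \
    --sku standard \
    --retention-days 7 \
    --enable-purge-protection true
```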
azure-arc Upload Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upload-logs.md
description: Upload logs for Azure Arc-enabled data services to Azure Monitor
--++
Previously updated : 11/03/2021
Last updated : 05/27/2022
echo $WORKSPACE_SHARED_KEY
With the environment variables set, you can upload logs to the log workspace.
-## Upload logs to Azure Log Analytics Workspace in direct mode
+## Configure automatic upload of logs to Azure Log Analytics Workspace in direct mode using `az` CLI
-In the **direct** connected mode, Logs upload can only be setup in **automatic** mode. This automatic upload of metrics can be setup either during deployment or post deployment of Azure Arc data controller.
+In the **direct** connected mode, Logs upload can only be set up in **automatic** mode. This automatic upload of metrics can be set up either during deployment or post deployment of Azure Arc data controller.
### Enable automatic upload of logs to Azure Log Analytics Workspace
az arcdata dc update --name <name of datacontroller> --resource-group <resource
az arcdata dc update --name arcdc --resource-group <myresourcegroup> --auto-upload-logs true
```
-### Disable automatic upload of logs to Azure Log Analytics Workspace
+### Disable automatic upload of logs to Azure Log Analytics Workspace
If the automatic upload of logs was enabled during Azure Arc data controller deployment, run the below command to disable automatic upload of logs.

```
az arcdata dc update --name <name of datacontroller> --resource-group <resource
az arcdata dc update --name arcdc --resource-group <myresourcegroup> --auto-upload-logs false
```
-## Upload logs to Azure Monitor in indirect mode
+## Configure automatic upload of logs to Azure Log Analytics Workspace in **direct** mode using `kubectl` CLI
+
+### Enable automatic upload of logs to Azure Log Analytics Workspace
+
+To configure automatic upload of logs using `kubectl`:
+
+- Ensure the Log Analytics Workspace is created as described in the earlier section.
+- Create a Kubernetes secret for the Log Analytics workspace using the `WorkspaceID` and `SharedAccessKey` as follows:
+
+```
+apiVersion: v1
+data:
+ primaryKey: <base64 encoding of Azure Log Analytics workspace primary key>
+ workspaceId: <base64 encoding of Azure Log Analytics workspace Id>
+kind: Secret
+metadata:
+ name: log-workspace-secret
+ namespace: <your datacontroller namespace>
+type: Opaque
+```
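The `workspaceId` and `primaryKey` values must be base64 encoded, for example with a shell one-liner. This assumes the workspace ID and shared key are held in the `WORKSPACE_ID` and `WORKSPACE_SHARED_KEY` environment variables used earlier in this article:

```console
# Produce the base64-encoded values to paste into the secret manifest
echo -n "$WORKSPACE_ID" | base64
echo -n "$WORKSPACE_SHARED_KEY" | base64
```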
+
+- To create the secret, run:
+
+ ```console
+ kubectl apply -f <myLogAnalyticssecret.yaml> --namespace <mynamespace>
+ ```
+
+- To open the settings as a yaml file in the default editor, run:
+
+ ```console
+ kubectl edit datacontroller <DC name> --namespace <namespace>
+ ```
+
+- Update the `autoUploadLogs` property to `"true"`, and save the file.
+++
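If you'd rather not open an editor, a `kubectl patch` equivalent might look like the following sketch. The exact spec path for `autoUploadLogs` is an assumption here; confirm it against your data controller's YAML before relying on it:

```console
# Assumed spec path; verify with: kubectl get datacontroller <DC name> --namespace <namespace> -o yaml
kubectl patch datacontroller <DC name> --namespace <namespace> \
  --type merge \
  --patch '{"spec":{"settings":{"azure":{"autoUploadLogs":"true"}}}}'
```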
+### Disable automatic upload of logs to Azure Log Analytics Workspace
+
+To disable automatic upload of logs, run:
+
+```console
+kubectl edit datacontroller <DC name> --namespace <namespace>
+```
+
+- Update the `autoUploadLogs` property to `"false"`, and save the file.
+
+## Upload logs to Azure Monitor in **indirect** mode
To upload logs for your Azure Arc-enabled SQL managed instances and Azure Arc-enabled PostgreSQL Hyperscale server groups, run the following CLI commands:
Once your logs are uploaded, you should be able to query them using the log quer
If you want to upload metrics and logs on a scheduled basis, you can create a script and run it on a timer every few minutes. Below is an example of automating the uploads using a Linux shell script.
-In your favorite text/code editor, add the following script to the file and save as a script executable file such as .sh (Linux/Mac) or .cmd, .bat, .ps1.
+In your favorite text/code editor, add the following script to the file and save as a script executable file - such as .sh for Linux/Mac, or .cmd, .bat, or .ps1 for Windows.
```azurecli
az arcdata dc export --type logs --path logs.json --force --k8s-namespace arc
```
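To run such a script on a timer on Linux, one option is a cron entry; an illustrative example, assuming the script is saved as `/usr/local/bin/upload-arc-logs.sh`:

```console
# Run the upload script every 20 minutes and append its output to a log file
*/20 * * * * /usr/local/bin/upload-arc-logs.sh >> /var/log/upload-arc-logs.log 2>&1
```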
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
A .NET isolated function project is basically a .NET console app project that ta
+ Program.cs file that's the entry point for the app.
+ Any code files [defining your functions](#bindings).
-For complete examples, see the [.NET 6 isolated sample project](https://github.com/Azure/azure-functions-dotnet-worker/tree/main/samples/FunctionApp) and the [.NET Framework 4.8 isolated sample project](https://github.com/Azure/azure-functions-dotnet-worker/tree/main/samples/NetFxWorker).
+For complete examples, see the [.NET 6 isolated sample project](https://github.com/Azure/azure-functions-dotnet-worker/tree/main/samples/FunctionApp) and the [.NET Framework 4.8 isolated sample project](https://go.microsoft.com/fwlink/p/?linkid=2197310).
> [!NOTE]
> To be able to publish your isolated function project to either a Windows or a Linux function app in Azure, you must set a value of `dotnet-isolated` in the remote [FUNCTIONS_WORKER_RUNTIME](functions-app-settings.md#functions_worker_runtime) application setting. To support [zip deployment](deployment-zip-push.md) and [running from the deployment package](run-functions-from-deployment-package.md) on Linux, you also need to update the `linuxFxVersion` site config setting to `DOTNET-ISOLATED|6.0`. To learn more, see [Manual version updates on Linux](set-runtime-version.md#manual-version-updates-on-linux).
azure-functions Functions Add Output Binding Storage Queue Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-storage-queue-cli.md
mvn azure-functions:deploy
# [Browser](#tab/browser)
- Copy the complete **Invoke URL** shown in the output of the publish command into a browser address bar, appending the query parameter `&name=Functions`. The browser should display similar output as when you ran the function locally.
-
- ![The output of the function runs on Azure in a browser](./media/functions-add-output-binding-storage-queue-cli/function-test-cloud-browser.png)
+ Copy the complete **Invoke URL** shown in the output of the publish command into a browser address bar, appending the query parameter `&name=Functions`. The browser should display the same output as when you ran the function locally.
# [curl](#tab/curl)
- Run [`curl`](https://curl.haxx.se/) with the **Invoke URL**, appending the parameter `&name=Functions`. The output of the command should be the text, "Hello Functions."
-
- ![The output of the function runs on Azure using CURL](./media/functions-add-output-binding-storage-queue-cli/function-test-cloud-curl.png)
+ Run [`curl`](https://curl.haxx.se/) with the **Invoke URL**, appending the parameter `&name=Functions`. The output should be the same as when you ran the function locally.
azure-functions Functions Bindings Mobile Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-mobile-apps.md
The following table explains the binding configuration properties that you set i
| **name**| n/a | Name of output parameter in function signature.|
|**tableName** |**TableName**|Name of the mobile app's data table|
|**connection**|**MobileAppUriSetting**|The name of an app setting that has the mobile app's URL. The function uses this URL to construct the required REST operations against your mobile app. Create an app setting in your function app that contains the mobile app's URL, then specify the name of the app setting in the `connection` property in your input binding. The URL looks like `http://<appname>.azurewebsites.net`.
-|**apiKey**|**ApiKeySetting**|The name of an app setting that has your mobile app's API key. Provide the API key if you implement an API key in your Node.js mobile app backend, or [implement an API key in your .NET mobile app backend](https://github.com/Azure/azure-mobile-apps-net-server/wiki/Implementing-Application-Key). To provide the key, create an app setting in your function app that contains the API key, then add the `apiKey` property in your input binding with the name of the app setting. |
+|**apiKey**|**ApiKeySetting**|The name of an app setting that has your mobile app's API key. Provide the API key if you implement an API key in your Node.js mobile app backend, or implement an API key in your .NET mobile app backend. To provide the key, create an app setting in your function app that contains the API key, then add the `apiKey` property in your input binding with the name of the app setting. |
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
azure-functions Functions Bindings Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob.md
Functions 1.x apps automatically have a reference to the extension.
## host.json settings
-This section describes the function app configuration settings available for functions that this binding. These settings only apply when using extension version 5.0.0 and higher. The example host.json file below contains only the version 2.x+ settings for this binding. For more information about function app configuration settings in versions 2.x and later versions, see [host.json reference for Azure Functions](functions-host-json.md).
+This section describes the function app configuration settings available for functions that use this binding. These settings only apply when using extension version 5.0.0 and higher. The example host.json file below contains only the version 2.x+ settings for this binding. For more information about function app configuration settings in versions 2.x and later versions, see [host.json reference for Azure Functions](functions-host-json.md).
> [!NOTE]
> This section doesn't apply to extension versions before 5.0.0. For those earlier versions, there aren't any function app-wide configuration settings for blobs.
azure-monitor Alerts Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log.md
You can also [create log alert rules using Azure Resource Manager templates](../
|Field |Description | |||
- |Dimension name|Dimensions can be either number or string columns. Dimensions are used to monitor specific time series and provide context to a fired alert.<br>Splitting on the Azure Resource ID column makes the specified resource into the alert target. If an Resource ID column is detected, it is selected automatically and changes the context of the fired alert to the record's resource. |
+ |Dimension name|Dimensions can be either number or string columns. Dimensions are used to monitor specific time series and provide context to a fired alert.<br>Splitting on the Azure Resource ID column makes the specified resource into the alert target. If a Resource ID column is detected, it is selected automatically and changes the context of the fired alert to the record's resource. |
|Operator|The operator used on the dimension name and value. |
|Dimension values|The dimension values are based on data from the last 48 hours. Select **Add custom value** to add custom dimension values. |
You can also [create log alert rules using Azure Resource Manager templates](../
> [!NOTE]
> The section above describes creating alert rules using the new alert rule wizard.
> The new alert rule experience is a little different than the old experience. Please note these changes:
-> - Previously, search results were included in the payloads of the triggered alert and its associated notifications. This was a limited and error prone solution. To get detailed context information about the alert so that you can decide on the appropriate action :
-> - The recommended best practice it to use [Dimensions](alerts-unified-log.md#split-by-alert-dimensions). Dimensions provide the column value that fired the alert, giving you context for why the alert fired and how to fix the issue.
-> - When you need to investigate in the logs, use the link in the alert to the search results in Logs.
-> - If you need the raw search results or for any other advanced customizations, use Logic Apps.
+> - Previously, search results were included in the payloads of the triggered alert and its associated notifications. This was a limited solution, since the email included only 10 rows from the unfiltered results while the webhook payload contained 1000 unfiltered results.
+> To get detailed context information about the alert so that you can decide on the appropriate action :
+> - We recommend using [Dimensions](alerts-unified-log.md#split-by-alert-dimensions). Dimensions provide the column value that fired the alert, giving you context for why the alert fired and how to fix the issue.
+> - When you need to investigate in the logs, use the link in the alert to the search results in Logs.
+> - If you need the raw search results or for any other advanced customizations, use Logic Apps.
> - The new alert rule wizard does not support customization of the JSON payload.
>    - Use custom properties in the [new API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules/create-or-update#actions) to add static parameters and associated values to the webhook actions triggered by the alert.
>    - For more advanced customizations, use Logic Apps.
azure-monitor Itsmc Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections.md
To set up your ITSM environment:
1. Connect to your ITSM.
   - For ServiceNow ITSM, see [the ServiceNow connection instructions](./itsmc-connections-servicenow.md).
- - For SCSM, see [the System Center Service Manager connection instructions](./itsmc-connections-scsm.md).
+ - For SCSM, see [the System Center Service Manager connection instructions](/azure/azure-monitor/alerts/itsmc-connections).
>[!NOTE]
> As of March 1, 2022, System Center ITSM integrations with Azure alerts are no longer enabled for new customers. New System Center ITSM Connections are not supported.
azure-monitor Itsmc Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-definition.md
After you've prepped your ITSM tool, complete these steps to create a connection
1. Specify the connection settings for the ITSM product that you're using:
   - [ServiceNow](./itsmc-connections-servicenow.md)
- - [System Center Service Manager](./itsmc-connections-scsm.md)
+ - [System Center Service Manager](/azure/azure-monitor/alerts/itsmc-connections)
> [!NOTE]
> By default, ITSMC refreshes the connection's configuration data once every 24 hours. To refresh your connection's data instantly to reflect any edits or template updates that you make, select the **Sync** button on your connection's pane:
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
Title: Diagnose with Live Metrics Stream - Azure Application Insights
description: Monitor your web app in real time with custom metrics, and diagnose issues with a live feed of failures, traces, and events.
Previously updated : 10/12/2021
Last updated : 05/31/2022
ms.devlang: csharp
Monitor your live, in-production web application by using Live Metrics Stream (a
With Live Metrics Stream, you can:
-* Validate a fix while it is released, by watching performance and failure counts.
+* Validate a fix while it's released, by watching performance and failure counts.
* Watch the effect of test loads, and diagnose issues live.
* Focus on particular test sessions or filter out known issues, by selecting and filtering the metrics you want to watch.
* Get exception traces as they happen.
Live Metrics are currently supported for ASP.NET, ASP.NET Core, Azure Functions,
### Enable LiveMetrics using code for any .NET application
-Even though LiveMetrics is enabled by default when onboarding using recommended instructions for .NET Applications, the following shows how to setup Live Metrics
+Even though LiveMetrics is enabled by default when onboarding using recommended instructions for .NET Applications, the following shows how to set up Live Metrics
manually.

1. Install the NuGet package [Microsoft.ApplicationInsights.PerfCounterCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.PerfCounterCollector)
namespace LiveMetricsDemo
}
```
-While the above sample is for a console app, the same code can be used in any .NET applications. If any other TelemetryModules are enabled which auto-collects telemetry, it is important to ensure the same configuration used for initializing those modules is used for Live Metrics module as well.
+While the above sample is for a console app, the same code can be used in any .NET applications. If any other TelemetryModules are enabled which auto-collects telemetry, it's important to ensure the same configuration used for initializing those modules is used for Live Metrics module as well.
## How does Live Metrics Stream differ from Metrics Explorer and Analytics?
While the above sample is for a console app, the same code can be used in any .N
|**Latency**|Data displayed within one second|Aggregated over minutes|
|**No retention**|Data persists while it's on the chart, and is then discarded|[Data retained for 90 days](./data-retention-privacy.md#how-long-is-the-data-kept)|
|**On demand**|Data is only streamed while the Live Metrics pane is open |Data is sent whenever the SDK is installed and enabled|
-|**Free**|There is no charge for Live Stream data|Subject to [pricing](../logs/cost-logs.md#application-insights-billing)
+|**Free**|There's no charge for Live Stream data|Subject to [pricing](../logs/cost-logs.md#application-insights-billing)
|**Sampling**|All selected metrics and counters are transmitted. Failures and stack traces are sampled. |Events may be [sampled](./api-filtering-sampling.md)|
|**Control channel**|Filter control signals are sent to the SDK. We recommend you secure this channel.|Communication is one way, to the portal|
While the above sample is for a console app, the same code can be used in any .N
(Available with ASP.NET, ASP.NET Core, and Azure Functions (v2).)
-You can monitor custom KPI live by applying arbitrary filters on any Application Insights telemetry from the portal. Click the filter control that shows when you mouse-over any of the charts. The following chart is plotting a custom Request count KPI with filters on URL and Duration attributes. Validate your filters with the Stream Preview section that shows a live feed of telemetry that matches the criteria you have specified at any point in time.
+You can monitor custom KPI live by applying arbitrary filters on any Application Insights telemetry from the portal. Select the filter control that shows when you mouse-over any of the charts. The following chart is plotting a custom Request count KPI with filters on URL and Duration attributes. Validate your filters with the Stream Preview section that shows a live feed of telemetry that matches the criteria you've specified at any point in time.
![Filter request rate](./media/live-stream/filter-request.png)
In addition to Application Insights telemetry, you can also monitor any Windows
Live metrics are aggregated at two points: locally on each server, and then across all servers. You can change the default at either by selecting other options in the respective drop-downs.

## Sample Telemetry: Custom Live Diagnostic Events
-By default, the live feed of events shows samples of failed requests and dependency calls, exceptions, events, and traces. Click the filter icon to see the applied criteria at any point in time.
+By default, the live feed of events shows samples of failed requests and dependency calls, exceptions, events, and traces. Select the filter icon to see the applied criteria at any point in time.
![Filter button](./media/live-stream/filter.png)
-As with metrics, you can specify any arbitrary criteria to any of the Application Insights telemetry types. In this example, we are selecting specific request failures, and events.
+As with metrics, you can specify any arbitrary criteria to any of the Application Insights telemetry types. In this example, we're selecting specific request failures, and events.
![Query Builder](./media/live-stream/query-builder.png)
For Azure Function Apps (v2), securing the channel with an API key can be accomp
Create an API key from within your Application Insights resource and go to **Settings > Configuration** for your Function App. Select **New application setting** and enter a name of `APPINSIGHTS_QUICKPULSEAUTHAPIKEY` and a value that corresponds to your API key.
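The same setting can also be added from the Azure CLI; a sketch with placeholder names:

```azurecli
az functionapp config appsettings set \
    --name <function-app-name> \
    --resource-group <resource-group-name> \
    --settings APPINSIGHTS_QUICKPULSEAUTHAPIKEY=<your-api-key>
```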
-However, if you recognize and trust all the connected servers, you can try the custom filters without the authenticated channel. This option is available for six months. This override is required once every new session, or when a new server comes online.
+Securing the control channel is not necessary if you recognize and trust all the connected servers. This option is made available so that you can try custom filters without having to set up an authenticated channel. If you choose this option, you'll have to authorize the connected servers once every new session or when a new server comes online. We strongly discourage the use of unsecured channels and will disable this option six months after you start using it. To use custom filters without a secure channel, select any of the filter icons and authorize the connected servers. The "Authorize connected servers" dialog displays the date (highlighted below) after which this option will be disabled.
-![Live Metrics Auth options](./media/live-stream/live-stream-auth.png)
> [!NOTE] > We strongly recommend that you set up the authenticated channel before entering potentially sensitive information like CustomerID in the filter criteria.
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
When Microsoft Sentinel is enabled in a Log Analytics workspace, all data collec
- [SecurityDetection](/azure/azure-monitor/reference/tables/securitydetection) - [SecurityEvent](/azure/azure-monitor/reference/tables/securityevent) - [WindowsFirewall](/azure/azure-monitor/reference/tables/windowsfirewall)-- [MaliciousIPCommunication](/azure/azure-monitor/reference/tables/maliciousipcommunication) - [LinuxAuditLog](/azure/azure-monitor/reference/tables/linuxauditlog) - [SysmonEvent](/azure/azure-monitor/reference/tables/sysmonevent) - [ProtectionStatus](/azure/azure-monitor/reference/tables/protectionstatus)
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/customer-managed-keys.md
Customer-Managed key is provided on dedicated cluster and these operations are r
## Limitations and constraints

-- The max number of cluster per region and subscription is two.
+- A maximum of five active clusters can be created in each region and subscription.
-- The maximum number of workspaces that can be linked to a cluster is 1000.
+- A maximum number of seven reserved clusters (active or recently deleted) can exist in each region and subscription.
-- You can link a workspace to your cluster and then unlink it. The number of workspace link operations on particular workspace is limited to two in a period of 30 days.
+- A maximum of 1,000 Log Analytics workspaces can be linked to a cluster.
- Customer-managed key encryption applies to newly ingested data after the configuration time. Data that was ingested prior to the configuration, remains encrypted with Microsoft key. You can query data ingested before and after the Customer-managed key configuration seamlessly.
-
-- The Azure Key Vault must be configured as recoverable. These properties aren't enabled by default and should be configured using CLI or PowerShell:<br>
- - [Soft Delete](../../key-vault/general/soft-delete-overview.md).
- - [Purge protection](../../key-vault/general/soft-delete-overview.md#purge-protection) should be turned on to guard against force deletion of the secret, vault even after soft delete.
--- Cluster move to another resource group or subscription isn't supported currently.
+- A maximum of two workspace link operations on a particular workspace is allowed in a 30-day period.
-- Your Azure Key Vault, cluster and workspaces must be in the same region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions.
+- Moving a cluster to another resource group or subscription isn't currently supported.
- Cluster update should not include both identity and key identifier details in the same operation. In case you need to update both, the update should be in two consecutive operations.
Customer-Managed key is provided on dedicated cluster and these operations are r
- If you create a cluster and get an error "region-name doesn't support Double Encryption for clusters", you can still create the cluster without Double encryption, by adding `"properties": {"isDoubleEncryptionEnabled": false}` in the REST request body.
- Double encryption setting cannot be changed after the cluster has been created.
- - Setting the cluster's `identity` `type` to `None` also revokes access to your data, but this approach isn't recommended since you can't revert it without contacting support. The recommended way to revoke access to your data is [key revocation](#key-revocation).
+Deleting a linked workspace is permitted even while it's linked to a cluster. If you decide to [recover](./delete-workspace.md#recover-workspace) the workspace during the [soft-delete](./delete-workspace.md#soft-delete-behavior) period, it returns to its previous state and remains linked to the cluster.
+
+- Customer-managed key encryption applies to newly ingested data after the configuration time. Data that was ingested prior to the configuration, remains encrypted with Microsoft key. You can query data ingested before and after the Customer-managed key configuration seamlessly.
+
+- The Azure Key Vault must be configured as recoverable. These properties aren't enabled by default and should be configured using CLI or PowerShell:<br>
+ - [Soft Delete](../../key-vault/general/soft-delete-overview.md).
+ - [Purge protection](../../key-vault/general/soft-delete-overview.md#purge-protection) should be turned on to guard against force deletion of the secret, vault even after soft delete.
+
+- Your Azure Key Vault, cluster and workspaces must be in the same region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions.
+
+- Setting the cluster's `identity` `type` to `None` also revokes access to your data, but this approach isn't recommended since you can't revert it without contacting support. The recommended way to revoke access to your data is [key revocation](#key-revocation).
- - You can't use Customer-managed key with User-assigned managed identity if your Key Vault is in Private-Link (vNet). You can use System-assigned managed identity in this scenario.
+- You can't use Customer-managed key with User-assigned managed identity if your Key Vault is in Private-Link (vNet). You can use System-assigned managed identity in this scenario.
## Troubleshooting
azure-monitor Tutorial Custom Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-custom-logs.md
Instead of directly configuring the schema of the table, the portal allows you t
```kusto
source
| extend TimeGenerated = todatetime(Time)
- | parse RawData.value with
+ | parse RawData with
ClientIP:string ' ' * ' ' *
azure-monitor Workbooks Chart Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-chart-visualizations.md
requests
| summarize Request = count() by bin(timestamp, 1h), RequestName = name
```
-Even though the underlying result set is different. All a user has to do is set the visualization to area, line, bar, or time and Workbooks will take care of the rest.
+Even though the queries return results in different formats, when a user sets the visualization to area, line, bar, or time, Workbooks understands how to handle the data to create the visualization.
[![Screenshot of a log line chart made from a make-series query](./media/workbooks-chart-visualizations/log-chart-line-make-series.png)](./media/workbooks-chart-visualizations/log-chart-line-make-series.png#lightbox)
The series setting tab lets you adjust the labels and colors shown for series in
## Next steps

- Learn how to create a [tile in workbooks](workbooks-tile-visualizations.md).
-- Learn how to create [interactive workbooks](workbooks-interactive.md).
+- Learn how to create [interactive workbooks](workbooks-interactive.md).
azure-portal How To Create Azure Support Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/supportability/how-to-create-azure-support-request.md
Follow these links to learn more:
* [Azure support ticket REST API](/rest/api/support)
* Engage with us on [Twitter](https://twitter.com/azuresupport)
* Get help from your peers in the [Microsoft Q&A question page](/answers/products/azure)
-* Learn more in [Azure Support FAQ](https://azure.microsoft.com/support/faq)
+* Learn more in [Azure Support FAQ](https://azure.microsoft.com/support/faq)
azure-resource-manager Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources.md
The following limitations apply to tags:
> * Azure Automation
> * Azure Content Delivery Network (CDN)
> * Azure DNS (Zone and A records)
- > * Azure Private DNS (Zone, A records, and virtual network link)
## Next steps
azure-signalr Signalr Howto Move Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-move-across-regions.md
Title: Move an Azure SignalR resource to another region | Microsoft Docs
-description: Shows you how to move an Azure SignalR resource to another region.
+ Title: Move an Azure SignalR resource to another region
+description: Learn how to use an Azure Resource Manager template to export the configuration of an Azure SignalR resource to a different Azure region.
Previously updated : 12/22/2021
Last updated : 05/23/2022
-+
+- subject-moving-resources
+- kr2b-contr-experiment
# Move an Azure SignalR resource to another region
-There are various scenarios in which you'd want to move your existing SignalR resource from one region to another. **Azure SignalR resource are region specific and can't be moved from one region to another.** You can however, use an Azure Resource Manager template to export the existing configuration of an Azure SignalR resource, modify the parameters to match the destination region, and then create a copy of your SignalR resource in another region. For more information on Resource Manager and templates, see [Quickstart: Create and deploy Azure Resource Manager templates by using the Azure portal](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md).
+Azure SignalR resources are region specific and can't be moved from one region to another. There are, however, scenarios where you might want to move your existing SignalR resource to another region.
-## Prerequisites
--- Ensure that the service and features that your are using are supported in the target region.
+You can use an Azure Resource Manager template to export the existing configuration of an Azure SignalR resource, modify the parameters to match the destination region, and then create a copy of your SignalR resource in another region. For more information on Resource Manager and templates, see [Quickstart: Create and deploy Azure Resource Manager templates by using the Azure portal](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md).
-- Verify that your Azure subscription allows you to create SignalR resource in the target region that's used. Contact support to enable the required quota.
+## Prerequisites
+- Ensure that the service and features that you're using are supported in the target region.
+- Verify that your Azure subscription allows you to create SignalR resource in the target region that's used.
+- Contact support to enable the required quota.
- For preview features, ensure that your subscription is allowlisted for the target region.

<a id="prepare"></a>
-## Prepare and move
+## Prepare and move your SignalR resource
To get started, export, and then modify a Resource Manager template.
-### Export the template and deploy from the Portal
+### Export the template and deploy from the Azure portal
The following steps show how to prepare the SignalR resource move using a Resource Manager template, and move it to the target region using the portal.
-1. Sign in to the [Azure portal](https://portal.azure.com) > **Resource Groups**.
+1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Locate the Resource Group that contains the source SignalR resource and click on it.
+1. Select **Resource Groups**. Locate the resource group that contains the source SignalR resource and select it.
-3. Select > **Automation** > **Export template**.
+1. Under **Automation**, select **Export template**.
-4. Choose **Deploy** in the **Export template** blade.
+1. Select **Deploy**.
-5. Click **TEMPLATE** > **Edit parameters** to open the **parameters.json** file in the online editor.
+1. Select **TEMPLATE** > **Edit parameters** to open the *parameters.json* file in the online editor.
-6. To edit the parameter of the SignalR resource name, change the **value** property under **parameters**:
+1. To edit the parameter of the SignalR resource name, change the `value` property under `parameters`:
```json
{
The following steps show how to prepare the SignalR resource move using a Resour
}
```
-7. Change the value in the editor to a name of your choice for the target SignalR resource. Ensure you enclose the name in quotes.
+1. Change the value in the editor to a name of your choice for the target SignalR resource. Ensure you enclose the name in quotes.
-8. Click **Save** in the editor.
+1. Select **Save** in the editor.
-9. Click **TEMPLATE** > **Edit template** to open the **template.json** file in the online editor.
+1. Select **TEMPLATE** > **Edit template** to open the *template.json* file in the online editor.
-10. To edit the target region, change the **location** property under **resources** in the online editor:
+1. To edit the target region, change the `location` property under `resources` in the online editor:
```json "resources": [
The following steps show how to prepare the SignalR resource move using a Resour
```
-11. To obtain region location codes, see [Azure SignalR Locations](https://azure.microsoft.com/global-infrastructure/services/?products=signalr-service). The code for a region is the region name with no spaces, **Central US** = **centralus**.
-
-12. You can also change other parameters in the template if you choose, and are optional depending on your requirements.
+1. To obtain region location codes, see [Azure SignalR Locations](https://azure.microsoft.com/global-infrastructure/services/?products=signalr-service). The code for a region is the region name with no spaces, **Central US** = **centralus**.
-13. Click **Save** in the online editor.
+1. You can also change other parameters in the template if you choose, and are optional depending on your requirements.
-14. Click **BASICS** > **Subscription** to choose the subscription where the target resource will be deployed.
+1. Select **Save** in the online editor.
-15. Click **BASICS** > **Resource group** to choose the resource group where the target resource will be deployed. You can click **Create new** to create a new resource group for the target resource. Ensure the name isn't the same as the source resource group of the existing resource.
+1. Select **BASICS** > **Subscription** to choose the subscription where the target resource will be deployed.
-16. Verify **BASICS** > **Location** is set to the target location where you wish for the resource to be deployed.
+1. Select **BASICS** > **Resource group** to choose the resource group where the target resource will be deployed. You can select **Create new** to create a new resource group for the target resource. Ensure the name isn't the same as the source resource group of the existing resource.
-17. Click the **Review + create** button to deploy the target Azure SignalR resource.
+1. Verify **BASICS** > **Location** is set to the target location where you wish for the resource to be deployed.
+1. Select **Review + create** to deploy the target Azure SignalR resource.
### Export the template and deploy using Azure PowerShell
To export a template by using PowerShell:
   Connect-AzAccount
   ```
-2. If your identity is associated with more than one subscription, then set your active subscription to subscription of the SignalR resource that you want to move.
+1. If your identity is associated with more than one subscription, then set your active subscription to the subscription of the SignalR resource that you want to move.
   ```azurepowershell-interactive
   $context = Get-AzSubscription -SubscriptionId <subscription-id>
   Set-AzContext $context
   ```
-3. Export the template of your source SignalR resource. These commands save a json template to your current directory.
+1. Export the template of your source SignalR resource. These commands save a JSON template to your current directory.
   ```azurepowershell-interactive
   $resource = Get-AzResource `
To export a template by using PowerShell:
   -IncludeParameterDefaultValue
   ```
-4. The file downloaded will be named after the resource group the resource was exported from. Locate the file that was exported from the command named **\<resource-group-name>.json** and open it in an editor of your choice:
-
+1. The downloaded file is named after the resource group the resource was exported from. Locate the exported file, named *\<resource-group-name>.json*, and open it in an editor of your choice:
+   ```azurepowershell
+   notepad <source-resource-group-name>.json
+   ```
-5. To edit the parameter of the SignalR resource name, change the property **defaultValue** of the source SignalR resource name to the name of your target SignalR resource, ensure the name is in quotes:
-
+1. To edit the parameter of the SignalR resource name, change the property `defaultValue` of the source SignalR resource name to the name of your target SignalR resource. Ensure the name is in quotes:
+   ```json
+   {
+     "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
To export a template by using PowerShell:
   }
   ```
-6. To edit the target region where the SignalR resource will be moved, change the **location** property under resources:
+1. To edit the target region where the SignalR resource will be moved, change the `location` property under `resources`:
   ```json
   "resources": [
To export a template by using PowerShell:
   ]
   ```
-7. To obtain region location codes, see [Azure SignalR Locations](https://azure.microsoft.com/global-infrastructure/services/?products=signalr-service). The code for a region is the region name with no spaces, **Central US** = **centralus**.
+1. To obtain region location codes, see [Azure SignalR Locations](https://azure.microsoft.com/global-infrastructure/services/?products=signalr-service). The code for a region is the region name with no spaces; for example, **Central US** = **centralus**.
+
+ You can also change other parameters in the template if you choose, depending on your requirements.
-8. You can also change other parameters in the template if you choose, and are optional depending on your requirements.
+1. Save the *\<resource-group-name>.json* file.
-9. Save the **\<resource-group-name>.json** file.
+1. Create a resource group in the target region for the target SignalR resource to be deployed using [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup).
-10. Create a resource group in the target region for the target SignalR resource to be deployed using [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup).
-
   ```azurepowershell-interactive
   New-AzResourceGroup -Name <target-resource-group-name> -Location <target-region>
   ```
-11. Deploy the edited **\<resource-group-name>.json** file to the resource group created in the previous step using [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment):
+1. Deploy the edited *\<resource-group-name>.json* file to the resource group created in the previous step using [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment):
   ```azurepowershell-interactive
   New-AzResourceGroupDeployment -ResourceGroupName <target-resource-group-name> -TemplateFile <source-resource-group-name>.json
   ```
-12. To verify the resources were created in the target region, use [Get-AzResourceGroup](/powershell/module/az.resources/get-azresourcegroup) and [Get-AzSignalR](/powershell/module/az.signalr/get-azsignalr):
-
- ```azurepowershell-interactive
- Get-AzResourceGroup -Name <target-resource-group-name>
- ```
+1. To verify that the resources were created in the target region, use [Get-AzResourceGroup](/powershell/module/az.resources/get-azresourcegroup) and [Get-AzSignalR](/powershell/module/az.signalr/get-azsignalr):
```azurepowershell-interactive
+ Get-AzResourceGroup -Name <target-resource-group-name>
   Get-AzSignalR -Name <target-signalr-name> -ResourceGroupName <target-resource-group-name>
   ```
-## Discard
-
-After the deployment, if you wish to start over or discard the SignalR resource in the target, delete the resource group that was created in the target and the moved SignalR resource will be deleted. To do so, select the resource group from your dashboard in the portal and select **Delete** at the top of the overview page. Alternatively you can use [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup):
-
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name <target-resource-group-name>
-```
+> [!NOTE]
+>
+> After the deployment, if you wish to start over or discard the SignalR resource in the target region, delete the resource group that was created there; this also deletes the moved SignalR resource. To delete the resource group, select it from your dashboard in the portal and select **Delete** at the top of the overview page. Alternatively, you can use [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup):
+>
+> ```azurepowershell-interactive
+> Remove-AzResourceGroup -Name <target-resource-group-name>
+> ```
-## Clean up
+## Clean up source region
To commit the changes and complete the move of the SignalR resource, delete the source SignalR resource or resource group. To do so, select the SignalR resource or resource group from your dashboard in the portal and select **Delete** at the top of each page.

## Next steps
-In this tutorial, you moved an Azure SignalR resource from one region to another and cleaned up the source resources. To learn more about moving resources between regions and disaster recovery in Azure, refer to:
+In this tutorial, you moved an Azure SignalR resource from one region to another and cleaned up the source resources. To learn more about moving resources between regions and disaster recovery in Azure, see:
- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)
- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
azure-video-analyzer Deploy Iot Edge Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/deploy-iot-edge-linux-on-windows.md
The following depicts the overall flow of the document and in 5 simple steps you
## Next steps * Try motion detection along with recording relevant videos in the Cloud. Follow the steps from the [detect motion and record video clips](detect-motion-record-video-edge-devices.md) quickstart.
-* Use our [VS Code extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.live-video-analytics-edge) to view additional pipelines.
+* Use our [VS Code extension](https://marketplace.visualstudio.com/vscode) to view additional pipelines.
* Use an [IP camera](https://en.wikipedia.org/wiki/IP_camera) that supports RTSP instead of using the RTSP simulator. You can find IP cameras that support RTSP on the [ONVIF conformant products](https://www.onvif.org/conformant-products/) page. Look for devices that conform with profiles G, S, or T. * Run [AI on Live Video](analyze-live-video-use-your-model-http.md#overview) (you can skip the prerequisite setup as it has already been done above).
azure-video-indexer Audio Effects Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/audio-effects-detection.md
Some scenarios where this feature is useful:
## Supported audio categories
-**Audio effect detection** can detect and classify 7 different categories. In the next table, you can find the different categories split in to the different presets, divided to **Standard** and **Advanced**. For more information, see [pricing](https://azure.microsoft.com/pricing/details/azure/media-services/).
+**Audio effect detection** can detect and classify 7 different categories. In the following table, you can find the different categories split into the different presets, **Standard** and **Advanced**. For more information, see [pricing](https://azure.microsoft.com/pricing/details/media-services/).
|Indexing type |Standard indexing| Advanced indexing|
||||
azure-video-indexer Compare Video Indexer With Media Services Presets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/compare-video-indexer-with-media-services-presets.md
Currently, there is an overlap between features offered by the [Azure Video Inde
||||
|Media Insights|[Enhanced](video-indexer-output-json-v2.md) |[Fundamentals](/azure/media-services/latest/analyze-video-audio-files-concept)|
|Experiences|See the full list of supported features: <br/> [Overview](video-indexer-overview.md)|Returns video insights only|
-|Billing|[Media Services pricing](https://azure.microsoft.com/pricing/details/azure/media-services/#analytics)|[Media Services pricing](https://azure.microsoft.com/pricing/details/azure/media-services/#analytics)|
+|Billing|[Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/#analytics) |[Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/#analytics) |
|Compliance|For the most current compliance updates, visit [Azure Compliance Offerings.pdf](https://gallery.technet.microsoft.com/Overview-of-Azure-c1be3942/file/178110/23/Microsoft%20Azure%20Compliance%20Offerings.pdf) and search for "Azure Video Indexer" to see if it complies with a certificate of interest.|For the most current compliance updates, visit [Azure Compliance Offerings.pdf](https://gallery.technet.microsoft.com/Overview-of-Azure-c1be3942/file/178110/23/Microsoft%20Azure%20Compliance%20Offerings.pdf) and search for "Media Services" to see if it complies with a certificate of interest.|
|Free Trial|East US|Not available|
|Region availability|See [Cognitive Services availability by region](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services)|See [Media Services availability by region](https://azure.microsoft.com/global-infrastructure/services/?products=media-services).|
azure-video-indexer Connect To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-to-azure.md
When creating an Azure Video Indexer account, you can choose a free trial accoun
1. [Azure Video Indexer portal](https://aka.ms/vi-portal-link)
2. [Azure portal](https://portal.azure.com/#home)
- 3. [QuickStart ARM template](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/ARM-Samples/Create-Account)
To learn how to create a **new ARM-based** Azure Video Indexer account, see this [article](create-video-analyzer-for-media-account.md).
If the connection to Azure failed, you can attempt to troubleshoot the problem b
### Create and configure a Media Services account
-1. Use the [Azure](https://portal.azure.com/) portal to create an Azure Media Services account, as described in [Create an account](/azure/azure/media-services/previous/media-services-portal-create-account).
+1. Use the [Azure](https://portal.azure.com/) portal to create an Azure Media Services account, as described in [Create an account](/azure/media-services/previous/media-services-portal-create-account).
Make sure the Media Services account was created with the classic APIs.
If the connection to Azure failed, you can attempt to troubleshoot the problem b
In the new Media Services account, select **Streaming endpoints**. Then select the streaming endpoint and press start.

:::image type="content" alt-text="Screenshot that shows how to specify streaming endpoints." source="./media/create-account/create-ams-account-se.png":::
-4. For Azure Video Indexer to authenticate with Media Services API, an AD app needs to be created. The following steps guide you through the Azure AD authentication process described in [Get started with Azure AD authentication by using the Azure portal](/azure/azure/media-services/previous/media-services-portal-get-started-with-aad):
+4. For Azure Video Indexer to authenticate with Media Services API, an AD app needs to be created. The following steps guide you through the Azure AD authentication process described in [Get started with Azure AD authentication by using the Azure portal](/azure/media-services/previous/media-services-portal-get-started-with-aad):
1. In the new Media Services account, select **API access**.
- 2. Select [Service principal authentication method](/azure/azure/media-services/previous/media-services-portal-get-started-with-aad).
+ 2. Select [Service principal authentication method](/azure/media-services/previous/media-services-portal-get-started-with-aad).
 3. Get the client ID and client secret. After you select **Settings** > **Keys**, add a **Description** and press **Save**; the key value is then populated.
azure-video-indexer Customize Person Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-person-model-with-website.md
Previously updated : 12/16/2020 Last updated : 05/31/2022
You can use the Azure Video Indexer website to edit faces that were detected in
## Create a new Person model

1. Select the **+ Add model** button on the right.
-1. Enter the name of the model. You can now add new people and faces to the new Person model.
+1. Enter the name of the model and select the check button to save the new model. You can now add new people and faces to the new Person model.
1. Select the list menu button and choose **+ Add person**.

> [!div class="mx-imgBorder"]
You can delete any Person model that you created in your account. However, you c
## Manage existing people in a Person model
-To look at the contents of any of your Person models, select the arrow next to the name of the Person model. The drop-down shows you all of the people in that particular Person model. If you select the list menu button next to each of the people, you see manage, rename, and delete options.
+To look at the contents of any of your Person models, select the arrow next to the name of the Person model. Then you can view all of the people in that particular Person model. If you select the list menu button next to each of the people, you see manage, rename, and delete options.
![Screenshot shows a contextual menu with options to Manage, Rename, and Delete.](./media/customize-face-model/manage-people.png)
azure-video-indexer Deploy With Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-arm-template.md
The resource will be deployed to your subscription and will create the Azure Vid
```

> [!NOTE]
-> If you would like to work with bicep format, inspect the [bicep file](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/ARM-Samples/Create-Account/avam.template.bicep) on this repo.
+> If you would like to work with bicep format, inspect the [bicep file](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/ARM-Quick-Start/avam.template.bicep) on this repo.
## Parameters
If you're new to template deployment, see:
## Next steps
-[Connect an existing classic paid Azure Video Indexer account to ARM-based account](connect-classic-account-to-arm.md)
+[Connect an existing classic paid Azure Video Indexer account to ARM-based account](connect-classic-account-to-arm.md)
azure-video-indexer Manage Account Connected To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/manage-account-connected-to-azure.md
If your account needs some adjustments, you see relevant errors and warnings abo
* Media reserved units
- You must allocate Media Reserved Units on your Media Service resource in order to index videos. For optimal indexing performance, it's recommended to allocate at least 10 S3 Reserved Units. For pricing information, see the FAQ section of the [Media Services pricing](https://azure.microsoft.com/pricing/details/azure/media-services/) page.
+ You must allocate Media Reserved Units on your Media Service resource in order to index videos. For optimal indexing performance, it's recommended to allocate at least 10 S3 Reserved Units. For pricing information, see the FAQ section of the [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/) page.
## Next steps
azure-video-indexer Odrv Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/odrv-download.md
Use this parameter to define an AI bundle that you want to apply on your audio o
Azure Video Indexer covers up to two tracks of audio. If the file has more audio tracks, they're treated as one track. If you want to index the tracks separately, you need to extract the relevant audio file and index it as `AudioOnly`.
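For example, one way to pull an audio track out of a video before indexing it as `AudioOnly` is with a tool such as FFmpeg. This is a sketch that assumes FFmpeg is installed, the first audio stream is the one you want, and that stream is AAC; the file names are placeholders:

```bash
# Copy the first audio stream (0:a:0) out of the source video without re-encoding;
# match the output extension to the source audio codec.
ffmpeg -i source-video.mp4 -map 0:a:0 -vn -acodec copy audio-track-0.aac
```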
-Price depends on the selected indexing option. For more information, see [Media Services pricing](https://azure.microsoft.com/pricing/details/azure/media-services/).
+Price depends on the selected indexing option. For more information, see [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/).
#### priority
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Azure Video Indexer website is now supporting account management based on ARM in
### Leverage open-source code to create ARM based account
-Added new code samples including HTTP calls to use Azure Video Indexer create, read, update and delete (CRUD) ARM API for solution developers. See [this sample](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/ARM-Samples/Create-Account
-).
+Added new code samples including HTTP calls to use Azure Video Indexer create, read, update and delete (CRUD) ARM API for solution developers. See [this sample](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/ARM-Quick-Start).
## January 2022
You can now see the detected acoustic events in the closed captions file. The fi
### Audio analysis
-Audio analysis is available now in additional new bundle of audio features at different price point. The new **Basic Audio** analysis preset provides a low-cost option to only extract speech transcription, translation and format output captions and subtitles. The **Basic Audio** preset will produce two separate meters on your bill, including a line for transcription and a separate line for caption and subtitle formatting. More information on the pricing, see the [Media Services pricing](https://azure.microsoft.com/pricing/details/azure/media-services/) page.
+Audio analysis is now available in an additional bundle of audio features at a different price point. The new **Basic Audio** analysis preset provides a low-cost option to extract only speech transcription and translation, and to format output captions and subtitles. The **Basic Audio** preset produces two separate meters on your bill: a line for transcription and a separate line for caption and subtitle formatting. For more information on pricing, see the [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/) page.
The newly added bundle is available when indexing or re-indexing your file by choosing the **Advanced option** -> **Basic Audio** preset (under the **Video + audio indexing** drop-down box).
azure-video-indexer Upload Index Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/upload-index-videos.md
When you're creating an Azure Video Indexer account, you choose between:
- A free trial account. Azure Video Indexer provides up to 600 minutes of free indexing to website users and up to 2,400 minutes of free indexing to API users.
- A paid option where you're not limited by a quota. You create an Azure Video Indexer account that's [connected to your Azure subscription and an Azure Media Services account](connect-to-azure.md). You pay for indexed minutes.
-For more information about account types, see [Media Services pricing](https://azure.microsoft.com/pricing/details/azure/media-services/).
+For more information about account types, see [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/).
After you upload and index a video, you can use [Azure Video Indexer website](video-indexer-view-edit.md) or [Azure Video Indexer Developer Portal](video-indexer-use-apis.md) to see the insights of the video (see [Examine the Azure Video Indexer output](video-indexer-output-json-v2.md)).
Use this parameter to define an AI bundle that you want to apply on your audio o
Azure Video Indexer covers up to two tracks of audio. If the file has more audio tracks, they're treated as one track. If you want to index the tracks separately, you need to extract the relevant audio file and index it as `AudioOnly`.
-Price depends on the selected indexing option. For more information, see [Media Services pricing](https://azure.microsoft.com/pricing/details/azure/media-services/).
+Price depends on the selected indexing option. For more information, see [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/).
#### priority
azure-video-indexer Video Indexer Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-get-started.md
This getting started quickstart shows how to sign in to the Azure Video Indexer website and how to upload your first video.
-When creating an Azure Video Indexer account, you can choose a free trial account (where you get a certain number of free indexing minutes) or a paid option (where you aren't limited by the quota). With free trial, Azure Video Indexer provides up to 600 minutes of free indexing to website users and up to 2400 minutes of free indexing to API users. With paid option, you create an Azure Video Indexer account that is [connected to your Azure subscription and an Azure Media Services account](connect-to-azure.md). You pay for minutes indexed, for more information, see [Media Services pricing](https://azure.microsoft.com/pricing/details/azure/media-services/).
+When creating an Azure Video Indexer account, you can choose a free trial account (where you get a certain number of free indexing minutes) or a paid option (where you aren't limited by the quota). With the free trial, Azure Video Indexer provides up to 600 minutes of free indexing to website users and up to 2400 minutes of free indexing to API users. With the paid option, you create an Azure Video Indexer account that is [connected to your Azure subscription and an Azure Media Services account](connect-to-azure.md). You pay for minutes indexed. For more information, see [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/).
## Sign up for Azure Video Indexer
azure-video-indexer Video Indexer Use Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-use-apis.md
Azure Video Indexer consolidates various audio and video artificial intelligence (AI) technologies offered by Microsoft into one integrated service, making development simpler. The APIs are designed to enable developers to focus on consuming Media AI technologies without worrying about scale, global reach, availability, and reliability of cloud platforms. You can use the API to upload your files, get detailed video insights, get URLs of embeddable insight and player widgets, and more.
-When creating an Azure Video Indexer account, you can choose a free trial account (where you get a certain number of free indexing minutes) or a paid option (where you're not limited by the quota). With a free trial, Azure Video Indexer provides up to 600 minutes of free indexing to website users and up to 2400 minutes of free indexing to API users. With a paid option, you create an Azure Video Indexer account that's [connected to your Azure subscription and an Azure Media Services account](connect-to-azure.md). You pay for minutes indexed, for more information, see [Media Services pricing](https://azure.microsoft.com/pricing/details/azure/media-services/).
+When creating an Azure Video Indexer account, you can choose a free trial account (where you get a certain number of free indexing minutes) or a paid option (where you're not limited by the quota). With a free trial, Azure Video Indexer provides up to 600 minutes of free indexing to website users and up to 2400 minutes of free indexing to API users. With a paid option, you create an Azure Video Indexer account that's [connected to your Azure subscription and an Azure Media Services account](connect-to-azure.md). You pay for minutes indexed. For more information, see [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/).
This article shows how the developers can take advantage of the [Azure Video Indexer API](https://api-portal.videoindexer.ai/).
azure-vmware Deploy Disaster Recovery Using Jetstream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-disaster-recovery-using-jetstream.md
Once JetStream DR MSA and JetStream VIB are installed on the Azure VMware Soluti
1. [Select the VMs](https://www.jetstreamsoft.com/portal/jetstream-knowledge-base/select-vms-for-protection/) you want to protect and then [start VM protection](https://www.jetstreamsoft.com/portal/jetstream-knowledge-base/start-vm-protection/).
-For remaining configuration steps for JetStream DR, such as creating a failover runbook, invoking failover to the DR site, and invoking failback to the primary site, see the [JetStream Admin Guide documentation](https://www.jetstreamsoft.com/portal/jetstream-article-categories/product-manual/).
+For remaining configuration steps for JetStream DR, such as creating a failover runbook, invoking failover to the DR site, and invoking failback to the primary site, see the [JetStream Admin Guide documentation](https://docs.delphix.com/docs51/delphix-jet-stream/jet-stream-admin-guide).
## Disable JetStream DR on an Azure VMware Solution cluster
azure-vmware Deploy Zerto Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-zerto-disaster-recovery.md
You can reuse pre-existing Zerto product licenses for Azure VMware Solution envi
### How is Zerto supported?
-Zerto disaster recovery is a solution that is sold and supported by Zerto. For any support issue with Zerto disaster recovery, always contact [Zerto support](https://www.zerto.com/company/support-and-service/support/).
+Zerto disaster recovery is a solution that is sold and supported by Zerto. For any support issue with Zerto disaster recovery, always contact [Zerto support](https://www.zerto.com/support-and-services/).
Zerto and Microsoft support teams will engage each other as needed to troubleshoot Zerto disaster recovery issues on Azure VMware Solution.
azure-web-pubsub Reference Server Sdk Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-python.md
Title: Reference - Python server SDK for Azure Web PubSub
-description: This reference describes the Python server SDK for the Azure Web PubSub service.
+description: Learn about the Python server SDK for the Azure Web PubSub service. You can use this library in your app server to manage the WebSocket client connections.
Previously updated : 11/08/2021 Last updated : 05/23/2022

# Azure Web PubSub service client library for Python

[Azure Web PubSub Service](./index.yml) is an Azure-managed service that helps developers easily build web applications with real-time features and a publish-subscribe pattern. Any scenario that requires real-time publish-subscribe messaging between server and clients, or among clients, can use the Azure Web PubSub service. Traditional real-time features that often require polling from the server or submitting HTTP requests can also use the Azure Web PubSub service.
-You can use this library in your app server side to manage the WebSocket client connections, as shown in below diagram:
+You can use this library on your app server side to manage the WebSocket client connections, as shown in the following diagram:
![The workflow diagram shows the workflow of using the service client library.](media/sdk-reference/service-client-overflow.png)
Use this library to:
- Send messages to hubs and groups.
- Send messages to particular users and connections.
- Organize users and connections into groups.
-- Close connections
+- Close connections.
+- Grant, revoke, and check permissions for an existing connection.
-[Source code](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/webpubsub/azure-messaging-webpubsubservice) | [Package (Pypi)][package] | [API reference documentation](/python/api/overview/azure/messaging-webpubsubservice-readme) | [Product documentation][webpubsubservice_docs]
+## Prerequisites
-> [!IMPORTANT]
-> Azure SDK Python packages support for Python 2.7 is ending 01 January 2022. For more information and questions, please refer to https://github.com/Azure/azure-sdk-for-python/issues/20691.
+- Python 3.6 or later is required to use this package.
+- You need an [Azure subscription][azure_sub] and an existing [Azure Web PubSub service instance][webpubsubservice_docs] to use this package.
-## Getting started
+> [!IMPORTANT]
+> Azure SDK Python package support for Python 2.7 ended on 01 January 2022. For more information, see [Azure SDK Python packages support](https://github.com/Azure/azure-sdk-for-python/issues/20691).
-### Prerequisites
+## Install the package
-- Python 2.7, or 3.6 or later is required to use this package.
-- You need an [Azure subscription][azure_sub] and a [Azure WebPubSub service instance][webpubsubservice_docs] to use this package.
-- An existing Azure Web PubSub service instance.
-
-### 1. Install the package
+Use this command to install the package:
```bash
python -m pip install azure-messaging-webpubsubservice
```
-### 2. Create and authenticate a WebPubSubServiceClient
+## Create and authenticate a WebPubSubServiceClient
-You can authenticate the `WebPubSubServiceClient` using [connection string][connection_string]:
+You can authenticate the `WebPubSubServiceClient` using a [connection string][connection_string]:
```python
>>> from azure.messaging.webpubsubservice import WebPubSubServiceClient
You can authenticate the `WebPubSubServiceClient` using [connection string][conn
>>> service = WebPubSubServiceClient.from_connection_string(connection_string='<connection_string>', hub='hub')
```
-Or using the service endpoint and the access key:
+Or use the service endpoint and the access key:
```python
>>> from azure.messaging.webpubsubservice import WebPubSubServiceClient
Or using the service endpoint and the access key:
>>> from azure.core.credentials import AzureKeyCredential
>>> service = WebPubSubServiceClient(endpoint='<endpoint>', hub='hub', credential=AzureKeyCredential("<access_key>"))
```
-Or using [Azure Active Directory][aad_doc]:
+Or use [Azure Active Directory][aad_doc] (Azure AD):
-1. [pip][pip] install [`azure-identity`][azure_identity_pip]
-2. Follow the document to [enable AAD authentication on your Webpubsub resource][aad_doc]
-3. Update code to use [DefaultAzureCredential][default_azure_credential]
+1. [pip][pip] install [`azure-identity`][azure_identity_pip].
+2. [Enable Azure AD authentication on your Webpubsub resource][aad_doc].
+3. Update code to use [DefaultAzureCredential][default_azure_credential].
```python
>>> from azure.messaging.webpubsubservice import WebPubSubServiceClient
Or using [Azure Active Directory][aad_doc]:
})
```
-The WebSocket client will receive JSON serialized text: `{"from": "user1", "data": "Hello world"}`.
+The WebSocket client receives JSON serialized text: `{"from": "user1", "data": "Hello world"}`.
### Broadcast messages in plain-text format
The WebSocket client will receive JSON serialized text: `{"from": "user1", "data
>>> service.send_to_all(message = 'Hello world', content_type='text/plain')
```
-The WebSocket client will receive text: `Hello world`.
+The WebSocket client receives text: `Hello world`.
### Broadcast messages in binary format
The WebSocket client will receive text: `Hello world`.
>>> service.send_to_all(message=io.StringIO('Hello World'), content_type='application/octet-stream')
```
-The WebSocket client will receive binary text: `b'Hello world'`.
-
-## Troubleshooting
+The WebSocket client receives binary text: `b'Hello world'`.
-### Logging
+## Logging
This SDK uses Python standard logging library.
-You can configure logging print out debugging information to the stdout or anywhere you want.
+You can configure logging to print debugging information to the `stdout` or anywhere you want.
```python
import sys
credential = DefaultAzureCredential()
service = WebPubSubServiceClient(endpoint=endpoint, hub='hub', credential=credential, logging_enable=True)
```
-Similarly, `logging_enable` can enable detailed logging for a single call,
-even when it isn't enabled for the WebPubSubServiceClient:
+Similarly, `logging_enable` can enable detailed logging for a single call, even when it isn't enabled for the `WebPubSubServiceClient`:
```python
result = service.send_to_all(..., logging_enable=True)
```
-Http request and response details are printed to stdout with this logging config.
+HTTP request and response details are printed to `stdout` with this logging configuration.
## Next steps
-Check [more samples here][samples].
+- [Source code](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/webpubsub/azure-messaging-webpubsubservice)
+- [Package (Pypi)][package]
+- [API reference documentation](/python/api/overview/azure/messaging-webpubsubservice-readme)
+- [Product documentation][webpubsubservice_docs]
+
+For more samples, see [Azure Web PubSub service client library for Python Samples][samples].
## Contributing
-This project welcomes contributions and suggestions. Most contributions require
-you to agree to a Contributor License Agreement (CLA) declaring that you have
-the right to, and actually do, grant us the rights to use your contribution.
-For details, visit https://cla.microsoft.com.
+This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For more information, see [Contributor License Agreement](https://cla.microsoft.com).
-When you submit a pull request, a CLA-bot will automatically determine whether
-you need to provide a CLA and decorate the PR appropriately (e.g., label,
-comment). Simply follow the instructions provided by the bot. You will only
-need to do this once across all repos using our CLA.
+When you submit a pull request, a CLA-bot automatically determines whether you need to provide a CLA and decorates the PR appropriately (for example, with a label or comment). Follow the instructions provided by the bot. You only need to do this once across all repos using our CLA.
-This project has adopted the
-[Microsoft Open Source Code of Conduct][code_of_conduct]. For more information,
-see the Code of Conduct FAQ or contact opencode@microsoft.com with any
-additional questions or comments.
+This project has adopted the Microsoft Open Source Code of Conduct. For more information, see [Code of Conduct][code_of_conduct] FAQ or contact [Open Source Conduct Team](mailto:opencode@microsoft.com) with questions or comments.
<!-- LINKS -->
[webpubsubservice_docs]: ./index.yml
backup Backup Azure Sap Hana Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sap-hana-database.md
Title: Back up an SAP HANA database to Azure with Azure Backup
description: In this article, learn how to back up an SAP HANA database to Azure virtual machines with the Azure Backup service.
Previously updated : 04/28/2022 Last updated : 06/01/2022
The following table lists the various alternatives you can use for establishing
| Private endpoints | Allow backups over private IPs inside the virtual network <br><br> Provide granular control on the network and vault side | Incurs standard private endpoint [costs](https://azure.microsoft.com/pricing/details/private-link/) |
| NSG service tags | Easier to manage as range changes are automatically merged <br><br> No additional costs | Can be used with NSGs only <br><br> Provides access to the entire service |
| Azure Firewall FQDN tags | Easier to manage since the required FQDNs are automatically managed | Can be used with Azure Firewall only |
-| Allow access to service FQDNs/IPs | No additional costs <br><br> Works with all network security appliances and firewalls | A broad set of IPs or FQDNs may be required to be accessed |
+| Allow access to service FQDNs/IPs | No additional costs. <br><br> Works with all network security appliances and firewalls. <br><br> You can also use service endpoints for *Storage* and *Azure Active Directory*. However, for Azure Backup, you need to assign the access to the corresponding IPs/FQDNs. | A broad set of IPs or FQDNs may be required to be accessed. |
| [Virtual Network Service Endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) | Can be used for Azure Storage (= Recovery Services vault). <br><br> Provides a large benefit in optimizing the performance of data plane traffic (see the sketch after this table). | Can't be used for Azure AD, Azure Backup service. |
| Network Virtual Appliance | Can be used for Azure Storage, Azure AD, Azure Backup service. <br><br> **Data plane** <ul><li> Azure Storage: `*.blob.core.windows.net`, `*.queue.core.windows.net`, `*.blob.storage.azure.net` </li></ul> <br><br> **Management plane** <ul><li> Azure AD: Allow access to FQDNs mentioned in sections 56 and 59 of [Microsoft 365 Common and Office Online](/microsoft-365/enterprise/urls-and-ip-address-ranges?view=o365-worldwide&preserve-view=true#microsoft-365-common-and-office-online). </li><li> Azure Backup service: `.backup.windowsazure.com` </li></ul> <br>Learn more about [Azure Firewall service tags](../firewall/fqdn-tags.md). | Adds overhead to data plane traffic and decreases throughput/performance. |
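As a sketch of the service-endpoint option in the table above, the *Storage* and *Azure Active Directory* service endpoints can be enabled on an existing subnet with the Azure CLI; all names here are placeholders:

```azurecli
# Enable service endpoints for Storage and Azure AD on an existing subnet (sketch).
az network vnet subnet update \
    --resource-group <resource-group-name> \
    --vnet-name <vnet-name> \
    --name <subnet-name> \
    --service-endpoints Microsoft.Storage Microsoft.AzureActiveDirectory
```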
backup Backup Sql Server Database Azure Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-sql-server-database-azure-vms.md
Title: Back up multiple SQL Server VMs from the vault
description: In this article, learn how to back up SQL Server databases on Azure virtual machines with Azure Backup from the Recovery Services vault.
Previously updated : 04/28/2022 Last updated : 06/01/2022
The following table lists the various alternatives you can use for establishing
| Private endpoints | Allow backups over private IPs inside the virtual network <br><br> Provide granular control on the network and vault side | Incurs standard private endpoint [costs](https://azure.microsoft.com/pricing/details/private-link/) |
| NSG service tags | Easier to manage as range changes are automatically merged <br><br> No additional costs | Can be used with NSGs only <br><br> Provides access to the entire service |
| Azure Firewall FQDN tags | Easier to manage since the required FQDNs are automatically managed | Can be used with Azure Firewall only |
-| Allow access to service FQDNs/IPs | No additional costs <br><br> Works with all network security appliances and firewalls | A broad set of IPs or FQDNs may be required to be accessed |
+| Allow access to service FQDNs/IPs | No additional costs. <br><br> Works with all network security appliances and firewalls. <br><br> You can also use service endpoints for *Storage* and *Azure Active Directory*. However, for Azure Backup, you need to assign the access to the corresponding IPs/FQDNs. | A broad set of IPs or FQDNs may be required to be accessed. |
| Use an HTTP proxy | Single point of internet access to VMs | Additional costs to run a VM with the proxy software |

The following sections provide more details around using these options.
bastion Connect Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-ip-address.md
Before you begin these steps, verify that you have the following environment set
1. To connect to a VM using a specified private IP address, you make the connection from Bastion to the VM, not directly from the VM page. On your Bastion page, select **Connect** to open the Connect page.
-1. On the Bastion **Connect** page, for **Hostname**, enter the private IP address of the target VM.
+1. On the Bastion **Connect** page, for **IP address**, enter the private IP address of the target VM.
:::image type="content" source="./media/connect-ip-address/ip-address.png" alt-text="Screenshot of the Connect using Azure Bastion page." lightbox="./media/connect-ip-address/ip-address.png":::
cloud-services-extended-support Cloud Services Model And Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/cloud-services-model-and-package.md
Once the cloud service is running in Azure, you can reconfigure it through the *
* I want to know more about the [ServiceDefinition.csdef](#csdef) and [ServiceConfig.cscfg](#cscfg) files.
* I already know about that, give me [some examples](#next-steps) on what I can configure.
* I want to create the [ServicePackage.cspkg](#cspkg).
-* I am using Visual Studio and I want to...
- * [Create a cloud service][vs_create]
- * [Reconfigure an existing cloud service][vs_reconfigure]
- * [Deploy a Cloud Service project][vs_deploy]
- * [Remote desktop into a cloud service instance][remotedesktop]
<a name="csdef"></a>
cognitive-services Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/concepts/entities.md
You can use entities as a signal for an intent. For example, the presence of a c
| Example utterance | Entity | Intent |
|--|--|--|
-| Book me a _fight to New York_. | City | Book Flight |
+| Book me a _flight to New York_. | City | Book Flight |
| Book me the _main conference room_. | Room | Reserve Room |

## Entities as Feature for entities
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/overview.md
The following document file types are supported by Document Translation:
|Tab Separated Values/TAB|tsv/tab| A tab-delimited raw-data file used by spreadsheet programs.|
|Text|txt| An unformatted text document.|
+### Legacy file types
+
+Source file types will be preserved during the document translation with the following exceptions:
+
+| Source file extension | Translated file extension|
+| | |
+| .doc, .odt, .rtf, | .docx |
+| .xls, .ods | .xlsx |
+| .ppt, .odp | .pptx |
+
## Supported glossary formats

The following glossary file types are supported by Document Translation:
cognitive-services Model Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/model-lifecycle.md
Use the table below to find which API versions are supported by each feature:
| Feature | Supported versions | Latest Generally Available version | Latest preview version |
|--||||
-| Custom text classification | `2022-03-01-preview` | | `2022-03-01-preview` |
-| Conversational language understanding | `2022-03-01-preview` | | `2022-03-01-preview` |
-| Custom named entity recognition | `2022-03-01-preview` | | `2022-03-01-preview` |
-| Orchestration workflow | `2022-03-01-preview` | | `2022-03-01-preview` |
+| Custom text classification | `2022-05-01`, `2022-05-15-preview` | `2022-05-01` | `2022-05-15-preview` |
+| Conversational language understanding | `2022-05-01`, `2022-05-15-preview` | `2022-05-01` | `2022-05-15-preview` |
+| Custom named entity recognition | `2022-05-01`, `2022-05-15-preview` | `2022-05-01` | `2022-05-15-preview` |
+| Orchestration workflow | `2022-05-01`, `2022-05-15-preview` | `2022-05-01` | `2022-05-15-preview` |
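The API version is selected per request with the `api-version` query parameter. The following is a minimal sketch of pinning the generally available version on an authoring call; the endpoint, path, and key header here are illustrative assumptions rather than a definitive reference:

```bash
# List authoring projects, pinning the GA API version via the query string (sketch).
curl -X GET "https://<your-language-resource>.cognitiveservices.azure.com/language/authoring/analyze-text/projects?api-version=2022-05-01" \
    -H "Ocp-Apim-Subscription-Key: <your-key>"
```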
## Next steps
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/call-api.md
You can also use the client libraries provided by the Azure SDK to send requests
 |Language |Package version |
 |||
- |.NET | [5.2.0-beta.2](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0-beta.2) |
- |Python | [5.2.0b2](https://pypi.org/project/azure-ai-textanalytics/5.2.0b2/) |
+ |.NET | [1.0.0-beta.3 ](https://www.nuget.org/packages/Azure.AI.Language.Conversations/1.0.0-beta.3) |
+ |Python | [1.1.0b1](https://pypi.org/project/azure-ai-language-conversations/) |
4. After you've installed the client library, use the following samples on GitHub to start calling the API.
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/call-api.md
First you will need to get your resource key and endpoint:
 |Language |Package version |
 |||
- |.NET | [5.2.0-beta.2](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0-beta.2) |
- |Java | [5.2.0-beta.2](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.2.0-beta.2) |
- |JavaScript | [5.2.0-beta.2](https://www.npmjs.com/package/@azure/ai-text-analytics/v/5.2.0-beta.2) |
- |Python | [5.2.0b2](https://pypi.org/project/azure-ai-textanalytics/5.2.0b2/) |
+ |.NET | [5.2.0-beta.3](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0-beta.3) |
+ |Java | [5.2.0-beta.3](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.2.0-beta.3) |
+ |JavaScript | [6.0.0-beta.1](https://www.npmjs.com/package/@azure/ai-text-analytics/v/6.0.0-beta.1) |
+ |Python | [5.2.0b4](https://pypi.org/project/azure-ai-textanalytics/5.2.0b4/) |
4. After you've installed the client library, use the following samples on GitHub to start calling the API.
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/service-limits.md
Custom named entity recognition is only available in some Azure regions. To use
* West Europe * North Europe * UK south
-* Southeast Asia
* Australia East
-* Sweden Central
## API limits
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/call-api.md
First you will need to get your resource key and endpoint:
 |Language |Package version |
 |||
- |.NET | [5.2.0-beta.2](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0-beta.2) |
- |Java | [5.2.0-beta.2](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.2.0-beta.2) |
- |JavaScript | [5.2.0-beta.2](https://www.npmjs.com/package/@azure/ai-text-analytics/v/5.2.0-beta.2) |
- |Python | [5.2.0b2](https://pypi.org/project/azure-ai-textanalytics/5.2.0b2/) |
+ |.NET | [5.2.0-beta.3](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0-beta.3) |
+ |Java | [5.2.0-beta.3](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.2.0-beta.3) |
+ |JavaScript | [6.0.0-beta.1](https://www.npmjs.com/package/@azure/ai-text-analytics/v/6.0.0-beta.1) |
+ |Python | [5.2.0b4](https://pypi.org/project/azure-ai-textanalytics/5.2.0b4/) |
4. After you've installed the client library, use the following samples on GitHub to start calling the API.
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/service-limits.md
Custom text classification is only available in some Azure regions. To use custo
* West Europe
* North Europe
* UK south
* West Europe * North Europe * UK south
-* Southeast Asia
* Australia East
-* Sweden Central
+
## API limits
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/how-to/call-api.md
You can also use the client libraries provided by the Azure SDK to send requests
 |Language |Package version |
 |||
- |.NET | [5.2.0-beta.2](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0-beta.2) |
- |Python | [5.2.0b2](https://pypi.org/project/azure-ai-textanalytics/5.2.0b2/) |
+ |.NET | [1.0.0-beta.3 ](https://www.nuget.org/packages/Azure.AI.Language.Conversations/1.0.0-beta.3) |
+ |Python | [1.1.0b1](https://pypi.org/project/azure-ai-language-conversations/) |
4. After you've installed the client library, use the following samples on GitHub to start calling the API.
cognitive-services Adding Synonyms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/tutorials/adding-synonyms.md
As you can see, when `troubleshoot` was not added as a synonym, we got a low con
## Notes

* Synonyms can be added in any order. The ordering is not considered in any computational logic.
+* Synonyms can be added only to a project that has at least one question and answer pair.
* If synonym words overlap between two sets of alterations, it may produce unexpected results; using overlapping sets isn't recommended.
* Special characters are not allowed for synonyms. Hyphenated words like "COVID-19" are treated the same as "COVID 19", and "space" can be used as a term separator. The following is the list of special characters **not allowed**:
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/overview.md
Conversation summarization feature would simplify the text into the following:
|Example summary | Format | Conversation aspect | ||-|-|
-| Customer wants to use the wifi connection on their Smart Brew 300. They canΓÇÖt connect it using the Contoso Coffee app. | One or two sentences | issue |
-| Checked if the power light is blinking slowly. Tried to do a factory reset. | One or more sentences, generated from multiple lines of the transcript. | resolution |
+| Customer wants to use the wifi connection on their Smart Brew 300. But it didn't work. | One or two sentences | issue |
+| Checked if the power light is blinking slowly. Checked the Contoso coffee app. It had no prompt. Tried to do a factory reset. | One or more sentences, generated from multiple lines of the transcript. | resolution |
connectors Connectors Create Api Informix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-informix.md
Title: Connect to IBM Informix database
description: Automate tasks and workflows that manage resources stored in IBM Informix by using Azure Logic Apps
ms.suite: integration
Last updated 01/07/2020
connectors Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/managed.md
For more information, see these topics:
[youtube-icon]: ./media/apis-list/youtube.png <!--Managed connector doc links-->
-[apache-impala-doc]: /connectors/azureimpala/ "Connect to your Impala database to read data from tables"
+[apache-impala-doc]: /connectors/impala/ "Connect to your Impala database to read data from tables"
[azure-automation-doc]: /connectors/azureautomation/ "Create and manage automation jobs for your cloud and on-premises infrastructure" [azure-blob-storage-doc]: ./connectors-create-api-azureblobstorage.md "Manage files in your blob container with Azure blob storage connector" [azure-cosmos-db-doc]: ./connectors-create-api-cosmos-db.md "Connect to Azure Cosmos DB so that you can access and manage Azure Cosmos DB documents"
For more information, see these topics:
[x12-encode-doc]: ../logic-apps/logic-apps-enterprise-integration-X12-encode.md "Encode messages that use the X12 protocol" <!--Other doc links-->
-[gateway-doc]: ../logic-apps/logic-apps-gateway-connection.md "Connect to data sources on-premises from logic apps with on-premises data gateway"
+[gateway-doc]: ../logic-apps/logic-apps-gateway-connection.md "Connect to data sources on-premises from logic apps with on-premises data gateway"
container-apps Compare Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/compare-options.md
Title: 'Comparing Container Apps with other Azure container options'
description: Understand when to use Azure Container Apps and how it compares to other container options including Azure Container Instances, Azure App Service, Azure Functions, and Azure Kubernetes Service.
Last updated 11/03/2021
container-apps Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-portal.md
In this quickstart, you create a secure Container Apps environment and deploy yo
## Prerequisites
-An Azure account with an active subscription is required. If you don't already have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+An Azure account with an active subscription is required. If you don't already have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Also, make sure the `Microsoft.App` resource provider is registered in your subscription, as shown in the sketch below.
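A quick way to do that is with the Azure CLI, as in this sketch:

```azurecli
# Register the Microsoft.App resource provider on the active subscription.
az provider register --namespace Microsoft.App

# Optionally confirm the registration state afterwards.
az provider show --namespace Microsoft.App --query registrationState
```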
## Setup
container-apps Revisions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/revisions-manage.md
As you interact with this example, replace the placeholders surrounded by `<>` w
## Deactivate
-Deactivate revisions that are no longer in use with `az container app revision deactivate`. Deactivation stops all running replicas of a revision.
+Deactivate revisions that are no longer in use with `az containerapp revision deactivate`. Deactivation stops all running replicas of a revision.
# [Bash](#tab/bash)
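As an illustration, a deactivation call might look like the following sketch; the flags shown are assumptions that can vary across versions of the `containerapp` CLI extension, and all names are placeholders:

```azurecli
# Stop all running replicas of one revision of a container app (sketch).
az containerapp revision deactivate \
    --revision <revision-name> \
    --name <container-app-name> \
    --resource-group <resource-group-name>
```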
container-instances Container Instances Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-custom-dns.md
+
+ Title: Configure custom DNS settings for container group in Azure Container Instances
+description: Configure a public or private DNS configuration for a container group
+Last updated : 05/25/2022
+# Deploy a container group with custom DNS settings
+
+In [Azure Virtual Network](../virtual-network/virtual-networks-overview.md), you can deploy container groups using the `az container create` command in the Azure CLI. You can also provide advanced configuration settings to the `az container create` command using a YAML configuration file.
+
+This article demonstrates how to deploy a container group with custom DNS settings using a YAML configuration file.
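As a preview of that file, the DNS settings live under a `dnsConfig` node of the container group definition. The fragment below is a hedged sketch rather than the full file; the values are placeholders:

```yaml
# Sketch of the dnsConfig section of a container group YAML definition.
properties:
  dnsConfig:
    nameServers:
    - 10.0.0.10                        # private DNS server reachable from the subnet
    searchDomains: private.contoso.com # appended to unqualified lookups
    options: ndots:2                   # resolver options, resolv.conf style
```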
+
+For more information on deploying container groups to a virtual network, see the [Deploy in a virtual network article](container-instances-vnet.md).
+
+> [!IMPORTANT]
+> Previously, the process of deploying container groups on virtual networks used [network profiles](/azure/container-instances/container-instances-virtual-network-concepts#network-profile) for configuration. However, network profiles have been retired as of the `2021-07-01` API version. We recommend you use the latest API version, which relies on [subnet IDs](/azure/virtual-network/subnet-delegation-overview) instead.
+
+## Prerequisites
+
+* An **active Azure subscription**. If you don't have an active Azure subscription, create a [free account](https://azure.microsoft.com/free) before you begin.
+
+* **Azure CLI**. The command-line examples in this article use the [Azure CLI](/cli/azure/) and are formatted for the Bash shell. You can [install the Azure CLI](/cli/azure/install-azure-cli) locally or use the [Azure Cloud Shell][cloud-shell-bash].
+
+* A **resource group** to manage all the resources you use in this how-to guide. We use the example resource group name **ACIResourceGroup** throughout this article.
+
+ ```azurecli-interactive
+ az group create --name ACIResourceGroup --location westus
+ ```
+
+## Limitations
+
+For networking scenarios and limitations, see [Virtual network scenarios and resources for Azure Container Instances](container-instances-virtual-network-concepts.md).
+
+> [!IMPORTANT]
+> Container group deployment to a virtual network is available for Linux containers in most regions where Azure Container Instances is available. For details, see [Regions and resource availability](container-instances-region-availability.md).
+Examples in this article are formatted for the Bash shell. For PowerShell or command prompt, adjust the line continuation characters accordingly.
+
+## Create your virtual network
+
+You'll need a virtual network to deploy a container group with a custom DNS configuration. This virtual network will require a subnet with permissions to create Azure Container Instances resources and a linked private DNS zone to test name resolution.
+
+This guide uses a virtual network named `aci-vnet`, a subnet named `aci-subnet`, and a private DNS zone named `private.contoso.com`. We use **Azure Private DNS Zones**, which you can learn about in the [Private DNS Overview](../dns/private-dns-overview.md).
+
+If you have an existing virtual network that meets these criteria, you can skip to [Deploy your container group](#deploy-your-container-group).
+
+> [!TIP]
+> You can modify the following commands with your own information as needed.
+
+1. Create the virtual network using the [az network vnet create][az-network-vnet-create] command. Enter address prefixes in Classless Inter-Domain Routing (CIDR) format (for example: `10.0.0.0/16`).
+
+ ```azurecli
+ az network vnet create \
+ --name aci-vnet \
+ --resource-group ACIResourceGroup \
+ --location westus \
+ --address-prefix 10.0.0.0/16
+ ```
+
+1. Create the subnet using the [az network vnet subnet create][az-network-vnet-subnet-create] command. The following command creates a subnet in your virtual network with a delegation that permits it to create container groups. For more information about working with subnets, see [Add, change, or delete a virtual network subnet](../virtual-network/virtual-network-manage-subnet.md). For more information about subnet delegation, see the [Virtual Network Scenarios and Resources article section on delegated subnets](container-instances-virtual-network-concepts.md#subnet-delegated).
+
+ ```azurecli
+ az network vnet subnet create \
+ --name aci-subnet \
+ --resource-group ACIResourceGroup \
+ --vnet-name aci-vnet \
+ --address-prefixes 10.0.0.0/24 \
+ --delegations Microsoft.ContainerInstance/containerGroups
+ ```
+
+1. Record the subnet ID key-value pair from the output of this command. You'll use this in your YAML configuration file later. It will take the form `"id"`: `"/subscriptions/<subscription-ID>/resourceGroups/ACIResourceGroup/providers/Microsoft.Network/virtualNetworks/aci-vnet/subnets/aci-subnet"`.
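+
+ If you didn't record the ID, you can retrieve it again with `az network vnet subnet show` (a quick sketch):
+
+ ```azurecli
+ az network vnet subnet show \
+ --resource-group ACIResourceGroup \
+ --vnet-name aci-vnet \
+ --name aci-subnet \
+ --query id --output tsv
+ ```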
+
+1. Create the private DNS Zone using the [az network private-dns zone create][az-network-private-dns-zone-create] command.
+
+ ```azurecli
+ az network private-dns zone create -g ACIResourceGroup -n private.contoso.com
+ ```
+
+1. Link the DNS zone to your virtual network using the [az network private-dns link vnet create][az-network-private-dns-link-vnet-create] command. The zone link is only required to test name resolution in this guide. The `-e` flag controls automatic hostname registration, which isn't needed here, so set it to `false`.
+
+ ```azurecli
+ az network private-dns link vnet create \
+ -g ACIResourceGroup \
+ -n aciDNSLink \
+ -z private.contoso.com \
+ -v aci-vnet \
+ -e false
+ ```
+
+Once you've completed the steps above, you should see an output with a final key-value pair that reads `"virtualNetworkLinkState"`: `"Completed"`.
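+
+To re-check the link state later, a quick sketch using `az network private-dns link vnet show`:
+
+```azurecli
+az network private-dns link vnet show \
+  --resource-group ACIResourceGroup \
+  --name aciDNSLink \
+  --zone-name private.contoso.com \
+  --query virtualNetworkLinkState --output tsv
+```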
+
+## Deploy your container group
+
+> [!NOTE]
+> Custom DNS settings are not currently available in the Azure portal for container group deployments. They must be provided with a YAML file, a Resource Manager template, the [REST API](/rest/api/container-instances/containergroups/createorupdate), or an [Azure SDK](https://azure.microsoft.com/downloads/).
+
+Copy the following YAML into a new file named *custom-dns-deploy-aci.yaml*. Edit the following configurations with your values:
+
+* `dnsConfig`: DNS settings for your containers within your container group.
+ * `nameServers`: A list of name servers to be used for DNS lookups.
+ * `searchDomains`: DNS suffixes to be appended for DNS lookups.
+* `ipAddress`: The private IP address settings for the container group.
+ * `ports`: The ports to open, if any.
+ * `protocol`: The protocol (TCP or UDP) for the opened port.
+* `subnetIds`: Network settings for the subnet(s) in the virtual network.
+ * `id`: The full Resource Manager resource ID of the subnet, which you obtained earlier.
+
+> [!NOTE]
+> The DNS config fields aren't automatically queried at this time, so these fields must be explicitly filled out.
+
+```yaml
+apiVersion: '2021-07-01'
+location: westus
+name: pwsh-vnet-dns
+properties:
+ containers:
+ - name: pwsh-vnet-dns
+ properties:
+ command:
+ - /bin/bash
+ - -c
+ - echo hello; sleep 10000
+ environmentVariables: []
+ image: mcr.microsoft.com/powershell:latest
+ ports:
+ - port: 80
+ resources:
+ requests:
+ cpu: 1.0
+ memoryInGB: 2.0
+ dnsConfig:
+ nameServers:
+ - 10.0.0.10 # DNS Server 1
+ - 10.0.0.11 # DNS Server 2
+ searchDomains: contoso.com # DNS search suffix
+ ipAddress:
+ type: Private
+ ports:
+ - port: 80
+ subnetIds:
+ - id: /subscriptions/<subscription-ID>/resourceGroups/ACIResourceGroup/providers/Microsoft.Network/virtualNetworks/aci-vnet/subnets/aci-subnet
+ osType: Linux
+tags: null
+type: Microsoft.ContainerInstance/containerGroups
+```
+
+Deploy the container group with the [az container create][az-container-create] command, specifying the YAML file name with the `--file` parameter:
+
+```azurecli
+az container create --resource-group ACIResourceGroup \
+ --file custom-dns-deploy-aci.yaml
+```
+
+Once the deployment is complete, run the [az container show][az-container-show] command to display its status; sample output follows the command:
+
+```azurecli
+az container show --resource-group ACIResourceGroup --name pwsh-vnet-dns -o table
+```
+
+```console
+Name ResourceGroup Status Image IP:ports Network CPU/Memory OsType Location
+-------------  ----------------  --------  ----------------------------  -----------  ---------  --------------  --------  ----------
+pwsh-vnet-dns ACIResourceGroup Running mcr.microsoft.com/powershell 10.0.0.5:80 Private 1.0 core/2.0 gb Linux westus
+```
+
+After the status shows `Running`, execute the [az container exec][az-container-exec] command to obtain bash access within the container.
+
+```azurecli
+az container exec --resource-group ACIResourceGroup --name pwsh-vnet-dns --exec-command "/bin/bash"
+```
+
+Validate that DNS is working as expected from within your container. For example, read the `/etc/resolv.conf` file to ensure it's configured with the DNS settings provided in the YAML file.
+
+```console
+root@wk-caas-81d609b206c541589e11058a6d260b38-90b0aff460a737f346b3b0:/# cat /etc/resolv.conf
+
+nameserver 10.0.0.10
+nameserver 10.0.0.11
+search contoso.com
+```
+
+## Clean up resources
+
+### Delete container instances
+
+When you're finished with the container instance you created, delete it with the [az container delete][az-container-delete] command:
+
+```azurecli
+az container delete --resource-group ACIResourceGroup --name pwsh-vnet-dns -y
+```
+
+### Delete network resources
+
+If you don't plan to use this virtual network again, you can delete it with the [az network vnet delete][az-network-vnet-delete] command:
+
+```azurecli
+az network vnet delete --resource-group ACIResourceGroup --name aci-vnet
+```
+
+### Delete resource group
+
+If you don't plan to use this resource group outside of this guide, you can delete it with the [az group delete][az-group-delete] command:
+
+```azurecli
+az group delete --name ACIResourceGroup
+```
+
+Enter `y` when prompted if you're sure you wish to perform the operation.
+
+## Next steps
+
+See the Azure quickstart template [Create an Azure container group with VNet](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.containerinstance/aci-vnet), to deploy a container group within a virtual network.
+
+<!-- LINKS - Internal -->
+[az-network-vnet-create]: /cli/azure/network/vnet#az-network-vnet-create
+[az-network-vnet-subnet-create]: /cli/azure/network/vnet/subnet#az-network-vnet-subnet-create
+[az-network-private-dns-zone-create]: /cli/azure/network/private-dns/zone#az-network-private-dns-zone-create
+[az-network-private-dns-link-vnet-create]: /cli/azure/network/private-dns/link/vnet#az-network-private-dns-link-vnet-create
+[az-container-create]: /cli/azure/container#az-container-create
+[az-container-show]: /cli/azure/container#az-container-show
+[az-container-exec]: /cli/azure/container#az-container-exec
+[az-container-delete]: /cli/azure/container#az-container-delete
+[az-network-vnet-delete]: /cli/azure/network/vnet#az-network-vnet-delete
+[az-group-delete]: /cli/azure/group#az-group-delete
+[cloud-shell-bash]: ../cloud-shell/overview.md
container-registry Buffer Gate Public Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/buffer-gate-public-content.md
For details, see [Docker Hub authenticated pulls on App Service](https://azure.g
To begin managing copies of public images, you can create an Azure container registry if you don't already have one. Create a registry using the [Azure CLI](container-registry-get-started-azure-cli.md), [Azure portal](container-registry-get-started-portal.md), [Azure PowerShell](container-registry-get-started-powershell.md), or other tools.
+# [Azure CLI](#tab/azure-cli)
+ As a recommended one-time step, [import](container-registry-import-images.md) base images and other public content to your Azure container registry. The [az acr import](/cli/azure/acr#az-acr-import) command in the Azure CLI supports image import from public registries such as Docker Hub and Microsoft Container Registry and from other private container registries. `az acr import` doesn't require a local Docker installation. You can run it with a local installation of the Azure CLI or directly in Azure Cloud Shell. It supports images of any OS type, multi-architecture images, or OCI artifacts such as Helm charts. Depending on your organization's needs, you can import to a dedicated registry or a repository in a shared registry.
-# [Azure CLI](#tab/azure-cli)
-Example:
```azurecli-interactive
az acr import \
  --name myregistry \
  --source docker.io/library/hello-world:latest \
  --image hello-world:latest \
  --username <Docker Hub user name> \
  --password <Docker Hub token>
```
-# [PowerShell](#tab/azure-powershell)
-Example:
+# [Azure PowerShell](#tab/azure-powershell)
+
+As a recommended one-time step, [import](container-registry-import-images.md) base images and other public content to your Azure container registry. The [Import-AzContainerRegistryImage](/powershell/module/az.containerregistry/import-azcontainerregistryimage) command in Azure PowerShell supports image import from public registries such as Docker Hub and Microsoft Container Registry and from other private container registries.
+
+`Import-AzContainerRegistryImage` doesn't require a local Docker installation. You can run it with a local installation of Azure PowerShell or directly in Azure Cloud Shell. It supports images of any OS type, multi-architecture images, or OCI artifacts such as Helm charts.
+
+Depending on your organization's needs, you can import to a dedicated registry or a repository in a shared registry.
```azurepowershell-interactive
-Import-AzContainerRegistryImage
- -SourceImage library/busybox:latest
- -ResourceGroupName $resourceGroupName
- -RegistryName $RegistryName
- -SourceRegistryUri docker.io
- -TargetTag busybox:latest
+$Params = @{
+ SourceImage = 'library/busybox:latest'
+ ResourceGroupName = $resourceGroupName
+ RegistryName = $RegistryName
+ SourceRegistryUri = 'docker.io'
+ TargetTag = 'busybox:latest'
+}
+Import-AzContainerRegistryImage @Params
```
- Credentials are required if the source registry is not available publicly or the admin user is disabled.
+
+Credentials are required if the source registry is not available publicly or the admin user is disabled.
++ ## Update image references
container-registry Container Registry Tasks Base Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-base-images.md
See the following tutorials for scenarios to automate application image builds a
* [Automate container image builds when a base image is updated in the same registry](container-registry-tutorial-base-image-update.md)
-* [Automate container image builds when a base image is updated in a different registry](container-registry-tutorial-base-image-update.md)
+* [Automate container image builds when a base image is updated in a different registry](container-registry-tutorial-private-base-image-update.md)
<!-- LINKS - External -->
cosmos-db Migrate Data Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/migrate-data-databricks.md
Select **Install**, and then restart the cluster when installation is complete.
> [!NOTE] > Make sure that you restart the Databricks cluster after the Cassandra Connector library has been installed.
+> [!WARNING]
+> The samples shown in this article have been tested with Spark **version 3.0.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.0.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
+ ## Create Scala Notebook for migration Create a Scala Notebook in Databricks. Replace your source and target Cassandra configurations with the corresponding credentials, and source and target keyspaces and tables. Then run the following code:
cosmos-db Spark Create Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-create-operations.md
spark.conf.set("spark.cassandra.connection.keep_alive_ms", "600000000")
``` > [!NOTE]
-> If you are using Spark 3.0 or higher, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above). You will see that connection related properties are defined within the notebook above. Using the syntax below, connection properties can be defined in this manner without needing to be defined at the cluster level (Spark context initialization).
+> If you are using Spark 3.0, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above). You will see that connection related properties are defined within the notebook above. Using the syntax below, connection properties can be defined in this manner without needing to be defined at the cluster level (Spark context initialization).
+
+> [!WARNING]
+> The Spark 3 samples shown in this article have been tested with Spark **version 3.0.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.0.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
## Dataframe API
cosmos-db Spark Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-databricks.md
This article details how to work with Azure Cosmos DB Cassandra API from Spark o
* **Azure Cosmos DB Cassandra API-specific library:** - If you are using Spark 2.x, a custom connection factory is required to configure the retry policy from the Cassandra Spark connector to Azure Cosmos DB Cassandra API. Add the `com.microsoft.azure.cosmosdb:azure-cosmos-cassandra-spark-helper:1.2.0`[maven coordinates](https://search.maven.org/artifact/com.microsoft.azure.cosmosdb/azure-cosmos-cassandra-spark-helper/1.2.0/jar) to attach the library to the cluster. > [!NOTE]
-> If you are using Spark 3.0 or higher, you do not need to install the Cosmos DB Cassandra API-specific library mentioned above.
+> If you are using Spark 3.0, you do not need to install the Cosmos DB Cassandra API-specific library mentioned above.
+
+> [!WARNING]
+> The Spark 3 samples shown in this article have been tested with Spark **version 3.0.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.0.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
## Sample notebooks
cosmos-db Spark Ddl Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-ddl-operations.md
spark.conf.set("spark.cassandra.connection.keep_alive_ms", "600000000")
``` > [!NOTE]
-> If you are using Spark 3.0 or higher, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above).
+> If you are using Spark 3.0, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above).
+
+> [!WARNING]
+> The Spark 3 samples shown in this article have been tested with Spark **version 3.0.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.0.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
## Keyspace DDL operations
cosmos-db Spark Delete Operation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-delete-operation.md
spark.conf.set("spark.cassandra.connection.keep_alive_ms", "600000000")
``` > [!NOTE]
-> If you are using Spark 3.0 or higher, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above). You will see that connection related properties are defined within the notebook above. Using the syntax below, connection properties can be defined in this manner without needing to be defined at the cluster level (Spark context initialization). However, when using operations that require spark context (for example, `CassandraConnector(sc)` for `delete` as shown below), connection properties need to be defined at the cluster level.
+> If you are using Spark 3.0, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above). You will see that connection related properties are defined within the notebook above. Using the syntax below, connection properties can be defined in this manner without needing to be defined at the cluster level (Spark context initialization). However, when using operations that require spark context (for example, `CassandraConnector(sc)` for `delete` as shown below), connection properties need to be defined at the cluster level.
+
+> [!WARNING]
+> The Spark 3 samples shown in this article have been tested with Spark **version 3.0.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.0.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
## Sample data generator We will use this code fragment to generate sample data:
cosmos-db Spark Read Operation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-read-operation.md
spark.conf.set("spark.cassandra.connection.keep_alive_ms", "600000000")
``` > [!NOTE]
-> If you are using Spark 3.0 or higher, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector(see above). You will see that connection related properties are defined within the notebook above. Using the syntax below, connection properties can be defined in this manner without needing to be defined at the cluster level (Spark context initialization).
+> If you are using Spark 3.0, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector(see above). You will see that connection related properties are defined within the notebook above. Using the syntax below, connection properties can be defined in this manner without needing to be defined at the cluster level (Spark context initialization).
+
+> [!WARNING]
+> The Spark 3 samples shown in this article have been tested with Spark **version 3.0.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.0.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
## Dataframe API
cosmos-db Spark Table Copy Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-table-copy-operations.md
spark.conf.set("spark.cassandra.connection.keep_alive_ms", "600000000")
> [!NOTE] > If you are using Spark 3.0 or higher, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above). You will see that connection related properties are defined within the notebook above. Using the syntax below, connection properties can be defined in this manner without needing to be defined at the cluster level (Spark context initialization).
+> [!WARNING]
+> The Spark 3 samples shown in this article have been tested with Spark **version 3.0.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.0.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
+ ## Insert sample data ```scala val booksDF = Seq(
cosmos-db Spark Upsert Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-upsert-operations.md
spark.conf.set("spark.cassandra.connection.keep_alive_ms", "600000000")
``` > [!NOTE]
-> If you are using Spark 3.0 or higher, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above). You will see that connection related properties are defined within the notebook above. Using the syntax below, connection properties can be defined in this manner without needing to be defined at the cluster level (Spark context initialization). However, when using operations that require spark context (for example, `CassandraConnector(sc)` for `update` as shown below), connection properties need to be defined at the cluster level.
+> If you are using Spark 3.0, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above). You will see that connection related properties are defined within the notebook above. Using the syntax below, connection properties can be defined in this manner without needing to be defined at the cluster level (Spark context initialization). However, when using operations that require spark context (for example, `CassandraConnector(sc)` for `update` as shown below), connection properties need to be defined at the cluster level.
+
+> [!WARNING]
+> The Spark 3 samples shown in this article have been tested with Spark **version 3.0.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.0.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
## Dataframe API
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md
Depending on the current RU/s provisioned and resource settings, each resource c
| Maximum RU/s per container | 5,000 | | Maximum storage across all items per (logical) partition | 20 GB | | Maximum number of distinct (logical) partition keys | Unlimited |
-| Maximum storage per container (SQL API, Mongo API, Table API, Gremlin API)| 50 GB |
+| Maximum storage per container (SQL API, Mongo API, Table API, Gremlin API)| 50 GB<sup>1</sup> |
| Maximum storage per container (Cassandra API)| 30 GB |
+<sup>1</sup> Serverless containers up to 1 TB are currently in preview with Azure Cosmos DB. To try the new feature, register the *"Azure Cosmos DB Serverless 1 TB Container Preview"* [preview feature in your Azure subscription](../azure-resource-manager/management/preview-features.md).
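+
+If you script your environment, one way to register a preview feature is with the Azure CLI; a sketch, where the exact feature name under the `Microsoft.DocumentDB` resource provider is a placeholder to confirm in the portal:
+
+```azurecli
+# Register a preview feature for the current subscription (feature name is a placeholder).
+az feature register --namespace Microsoft.DocumentDB --name <preview-feature-name>
+
+# Check the registration state.
+az feature show --namespace Microsoft.DocumentDB --name <preview-feature-name> --query properties.state
+```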
## Control plane operations
An Azure Cosmos item can represent either a document in a collection, a row in a
| Maximum level of nesting for embedded objects / arrays | 128 | | Maximum TTL value |2147483647 |
-<sup>1</sup> Large document sizes up to 16 Mb are currently in preview with Azure Cosmos DB API for MongoDB only. Sign-up for the feature ΓÇ£Azure Cosmos DB API For MongoDB 16MB Document SupportΓÇ¥ from [Preview Features the Azure portal](./access-previews.md), to try the new feature.
+<sup>1</sup> Large document sizes up to 16 MB are currently in preview with Azure Cosmos DB API for MongoDB only. To try the new feature, sign up for the "Azure Cosmos DB API For MongoDB 16 MB Document Support" feature from [Preview Features in the Azure portal](./access-previews.md).
There are no restrictions on the item payloads (like number of properties and nesting depth), except for the length restrictions on partition key and ID values, and the overall size restriction of 2 MB. You may have to configure indexing policy for containers with large or complex item structures to reduce RU consumption. See [Modeling items in Cosmos DB](how-to-model-partition-example.md) for a real-world example, and patterns to manage large items.
See the [Autoscale](provision-throughput-autoscale.md#autoscale-limits) article
| Current RU/s the system is scaled to | `0.1*Tmax <= T <= Tmax`, based on usage| | Minimum billable RU/s per hour| `0.1 * Tmax` <br></br>Billing is done on a per-hour basis, where you're billed for the highest RU/s the system scaled to in the hour, or `0.1*Tmax`, whichever is higher. | | Minimum autoscale max RU/s for a container | `MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 100)` rounded to nearest 1000 RU/s |
-| Minimum autoscale max RU/s for a database | `MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 100, 1000 + (MAX(Container count - 25, 0) * 1000))`, rounded to nearest 1000 RU/s. <br></br>Note if your database has more than 25 containers, the system increments the minimum autoscale max RU/s by 1000 RU/s per additional container. For example, if you have 30 containers, the lowest autoscale maximum RU/s you can set is 6000 RU/s (scales between 600 - 6000 RU/s).
+| Minimum autoscale max RU/s for a database | `MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 100, 1000 + (MAX(Container count - 25, 0) * 1000))`, rounded to nearest 1000 RU/s. <br></br>Note if your database has more than 25 containers, the system increments the minimum autoscale max RU/s by 1000 RU/s per extra container. For example, if you have 30 containers, the lowest autoscale maximum RU/s you can set is 6000 RU/s (scales between 600 - 6000 RU/s).
## SQL query limits
Get started with Azure Cosmos DB with one of our quickstarts:
* [Get started with Azure Cosmos DB Gremlin API](create-graph-dotnet.md) * [Get started with Azure Cosmos DB Table API](table/create-table-dotnet.md) * Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
+ * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) > [!div class="nextstepaction"]
cosmos-db Merge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/merge.md
Support for these connectors is planned for the future.
## Next steps
-* Learn more about [using Azure CLI with Azure Cosmos DB.](/cli/azure/azure-cli-reference-for-cosmos-db.md)
+* Learn more about [using Azure CLI with Azure Cosmos DB.](/cli/azure/azure-cli-reference-for-cosmos-db)
* Learn more about [using Azure PowerShell with Azure Cosmos DB.](/powershell/module/az.cosmosdb/) * Learn more about [partitioning in Azure Cosmos DB.](partitioning-overview.md)
cosmos-db Sql Query Bitwise Operators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-bitwise-operators.md
+
+ Title: Bitwise operators in Azure Cosmos DB
+description: Learn about SQL bitwise operators supported by Azure Cosmos DB.
++++++ Last updated : 05/31/2022++
+# Bitwise operators in Azure Cosmos DB
++
+This article details the bitwise operators supported by Azure Cosmos DB. Bitwise operators are useful for constructing JSON result-sets on the fly. The bitwise operators work similarly to how they work in higher-level programming languages like C# and JavaScript. For examples of C# bitwise operators, see [Bitwise and shift operators](/dotnet/csharp/language-reference/operators/bitwise-and-shift-operators).
+
+## Understanding bitwise operations
+
+The following table shows the explanations and examples of bitwise operations in the SQL API between two values.
+
+| Operation | Operator | Description |
+| | | |
+| **Left shift** | ``<<`` | Shift left-hand value *left* by the specified number of bits. |
+| **Right shift** | ``>>`` | Shift left-hand value *right* by the specified number of bits. |
+| **Zero-fill (unsigned) right shift** | ``>>>`` | Shift left-hand value *right* by the specified number of bits, filling the vacated left-most bits with zeros rather than the sign bit. |
+| **AND** | ``&`` | Computes bitwise logical AND. |
+| **OR** | <code>&#124;</code> | Computes bitwise logical OR. |
+| **XOR** | ``^`` | Computes bitwise logical exclusive OR. |
++
+For example, the following query uses each of the bitwise operators and renders a result.
+
+```sql
+SELECT
+ (100 >> 2) AS rightShift,
+ (100 << 2) AS leftShift,
+ (100 >>> 0) AS zeroFillRightShift,
+ (100 & 1000) AS logicalAnd,
+ (100 | 1000) AS logicalOr,
+ (100 ^ 1000) AS logicalExclusiveOr
+```
+
+The example query's results as a JSON object.
+
+```json
+[
+ {
+ "rightShift": 25,
+ "leftShift": 400,
+ "zeroFillRightShift": 100,
+ "logicalAnd": 96,
+ "logicalOr": 1004,
+ "logicalExclusiveOr": 908
+ }
+]
+```
+
+> [!IMPORTANT]
+> In this example, the values on the left and right sides of the operator are 32-bit integer values.
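+
+Bitwise operators can also be used in query filters over numeric document properties. A minimal sketch, assuming hypothetical documents that store a numeric `flags` property used as a bit mask; the query matches documents whose third-lowest bit (value 4) is set:
+
+```sql
+SELECT VALUE c.id
+FROM c
+WHERE (c.flags & 4) = 4
+```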
+
+## Next steps
+
+- [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmos-dotnet-v3)
+- [Keywords](sql-query-keywords.md)
+- [SELECT clause](sql-query-select.md)
cosmos-db Sql Query Date Time Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-date-time-functions.md
or numeric ticks whose value is the number of 100 nanosecond ticks which have el
The following functions allow you to easily manipulate DateTime, timestamp, and tick values: * [DateTimeAdd](sql-query-datetimeadd.md)
+* [DateTimeBin](sql-query-datetimebin.md)
* [DateTimeDiff](sql-query-datetimediff.md) * [DateTimeFromParts](sql-query-datetimefromparts.md) * [DateTimePart](sql-query-datetimepart.md)
cosmos-db Sql Query Datetimebin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-datetimebin.md
+
+ Title: DateTimeBin in Azure Cosmos DB query language
+description: Learn about SQL system function DateTimeBin in Azure Cosmos DB.
++++ Last updated : 05/27/2022 ++
+
+
+# DateTimeBin (Azure Cosmos DB)
+ [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
+
+Returns the nearest multiple of *BinSize* below the specified DateTime given the unit of measurement *DateTimePart* and start value of *BinAtDateTime*.
++
+## Syntax
+
+```sql
+DateTimeBin (<DateTime> , <DateTimePart> [,BinSize] [,BinAtDateTime])
+```
++
+## Arguments
+
+*DateTime*
+ The date and time string to be binned: a UTC ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
+
+|Format|Description|
+|-|-|
+|YYYY|four-digit year|
+|MM|two-digit month (01 = January, etc.)|
+|DD|two-digit day of month (01 through 31)|
+|T|signifier for beginning of time elements|
+|hh|two-digit hour (00 through 23)|
+|mm|two-digit minutes (00 through 59)|
+|ss|two-digit seconds (00 through 59)|
+|.fffffff|seven-digit fractional seconds|
+|Z|UTC (Coordinated Universal Time) designator|
+
+For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
+
+*DateTimePart*
+ The date time part specifies the units for BinSize. DateTimeBin is Undefined for DayOfWeek, Year, and Month. The finest granularity for binning by Nanosecond is 100 nanosecond ticks; if Nanosecond is specified with a BinSize less than 100, the result is Undefined. This table lists all valid DateTimePart arguments for DateTimeBin:
+
+| DateTimePart | abbreviations |
+| | -- |
+| Day | "day", "dd", "d" |
+| Hour | "hour", "hh" |
+| Minute | "minute", "mi", "n" |
+| Second | "second", "ss", "s" |
+| Millisecond | "millisecond", "ms" |
+| Microsecond | "microsecond", "mcs" |
+| Nanosecond | "nanosecond", "ns" |
+
+*BinSize* (optional)
+ Numeric value that specifies the size of bins. If not specified, the default value is one.
++
+*BinAtDateTime* (optional)
+ A UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` that specifies the start date to bin from. Default value is the Unix epoch, `1970-01-01T00:00:00.000000Z`.
++
+## Return types
+
+Returns the result of binning the *DateTime* value.
++
+## Remarks
+
+DateTimeBin will return `Undefined` for the following reasons:
+- The DateTimePart value specified is invalid
+- The BinSize value is zero or negative
+- The DateTime or BinAtDateTime isn't a valid ISO 8601 DateTime or precedes the year 1601 (the Windows epoch)
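+
+As a quick sketch of this behavior, binning by an unsupported *DateTimePart* such as Month returns `Undefined`, so the projected field is omitted and the query returns an empty document:
+
+```sql
+SELECT DateTimeBin('2021-06-28T17:24:29.2991234Z', 'month') AS binByMonth
+```
+
+```json
+[
+    {}
+]
+```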
++
+## Examples
+
+The following example bins `2021-06-28T17:24:29.2991234Z` by one hour:
+
+```sql
+SELECT DateTimeBin('2021-06-28T17:24:29.2991234Z', 'hh') AS BinByHour
+```
+
+```json
+[
+    {
+        "BinByHour": "2021-06-28T17:00:00.0000000Z"
+    }
+]
+```
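+
+Binning also works at sub-hour granularity. A minimal sketch binning the same timestamp into 15-minute buckets; with the default Unix-epoch origin, bins start at :00, :15, :30, and :45, so the expected result is `2021-06-28T17:15:00.0000000Z`:
+
+```sql
+SELECT DateTimeBin('2021-06-28T17:24:29.2991234Z', 'minute', 15) AS BinByFifteenMinutes
+```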
+
+The following example bins `2021-06-28T17:24:29.2991234Z` given different *BinAtDateTime* values:
+
+```sql
+SELECT
+DateTimeBin('2021-06-28T17:24:29.2991234Z', 'day', 5) AS One_BinByFiveDaysUnixEpochImplicit,
+DateTimeBin('2021-06-28T17:24:29.2991234Z', 'day', 5, '1970-01-01T00:00:00.0000000Z') AS Two_BinByFiveDaysUnixEpochExplicit,
+DateTimeBin('2021-06-28T17:24:29.2991234Z', 'day', 5, '1601-01-01T00:00:00.0000000Z') AS Three_BinByFiveDaysFromWindowsEpoch,
+DateTimeBin('2021-06-28T17:24:29.2991234Z', 'day', 5, '2021-01-01T00:00:00.0000000Z') AS Four_BinByFiveDaysFromYearStart,
+DateTimeBin('2021-06-28T17:24:29.2991234Z', 'day', 5, '0001-01-01T00:00:00.0000000Z') AS Five_BinByFiveDaysFromUndefinedYear
+```
+
+```json
+[
+    {
+        "One_BinByFiveDaysUnixEpochImplicit": "2021-06-27T00:00:00.0000000Z",
+        "Two_BinByFiveDaysUnixEpochExplicit": "2021-06-27T00:00:00.0000000Z",
+        "Three_BinByFiveDaysFromWindowsEpoch": "2021-06-28T00:00:00.0000000Z",
+        "Four_BinByFiveDaysFromYearStart": "2021-06-25T00:00:00.0000000Z"
+    }
+]
+```
+
+The fifth projection, `Five_BinByFiveDaysFromUndefinedYear`, is missing from the result because its *BinAtDateTime* value precedes the year 1601, so `DateTimeBin` returns `Undefined` and the field is omitted.
+
+## Next steps
+
+- [Date and time functions Azure Cosmos DB](sql-query-date-time-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
cosmos-db Create Table Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/create-table-dotnet.md
Title: 'Quickstart: Table API with .NET - Azure Cosmos DB' description: This quickstart shows how to access the Azure Cosmos DB Table API from a .NET application using the Azure.Data.Tables SDK-+ ms.devlang: csharp Last updated 09/26/2021-+
cost-management-billing Direct Ea Azure Usage Charges Invoices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-azure-usage-charges-invoices.md
Enterprise administrators can also view an overall summary of the charges for th
## Download or view your Azure billing invoice
-You can download your invoice from the [Azure portal](https://portal.azure.com) or have it sent in email. Invoices are sent to whoever is set up to receive invoices for the enrollment.
+An EA administrator can download the invoice from the [Azure portal](https://portal.azure.com) or have it sent in email. Invoices are sent to whoever is set up to receive invoices for the enrollment.
-Only an Enterprise Administrator has permission to view and get the billing invoice. To learn more about getting access to billing information, see [Manage access to Azure billing using roles](manage-billing-access.md).
+Only an Enterprise Administrator has permission to view and download the billing invoice. To learn more about getting access to billing information, see [Manage access to Azure billing using roles](manage-billing-access.md).
You receive an Azure invoice when any of the following events occur during your billing cycle:
cost-management-billing Ea Portal Enrollment Invoices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-enrollment-invoices.md
Title: Azure Enterprise enrollment invoices
description: This article explains how to manage and act on your Azure Enterprise invoice. Previously updated : 12/03/2021 Last updated : 05/31/2022
If an Amendment M503 is signed, you can move any agreement from any frequency to
### Request an invoice copy
-To request a copy of your invoice, contact your partner.
+If you're an indirect enterprise agreement customer, contact your partner to request a copy of your invoice.
## Credits and adjustments
cost-management-billing Reservation Renew https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-renew.md
The following conditions are required to renew a reservation:
## Default renewal settings
-By default, the renewal inherits all properties from the expiring reservation. A reservation renewal purchase has the same SKU, region, scope, billing subscription, term, and quantity.
+By default, the renewal inherits all properties except the automatic renewal setting from the expiring reservation. A reservation renewal purchase has the same SKU, region, scope, billing subscription, term, and quantity.
However, you can update the renewal reservation purchase quantity to optimize your savings.
cost-management-billing Download Azure Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/download-azure-invoice.md
# View and download your Microsoft Azure invoice
-You can download your invoice in the [Azure portal](https://portal.azure.com/) or have it sent in email. If you're an Azure customer with an Enterprise Agreement (EA customer), you can't download your organization's invoice. Instead, invoices are sent to the person set to receive invoices for the enrollment.
+You can download your invoice in the [Azure portal](https://portal.azure.com/) or have it sent in email. If you're an Azure customer with an Enterprise Agreement (EA customer), only an EA administrator can download and view your organization's invoice. Invoices are sent to the person set to receive invoices for the enrollment.
## When invoices are generated
data-factory Connector Google Adwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-adwords.md
Previously updated : 02/24/2022 Last updated : 05/30/2022 # Copy data from Google AdWords using Azure Data Factory or Synapse Analytics
The following properties are supported for Google AdWords linked service:
| clientId | The client ID of the Google application used to acquire the refresh token. You can choose to mark this field as a SecureString to store it securely, or store password in Azure Key Vault and let the copy activity pull from there when performing data copy - learn more from [Store credentials in Key Vault](store-credentials-in-key-vault.md). | No | | clientSecret | The client secret of the google application used to acquire the refresh token. You can choose to mark this field as a SecureString to store it securely, or store password in Azure Key Vault and let the copy activity pull from there when performing data copy - learn more from [Store credentials in Key Vault](store-credentials-in-key-vault.md). | No | | email | The service account email ID that is used for ServiceAuthentication and can only be used on self-hosted IR. | No |
-| keyFilePath | The full path to the .p12 key file that is used to authenticate the service account email address and can only be used on self-hosted IR. | No |
+| keyFilePath | The full path to the `.p12` or `.json` key file that is used to authenticate the service account email address and can only be used on self-hosted IR. | No |
| trustedCertPath | The full path of the .pem file containing trusted CA certificates for verifying the server when connecting over TLS. This property can only be set when using TLS on self-hosted IR. The default value is the cacerts.pem file installed with the IR. | No | | useSystemTrustStore | Specifies whether to use a CA certificate from the system trust store or from a specified PEM file. The default value is false. | No |
data-factory Connector Google Bigquery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-bigquery.md
Previously updated : 04/26/2022 Last updated : 05/30/2022 # Copy data from Google BigQuery using Azure Data Factory or Synapse Analytics
Set "authenticationType" property to **ServiceAuthentication**, and specify the
| Property | Description | Required | |: |: |: | | email | The service account email ID that is used for ServiceAuthentication. It can be used only on Self-hosted Integration Runtime. | No |
-| keyFilePath | The full path to the .p12 key file that is used to authenticate the service account email address. | No |
+| keyFilePath | The full path to the `.p12` or `.json` key file that is used to authenticate the service account email address. | No |
| trustedCertPath | The full path of the .pem file that contains trusted CA certificates used to verify the server when you connect over TLS. This property can be set only when you use TLS on Self-hosted Integration Runtime. The default value is the cacerts.pem file installed with the integration runtime. | No | | useSystemTrustStore | Specifies whether to use a CA certificate from the system trust store or from a specified .pem file. The default value is **false**. | No |
Set "authenticationType" property to **ServiceAuthentication**, and specify the
"requestGoogleDriveScope" : true, "authenticationType" : "ServiceAuthentication", "email": "<email>",
- "keyFilePath": "<.p12 key path on the IR machine>"
+ "keyFilePath": "<.p12 or .json key path on the IR machine>"
}, "connectVia": { "referenceName": "<name of Self-hosted Integration Runtime>",
data-factory Data Flow Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-source.md
Previously updated : 05/27/2022 Last updated : 05/31/2022 # Source transformation in mapping data flow
If your text file has no defined schema, select **Detect data type** so that the
**Reset schema** resets the projection to what is defined in the referenced dataset.
-You can modify the column data types in a downstream derived-column transformation. Use a select transformation to modify the column names.
+**Overwrite schema** allows you to modify the projected data types here in the source, overwriting the schema-defined data types. You can alternatively modify the column data types in a downstream derived-column transformation. Use a select transformation to modify the column names.
### Import schema
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
Last updated 05/19/2022
# What is Microsoft Defender for Cloud?
-Microsoft Defender for Cloud is a Cloud Workload Protection Platform (CWPP) that also delivers Cloud Security Posture Management (CSPM) for all of your Azure, on-premises, and multicloud (Amazon AWS and Google GCP) resources.
-- [**Defender for Cloud recommendations**](security-policy-concept.md) identify cloud workloads that require security actions and provide you with steps to protect your workloads from security risks.
-- [**Defender for Cloud secure score**](secure-score-security-controls.md) gives you a clear view of your security posture based on the implementation of the security recommendations so you can track new security opportunities and precisely report on the progress of your security efforts.
-- [**Defender for Cloud alerts**](alerts-overview.md) warn you about security events in your workloads in real-time, including the indicators that led to the event.
-Defender for Cloud fills three vital needs as you manage the security of your resources and workloads in the cloud and on-premises:
+Microsoft Defender for Cloud is a Cloud Security Posture Management (CSPM) and Cloud Workload Protection Platform (CWPP) for all of your Azure, on-premises, and multi-cloud (Amazon AWS and Google GCP) resources. Defender for Cloud fills three vital needs as you manage the security of your resources and workloads in the cloud and on-premises:
:::image type="content" source="media/defender-for-cloud-introduction/defender-for-cloud-synopsis.png" alt-text="Understanding the core functionality of Microsoft Defender for Cloud.":::
-|Security requirement | Defender for Cloud solution|
-|||
-|**Continuous assessment** - Understand your current security posture. | **Secure score** - A single score so that you can tell, at a glance, your current security situation: the higher the score, the lower the identified risk level. |
-|**Secure** - Harden all connected resources and services. | **Security recommendations** - Customized and prioritized hardening tasks to improve your posture. You implement a recommendation by following the detailed remediation steps provided in the recommendation. For many recommendations, Defender for Cloud offers a "Fix" button for automated implementation!|
-|**Defend** - Detect and resolve threats to those resources and services. | **Security alerts** - With the enhanced security features enabled, Defender for Cloud detects threats to your resources and workloads. These alerts appear in the Azure portal and Defender for Cloud can also send them by email to the relevant personnel in your organization. Alerts can also be streamed to SIEM, SOAR, or IT Service Management solutions as required. |
+- [**Defender for Cloud secure score**](secure-score-security-controls.md) **continually assesses** your security posture so you can track new security opportunities and precisely report on the progress of your security efforts.
+- [**Defender for Cloud recommendations**](security-policy-concept.md) **secures** your workloads with step-by-step actions that protect your workloads from known security risks.
+- [**Defender for Cloud alerts**](alerts-overview.md) **defends** your workloads in real-time so you can react immediately and prevent security events from developing.
+
+For a step-by-step walkthrough of Defender for Cloud, check out this [interactive tutorial](https://mslearn.cloudguides.com/en-us/guides/Protect%20your%20multi-cloud%20environment%20with%20Microsoft%20Defender%20for%20Cloud).
-## Posture management and workload protection
+## Protect your resources and track your security progress
-Microsoft Defender for Cloud's features covers the two broad pillars of cloud security: cloud security posture management and cloud workload protection.
+Microsoft Defender for Cloud's features cover the two broad pillars of cloud security: Cloud Workload Protection Platform (CWPP) and Cloud Security Posture Management (CSPM).
-### Cloud security posture management (CSPM)
+### CSPM - Remediate security issues and watch your security posture improve
In Defender for Cloud, the posture management features provide:
-- **Visibility** - to help you understand your current security situation
- **Hardening guidance** - to help you efficiently and effectively improve your security
+- **Visibility** - to help you understand your current security situation
-The central feature in Defender for Cloud that enables you to achieve those goals is **secure score**. Defender for Cloud continually assesses your resources, subscriptions, and organization for security issues. It then aggregates all the findings into a single score so that you can tell, at a glance, your current security situation: the higher the score, the lower the identified risk level.
+Defender for Cloud continually assesses your resources, subscriptions, and organization for security issues and shows your security posture in **secure score**, an aggregated score of the security findings that tells you, at a glance, your current security situation: the higher the score, the lower the identified risk level.
-When you open Defender for Cloud for the first time, it will meet the visibility and strengthening goals as follows:
+The first time you open Defender for Cloud, it:
-1. **Generate a secure score** for your subscriptions based on an assessment of your connected resources compared with the guidance in [Azure Security Benchmark](/security/benchmark/azure/overview). Use the score to understand your security posture, and the compliance dashboard to review your compliance with the built-in benchmark. When you've enabled the enhanced security features, you can customize the standards used to assess your compliance, and add other regulations (such as NIST and Azure CIS) or organization-specific security requirements. You can also apply recommendations, and score based on the AWS Foundational Security Best practices standards.
+- **Generates a secure score** for your subscriptions based on an assessment of your connected resources compared with the guidance in [Azure Security Benchmark](/security/benchmark/azure/overview). Use the score to understand your security posture, and the compliance dashboard to review your compliance with the built-in benchmark. When you've enabled the enhanced security features, you can customize the standards used to assess your compliance, and add other regulations (such as NIST and Azure CIS) or organization-specific security requirements. You can also apply recommendations, and score based on the AWS Foundational Security Best practices standards.
-1. **Provide hardening recommendations** based on any identified security misconfigurations and weaknesses. Use these security recommendations to strengthen the security posture of your organization's Azure, hybrid, and multicloud resources.
+- **Provides hardening recommendations** based on any identified security misconfigurations and weaknesses. Use these security recommendations to strengthen the security posture of your organization's Azure, hybrid, and multi-cloud resources.
[Learn more about secure score](secure-score-security-controls.md).
-### Cloud workload protection (CWP)
+### CWP - Identify unique workload security requirements
-Defender for Cloud offers security alerts that are powered by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684). It also includes a range of advanced, intelligent, protections for your workloads. The workload protections are provided through Microsoft Defender plans specific to the types of resources in your subscriptions. For example, you can enable **Microsoft Defender for Storage** to get alerted about suspicious activities related to your Azure Storage accounts.
+Defender for Cloud offers security alerts that are powered by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684). It also includes a range of advanced, intelligent protections for your workloads. The workload protections are provided through Microsoft Defender plans specific to the types of resources in your subscriptions. For example, you can enable **Microsoft Defender for Storage** to get alerted about suspicious activities related to your storage resources.
-## Azure, hybrid, and multicloud protections
+## Protect all of your resources under one roof
-Because Defender for Cloud is an Azure-native service, many Azure services are monitored and protected without needing any deployment.
+Because Defender for Cloud is an Azure-native service, many Azure services are monitored and protected without needing any deployment, but you can also add resources that are on-premises or in other public clouds.
When necessary, Defender for Cloud can automatically deploy a Log Analytics agent to gather security-related data. For Azure machines, deployment is handled directly. For hybrid and multicloud environments, Microsoft Defender plans are extended to non-Azure machines with the help of [Azure Arc](https://azure.microsoft.com/services/azure-arc/). CSPM features are extended to multicloud machines without the need for any agents (see [Defend resources running on other clouds](#defend-resources-running-on-other-clouds)).
-### Azure-native protections
+### Defend your Azure-native resources
Defender for Cloud helps you detect threats across:
Defender for Cloud helps you detect threats across:
- **Networks** - Defender for Cloud helps you limit exposure to brute force attacks. By reducing access to virtual machine ports, using the just-in-time VM access, you can harden your network by preventing unnecessary access. You can set secure access policies on selected ports, for only authorized users, allowed source IP address ranges or IP addresses, and for a limited amount of time.
-### Defend your hybrid resources
+### Defend your on-premises resources
In addition to defending your Azure environment, you can add Defender for Cloud capabilities to your hybrid cloud environment to protect your non-Azure servers. To help you focus on what matters most, you'll get customized threat intelligence and prioritized alerts according to your specific environment.
For example, if you've [connected an Amazon Web Services (AWS) account](quicksta
Learn more about connecting your [AWS](quickstart-onboard-aws.md) and [GCP](quickstart-onboard-gcp.md) accounts to Microsoft Defender for Cloud.
-## Vulnerability assessment and management
+## Close vulnerabilities before they get exploited
:::image type="content" source="media/defender-for-cloud-introduction/defender-for-cloud-expanded-assess.png" alt-text="Focus on the assessment features of Microsoft Defender for Cloud.":::
Learn more on the following pages:
- [Defender for Cloud's integrated Qualys scanner for Azure and hybrid machines](deploy-vulnerability-assessment-vm.md)
- [Identify vulnerabilities in images in Azure container registries](defender-for-containers-usage.md#identify-vulnerabilities-in-images-in-other-container-registries)
-## Optimize and improve security by configuring recommended controls
+## Enforce your security policy from the top down
:::image type="content" source="media/defender-for-cloud-introduction/defender-for-cloud-expanded-secure.png" alt-text="Focus on the 'secure' features of Microsoft Defender for Cloud.":::
To help you understand how important each recommendation is to your overall secu
:::image type="content" source="./media/defender-for-cloud-introduction/sc-secure-score.png" alt-text="Defender for Cloud secure score.":::
-## Defend against threats
+## Extend Defender for Cloud with Defender plans and external monitoring
:::image type="content" source="media/defender-for-cloud-introduction/defender-for-cloud-expanded-defend.png" alt-text="Focus on the 'defend'' features of Microsoft Defender for Cloud.":::
-Defender for Cloud provides:
--- **Security alerts** - When Defender for Cloud detects a threat in any area of your environment, it generates a security alert. These alerts describe details of the affected resources, suggested remediation steps, and in some cases an option to trigger a logic app in response. Whether an alert is generated by Defender for Cloud, or received by Defender for Cloud from an integrated security product, you can export it. To export your alerts to Microsoft Sentinel, any third-party SIEM, or any other external tool, follow the instructions in [Stream alerts to a SIEM, SOAR, or IT Service Management solution](export-to-siem.md). Defender for Cloud's threat protection includes fusion kill-chain analysis, which automatically correlates alerts in your environment based on cyber kill-chain analysis, to help you better understand the full story of an attack campaign, where it started and what kind of impact it had on your resources. [Defender for Cloud's supported kill chain intents are based on version 9 of the MITRE ATT&CK matrix](alerts-reference.md#intentions).
+You can extend the Defender for Cloud protection with:
- **Advanced threat protection features** for virtual machines, SQL databases, containers, web applications, your network, and more - Protections include securing the management ports of your VMs with [just-in-time access](just-in-time-access-overview.md), and [adaptive application controls](adaptive-application-controls.md) to create allowlists for what apps should and shouldn't run on your machines.
-The **Defender plans** page of Microsoft Defender for Cloud offers the following plans for comprehensive defenses for the compute, data, and service layers of your environment:
+The **Defender plans** of Microsoft Defender for Cloud offer comprehensive defenses for the compute, data, and service layers of your environment:
- [Microsoft Defender for Servers](defender-for-servers-introduction.md) - [Microsoft Defender for Storage](defender-for-storage-introduction.md)
Use the advanced protection tiles in the [workload protections dashboard](worklo
> [!TIP] > Microsoft Defender for IoT is a separate product. You'll find all the details in [Introducing Microsoft Defender for IoT](../defender-for-iot/overview.md).
+- **Security alerts** - When Defender for Cloud detects a threat in any area of your environment, it generates a security alert. These alerts describe details of the affected resources, suggested remediation steps, and in some cases an option to trigger a logic app in response. Whether an alert is generated by Defender for Cloud, or received by Defender for Cloud from an integrated security product, you can export it. To export your alerts to Microsoft Sentinel, any third-party SIEM, or any other external tool, follow the instructions in [Stream alerts to a SIEM, SOAR, or IT Service Management solution](export-to-siem.md). Defender for Cloud's threat protection includes fusion kill-chain analysis, which automatically correlates alerts in your environment based on cyber kill-chain analysis, to help you better understand the full story of an attack campaign, where it started and what kind of impact it had on your resources. [Defender for Cloud's supported kill chain intents are based on version 9 of the MITRE ATT&CK matrix](alerts-reference.md#intentions).
+ ## Learn More If you would like to learn more about Defender for Cloud from a cybersecurity expert, check out [Lessons Learned from the Field](episode-six.md).
defender-for-cloud Defender For Containers Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-architecture.md
+
+ Title: Container security architecture in Microsoft Defender for Cloud
+description: Learn about the architecture of Microsoft Defender for Containers for each container platform
+++ Last updated : 05/31/2022+
+# Defender for Containers architecture
+
+Defender for Containers is designed differently for each container environment, whether your clusters are running in:
+
+- **Azure Kubernetes Service (AKS)** - Microsoft's managed service for developing, deploying, and managing containerized applications.
+
+- **Amazon Elastic Kubernetes Service (EKS) in a connected Amazon Web Services (AWS) account** - Amazon's managed service for running Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes.
+
+- **Google Kubernetes Engine (GKE) in a connected Google Cloud Platform (GCP) project** - Google's managed environment for deploying, managing, and scaling applications using GCP infrastructure.
+
+- **An unmanaged Kubernetes distribution** (using Azure Arc-enabled Kubernetes) - Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters hosted on-premises or on IaaS.
+
+> [!NOTE]
+> Defender for Containers support for Arc-enabled Kubernetes clusters, AWS EKS, and GCP GKE is a preview feature.
+
+To protect your Kubernetes containers, Defender for Containers receives and analyzes:
+
+- Audit logs and security events from the API server
+- Cluster configuration information from the control plane
+- Workload configuration from Azure Policy
+- Security signals and events from the node level
+
+## Architecture for each container environment
+
+## [**Azure (AKS)**](#tab/defender-for-container-arch-aks)
+
+### Architecture diagram of Defender for Cloud and AKS clusters<a name="jit-asc"></a>
+
+When Defender for Cloud protects a cluster hosted in Azure Kubernetes Service, the collection of audit log data is agentless and frictionless.
+
+The **Defender profile (preview)** deployed to each node provides the runtime protections and collects signals from nodes using [eBPF technology](https://ebpf.io/).
+
+The **Azure Policy add-on for Kubernetes** collects cluster and workload configuration for admission control policies as explained in [Protect your Kubernetes workloads](kubernetes-workload-protections.md).
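
As a hedged sketch (one of several deployment options, with placeholder resource names), the Azure Policy add-on can be enabled on an existing AKS cluster from the Azure CLI:

```azurecli
# Enable the Azure Policy add-on on an existing AKS cluster
az aks enable-addons --addons azure-policy --name myAKSCluster --resource-group myResourceGroup
```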
+
+> [!NOTE]
+> Defender for Containers **Defender profile** is a preview feature.
++
+### Defender profile component details
+
+| Pod Name | Namespace | Kind | Short Description | Capabilities | Resource limits | Egress Required |
+|--|--|--|--|--|--|--|
+| azuredefender-collector-ds-* | kube-system | [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) | A set of containers that focus on collecting inventory and security events from the Kubernetes environment. | SYS_ADMIN, <br>SYS_RESOURCE, <br>SYS_PTRACE | memory: 64Mi<br> <br> cpu: 60m | No |
+| azuredefender-collector-misc-* | kube-system | [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) | A set of containers that focus on collecting inventory and security events from the Kubernetes environment that aren't bounded to a specific node. | N/A | memory: 64Mi <br> <br>cpu: 60m | No |
+| azuredefender-publisher-ds-* | kube-system | [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) | Publishes the collected data to the Microsoft Defender for Containers backend service, where it will be processed and analyzed. | N/A | memory: 200Mi  <br> <br> cpu: 60m | HTTPS 443 <br> <br> Learn more about the [outbound access prerequisites](../aks/limit-egress-traffic.md#microsoft-defender-for-containers) |
+
+\* resource limits aren't configurable
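
To confirm these components are running on a cluster, a quick `kubectl` check should list the pods above in the `kube-system` namespace. This is a sketch that assumes the pod names keep the `azuredefender-` prefix shown in the table:

```bash
# List the Defender profile pods in the kube-system namespace
kubectl get pods --namespace kube-system | grep azuredefender
```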
+
+## [**On-premises / IaaS (Arc)**](#tab/defender-for-container-arch-arc)
+
+### Architecture diagram of Defender for Cloud and Arc-enabled Kubernetes clusters
+
+For all clusters hosted outside of Azure, [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) is required to connect the clusters to Azure and provide Azure services such as Defender for Containers.
+
+When a non-Azure Kubernetes cluster is connected to Azure with Arc, the [Arc extension](../azure-arc/kubernetes/extensions.md) collects Kubernetes audit log data from all control plane nodes in the cluster. The extension sends the log data to the Microsoft Defender for Cloud backend in the cloud for further analysis. The extension is registered with a Log Analytics workspace used as a data pipeline, but the audit log data isn't stored in the Log Analytics workspace.
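
For illustration, once a cluster is Arc-connected, the Defender extension can be deployed with the Azure CLI's `k8s-extension` commands. This is a minimal sketch with placeholder resource names, assuming the extension type `microsoft.azuredefender.kubernetes`:

```azurecli
# Deploy the Defender extension to an Arc-enabled Kubernetes cluster
az k8s-extension create \
    --name microsoft.azuredefender.kubernetes \
    --extension-type microsoft.azuredefender.kubernetes \
    --cluster-type connectedClusters \
    --cluster-name my-arc-cluster \
    --resource-group my-resource-group
```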
+
+Workload configuration information is collected by an Azure Policy add-on. As explained in [this Azure Policy for Kubernetes page](../governance/policy/concepts/policy-for-kubernetes.md), the add-on extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) admission controller webhook for [Open Policy Agent](https://www.openpolicyagent.org/). Kubernetes admission controllers are plugins that enforce how your clusters are used. The add-on registers as a web hook to Kubernetes admission control and makes it possible to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner.
+
+> [!NOTE]
+> Defender for Containers support for Arc-enabled Kubernetes clusters is a preview feature.
++
+## [**AWS (EKS)**](#tab/defender-for-container-arch-eks)
+
+### Architecture diagram of Defender for Cloud and EKS clusters
+
+These components are required to receive the full protection offered by Microsoft Defender for Containers:
+
+- **[Kubernetes audit logs](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/)** - [AWS account's CloudWatch](https://aws.amazon.com/cloudwatch/) enables and collects audit log data through an agentless collector, and sends the collected information to the Microsoft Defender for Cloud backend for further analysis.
+
+- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent-based solution that connects your EKS clusters to Azure (see the sketch after this list). Azure can then provide services such as Defender and Policy as [Arc extensions](../azure-arc/kubernetes/extensions.md).
+
+- **The Defender extension** - The [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) that collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The extension is registered with a Log Analytics workspace used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace.
+
+- **The Azure Policy extension** - The workload's configuration information is collected by the Azure Policy add-on. The Azure Policy add-on extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) admission controller webhook for [Open Policy Agent](https://www.openpolicyagent.org/). The extension registers as a webhook to Kubernetes admission control and makes it possible to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. For more information, see [Understand Azure Policy for Kubernetes clusters](../governance/policy/concepts/policy-for-kubernetes.md).
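
As referenced above, the Arc connection itself can be illustrated with a short Azure CLI sketch; the names are placeholders and this assumes your kubeconfig currently points at the EKS cluster:

```azurecli
# Connect the current kubeconfig context (an EKS cluster) to Azure Arc
az connectedk8s connect --name my-eks-cluster --resource-group my-resource-group
```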
+
+> [!NOTE]
+> Defender for Containers support for AWS EKS clusters is a preview feature.
++
+## [**GCP (GKE)**](#tab/defender-for-container-gke)
+
+### Architecture diagram of Defender for Cloud and GKE clusters<a name="jit-asc"></a>
+
+These components are required to receive the full protection offered by Microsoft Defender for Containers:
+
+- **[Kubernetes audit logs](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/)** - [GCP Cloud Logging](https://cloud.google.com/logging/) enables and collects audit log data through an agentless collector, and sends the collected information to the Microsoft Defender for Cloud backend for further analysis.
+
+- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent-based solution that connects your GKE clusters to Azure. Azure can then provide services such as Defender and Policy as [Arc extensions](../azure-arc/kubernetes/extensions.md).
+
+- **The Defender extension** - The [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) that collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The extension is registered with a Log Analytics workspace used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace.
+
+- **The Azure Policy extension** - The workload's configuration information is collected by the Azure Policy add-on. The Azure Policy add-on extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) admission controller webhook for [Open Policy Agent](https://www.openpolicyagent.org/). The extension registers as a webhook to Kubernetes admission control and makes it possible to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. For more information, see [Understand Azure Policy for Kubernetes clusters](../governance/policy/concepts/policy-for-kubernetes.md).
+
+> [!NOTE]
+> Defender for Containers support for GCP GKE clusters is a preview feature.
++++
+## Next steps
+
+In this overview, you learned about the architecture of container security in Microsoft Defender for Cloud. To enable the plan, see:
+
+> [!div class="nextstepaction"]
+> [Enable Defender for Containers](defender-for-containers-enable.md)
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Title: Container security with Microsoft Defender for Cloud description: Learn about Microsoft Defender for Containers++ Last updated 05/25/2022 # Overview of Microsoft Defender for Containers
-Microsoft Defender for Containers is the cloud-native solution for securing your containers.
+Microsoft Defender for Containers is the cloud-native solution for securing your containers so you can improve, monitor, and maintain the security of your clusters, containers, and their applications.
-On this page, you'll learn how you can use Defender for Containers to improve, monitor, and maintain the security of your clusters, containers, and their applications.
+[How does Defender for Containers work in each Kubernetes platform?](defender-for-containers-architecture.md)
## Microsoft Defender for Containers plan availability
On this page, you'll learn how you can use Defender for Containers to improve, m
| Feature availability | Refer to the [availability](supported-machines-endpoint-solutions-clouds-containers.md) section for additional information on feature release state and availability.| | Pricing: | **Microsoft Defender for Containers** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) | | Required roles and permissions: | ΓÇó To auto provision the required components, see the [permissions for each of the components](enable-data-collection.md?tabs=autoprovision-containers)<br> ΓÇó **Security admin** can dismiss alerts<br> ΓÇó **Security reader** can view vulnerability assessment findings<br> See also [Azure Container Registry roles and permissions](../container-registry/container-registry-roles.md) |
-| Clouds: | **Azure**:<br>:::image type="icon" source="./medi#defender-for-containers-feature-availability). |
-
+| Clouds: | **Azure**:<br>:::image type="icon" source="./medi#defender-for-containers-feature-availability). |
## What are the benefits of Microsoft Defender for Containers? Defender for Containers helps with the core aspects of container security: -- **Environment hardening** - Defender for Containers protects your Kubernetes clusters whether they're running on Azure Kubernetes Service, Kubernetes on-premises / IaaS, or Amazon EKS. By continuously assessing clusters, Defender for Containers provides visibility into misconfigurations and guidelines to help mitigate identified threats. Learn more in [Hardening](#hardening).
+- [**Environment hardening**](#hardening) - Defender for Containers protects your Kubernetes clusters whether they're running on Azure Kubernetes Service, Kubernetes on-premises/IaaS, or Amazon EKS. By continuously assessing clusters, Defender for Containers provides visibility into misconfigurations and guidelines to help mitigate identified threats.
-- **Vulnerability assessment** - Vulnerability assessment and management tools for images **stored** in ACR registries and **running** in Azure Kubernetes Service. Learn more in [Vulnerability assessment](#vulnerability-assessment).
+- [**Vulnerability assessment**](#vulnerability-assessment) - Vulnerability assessment and management tools for images **stored** in ACR registries and **running** in Azure Kubernetes Service.
-- **Run-time threat protection for nodes and clusters** - Threat protection for clusters and Linux nodes generates security alerts for suspicious activities. Learn more in [Run-time protection for Kubernetes nodes, clusters, and hosts](#run-time-protection-for-kubernetes-nodes-and-clusters).
+- [**Run-time threat protection for nodes and clusters**](#run-time-protection-for-kubernetes-nodes-and-clusters) - Threat protection for clusters and Linux nodes generates security alerts for suspicious activities.
## Hardening
Defender for Containers helps with the core aspects of container security:
Defender for Cloud continuously assesses the configurations of your clusters and compares them with the initiatives applied to your subscriptions. When it finds misconfigurations, Defender for Cloud generates security recommendations. Use Defender for Cloud's **recommendations page** to view recommendations and remediate issues. For details of the relevant Defender for Cloud recommendations that might appear for this feature, see the [compute section](recommendations-reference.md#recs-container) of the recommendations reference table.
-For Kubernetes clusters on EKS, you'll need to connect your AWS account to Microsoft Defender for Cloud via the environment settings page as described in [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md). Then ensure you've enabled the CSPM plan.
+For Kubernetes clusters on EKS, you'll need to [connect your AWS account to Microsoft Defender for Cloud](quickstart-onboard-aws.md). Then ensure you've enabled the CSPM plan.
When reviewing the outstanding recommendations for your container-related resources, whether in asset inventory or the recommendations page, you can use the resource filter:
When reviewing the outstanding recommendations for your container-related resour
### Kubernetes data plane hardening
-For a bundle of recommendations to protect the workloads of your Kubernetes containers, install the **Azure Policy for Kubernetes**. You can also auto deploy this component as explained in [enable auto provisioning of agents and extensions](enable-data-collection.md#auto-provision-mma).
+To protect the workloads of your Kubernetes containers with tailored recommendations, install the **Azure Policy for Kubernetes**. You can also auto deploy this component as explained in [enable auto provisioning of agents and extensions](enable-data-collection.md#auto-provision-mma).
With the add-on on your AKS cluster, every request to the Kubernetes API server will be monitored against the predefined set of best practices before being persisted to the cluster. You can then configure to **enforce** the best practices and mandate them for future workloads.
Learn more in [Vulnerability assessment](defender-for-containers-usage.md).
:::image type="content" source="./media/defender-for-containers/recommendation-acr-images-with-vulnerabilities.png" alt-text="Sample Microsoft Defender for Cloud recommendation about vulnerabilities discovered in Azure Container Registry (ACR) hosted images." lightbox="./media/defender-for-containers/recommendation-acr-images-with-vulnerabilities.png":::
-### View vulnerabilities for running images
+### View vulnerabilities for running images
-The recommendation **Running container images should have vulnerability findings resolved** shows vulnerabilities for running images by using the scan results from ACR registries and information on running images from the Defender security profile/extension. Images that are deployed from a non ACR registry, will appear under the **Not applicable** tab.
+The recommendation **Running container images should have vulnerability findings resolved** shows vulnerabilities for running images by using the scan results from ACR registries and information on running images from the Defender security profile/extension. Images that are deployed from a non-ACR registry will appear under the **Not applicable** tab.
## Run-time protection for Kubernetes nodes and clusters
-Defender for Cloud provides real-time threat protection for your containerized environments and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers.
+Defender for Containers provides real-time threat protection for your containerized environments and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers. Threat protection at the cluster level is provided by the Defender profile and analysis of the Kubernetes audit logs. Examples of events at this level include exposed Kubernetes dashboards, creation of high-privileged roles, and the creation of sensitive mounts.
-Threat protection at the cluster level is provided by the Defender profile and analysis of the Kubernetes audit logs. Examples of events at this level include exposed Kubernetes dashboards, creation of high-privileged roles, and the creation of sensitive mounts.
+In addition, our threat detection goes beyond the Kubernetes management layer. Defender for Containers includes **host-level threat detection** with over 60 Kubernetes-aware analytics, AI, and anomaly detections based on your runtime workload. Our global team of security researchers constantly monitors the threat landscape. They add container-specific alerts and vulnerabilities as they're discovered.
-In addition, our threat detection goes beyond the Kubernetes management layer. Defender for Containers includes **host-level threat detection** with over 60 Kubernetes-aware analytics, AI, and anomaly detections based on your runtime workload. Our global team of security researchers constantly monitor the threat landscape. They add container-specific alerts and vulnerabilities as they're discovered. Together, this solution monitors the growing attack surface of multicloud Kubernetes deployments and tracks the [MITRE ATT&CK® matrix for Containers](https://www.microsoft.com/security/blog/2021/04/29/center-for-threat-informed-defense-teams-up-with-microsoft-partners-to-build-the-attck-for-containers-matrix/), a framework that was developed by the [Center for Threat-Informed Defense](https://mitre-engenuity.org/ctid/) in close partnership with Microsoft and others.
+This solution monitors the growing attack surface of multicloud Kubernetes deployments and tracks the [MITRE ATT&CK® matrix for Containers](https://www.microsoft.com/security/blog/2021/04/29/center-for-threat-informed-defense-teams-up-with-microsoft-partners-to-build-the-attck-for-containers-matrix/), a framework that was developed by the [Center for Threat-Informed Defense](https://mitre-engenuity.org/ctid/) in close partnership with Microsoft and others.
The full list of available alerts can be found in the [Reference table of alerts](alerts-reference.md#alerts-k8scluster). :::image type="content" source="media/defender-for-containers/sample-containers-plan-alerts.png" alt-text="Screenshot of Defender for Cloud's alerts page showing alerts for multicloud Kubernetes resources." lightbox="./media/defender-for-containers/sample-containers-plan-alerts.png":::
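
To see these detections in action, Defender for Cloud's alert validation guidance describes a benign simulation: running a `kubectl` command against a specially named namespace triggers a test alert. A sketch, assuming kubectl access to a protected cluster:

```bash
# Querying this reserved namespace name generates a harmless test alert in Defender for Cloud
kubectl get pods --namespace=asc-alerttest-662jfi039n
```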
-## Architecture overview
-
-The architecture of the various elements involved in the full range of protections provided by Defender for Containers varies depending on where your Kubernetes clusters are hosted.
-
-Defender for Containers protects your clusters whether they're running in:
--- **Azure Kubernetes Service (AKS) (Preview)** - Microsoft's managed service for developing, deploying, and managing containerized applications.--- **Amazon Elastic Kubernetes Service (EKS) in a connected Amazon Web Services (AWS) account (Preview)** - Amazon's managed service for running Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes.--- **Google Kubernetes Engine (GKE) in a connected Google Cloud Platform (GCP) project (Preview)** - GoogleΓÇÖs managed environment for deploying, managing, and scaling applications using GCP infrastructure.--- **An unmanaged Kubernetes distribution** (using Azure Arc-enabled Kubernetes) - Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters hosted on-premises or on IaaS.-
-For high-level diagrams of each scenario, see the relevant tabs below.
-
-In the diagrams you'll see that the items received and analyzed by Defender for Cloud include:
--- Audit logs and security events from the API server-- Cluster configuration information from the control plane-- Workload configuration from Azure Policy -- Security signals and events from the node level-
-### [**Azure (AKS)**](#tab/defender-for-container-arch-aks)
-
-### Architecture diagram of Defender for Cloud and AKS clusters<a name="jit-asc"></a>
-
-When Defender for Cloud protects a cluster hosted in Azure Kubernetes Service, the collection of audit log data is agentless and frictionless.
-
-The **Defender profile (preview)** deployed to each node provides the runtime protections and collects signals from nodes using [eBPF technology](https://ebpf.io/).
-
-The **Azure Policy add-on for Kubernetes** collects cluster and workload configuration for admission control policies as explained in [Protect your Kubernetes workloads](kubernetes-workload-protections.md).
-
-> [!NOTE]
-> Defender for Containers' **Defender profile** is a preview feature.
--
-#### Defender profile component details
-
-| Pod Name | Namespace | Kind | Short Description | Capabilities | Resource limits | Egress Required |
-|--|--|--|--|--|--|--|
-| azuredefender-collector-ds-* | kube-system | [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) | A set of containers that focus on collecting inventory and security events from the Kubernetes environment. | SYS_ADMIN, <br>SYS_RESOURCE, <br>SYS_PTRACE | memory: 64Mi<br> <br> cpu: 60m | No |
-| azuredefender-collector-misc-* | kube-system | [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) | A set of containers that focus on collecting inventory and security events from the Kubernetes environment that aren't bounded to a specific node. | N/A | memory: 64Mi <br> <br>cpu: 60m | No |
-| azuredefender-publisher-ds-* | kube-system | [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) | Publish the collected data to Microsoft Defender for Containers' backend service where the data will be processed for and analyzed. | N/A | memory: 200Mi  <br> <br> cpu: 60m | Https 443 <br> <br> Learn more about the [outbound access prerequisites](../aks/limit-egress-traffic.md#microsoft-defender-for-containers) |
-
-\* resource limits aren't configurable
-
-### [**On-premises / IaaS (Arc)**](#tab/defender-for-container-arch-arc)
-
-### Architecture diagram of Defender for Cloud and Arc-enabled Kubernetes clusters
-
-For all clusters hosted outside of Azure, [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) is required to connect the clusters to Azure and provide Azure services such as Defender for Containers.
-
-With the cluster connected to Azure, an [Arc extension](../azure-arc/kubernetes/extensions.md) collects Kubernetes audit logs data from all control plane nodes in the cluster and sends them to the Microsoft Defender for Cloud backend in the cloud for further analysis. The extension is registered with a Log Analytics workspace used as a data pipeline, but the audit log data isn't stored in the Log Analytics workspace.
-
-Workload configuration information is collected by an Azure Policy add-on. As explained in [this Azure Policy for Kubernetes page](../governance/policy/concepts/policy-for-kubernetes.md), the add-on extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) admission controller webhook for [Open Policy Agent](https://www.openpolicyagent.org/). Kubernetes admission controllers are plugins that enforce how your clusters are used. The add-on registers as a web hook to Kubernetes admission control and makes it possible to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner.
-
-> [!NOTE]
-> Defender for Containers' support for Arc-enabled Kubernetes clusters is a preview feature.
----
-### [**AWS (EKS)**](#tab/defender-for-container-arch-eks)
-
-### Architecture diagram of Defender for Cloud and EKS clusters
-
-The following describes the components necessary in order to receive the full protection offered by Microsoft Defender for Cloud for Containers.
--- **[Kubernetes audit logs](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/)** – [AWS account’s CloudWatch](https://aws.amazon.com/cloudwatch/) enables, and collects audit log data through an agentless collector, and sends the collected information to the Microsoft Defender for Cloud backend for further analysis.--- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent based solution that connects your EKS clusters to Azure. Azure then is capable of providing services such as Defender, and Policy as [Arc extensions](../azure-arc/kubernetes/extensions.md).--- **The Defender extension** – The [DeamonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) that collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The extension is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace.--- **The Azure Policy extension** - The workload's configuration information is collected by the Azure Policy add-on. The Azure Policy add-on extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) admission controller webhook for [Open Policy Agent](https://www.openpolicyagent.org/). The extension registers as a web hook to Kubernetes admission control and makes it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. For more information, see [Understand Azure Policy for Kubernetes clusters](../governance/policy/concepts/policy-for-kubernetes.md).-
-> [!NOTE]
-> Defender for Containers' support for AWS EKS clusters is a preview feature.
--
-### [**GCP (GKE)**](#tab/defender-for-container-gke)
-
-### Architecture diagram of Defender for Cloud and GKE clusters<a name="jit-asc"></a>
-
-The following describes the components necessary in order to receive the full protection offered by Microsoft Defender for Cloud for Containers.
--- **[Kubernetes audit logs](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/)** – [GCP Cloud Logging](https://cloud.google.com/logging/) enables, and collects audit log data through an agentless collector, and sends the collected information to the Microsoft Defender for Cloud backend for further analysis.--- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent based solution that connects your EKS clusters to Azure. Azure then is capable of providing services such as Defender, and Policy as [Arc extensions](../azure-arc/kubernetes/extensions.md).--- **The Defender extension** – The [DeamonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) that collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The extension is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace.--- **The Azure Policy extension** - The workload's configuration information is collected by the Azure Policy add-on. The Azure Policy add-on extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) admission controller webhook for [Open Policy Agent](https://www.openpolicyagent.org/). The extension registers as a web hook to Kubernetes admission control and makes it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. For more information, see [Understand Azure Policy for Kubernetes clusters](../governance/policy/concepts/policy-for-kubernetes.md).-
-> [!NOTE]
-> Defender for Containers' support for GCP GKE clusters is a preview feature.
---- ## FAQ - Defender for Containers - [What are the options to enable the new plan at scale?](#what-are-the-options-to-enable-the-new-plan-at-scale)-- [Does Microsoft Defender for Containers support AKS clusters with virtual machines scale set?](#does-microsoft-defender-for-containers-support-aks-clusters-with-virtual-machines-scale-set)
+- [Does Microsoft Defender for Containers support AKS clusters with virtual machines scale sets?](#does-microsoft-defender-for-containers-support-aks-clusters-with-virtual-machines-scale-sets)
- [Does Microsoft Defender for Containers support AKS without scale set (default)?](#does-microsoft-defender-for-containers-support-aks-without-scale-set-default) - [Do I need to install the Log Analytics VM extension on my AKS nodes for security protection?](#do-i-need-to-install-the-log-analytics-vm-extension-on-my-aks-nodes-for-security-protection)
-### What are the options to enable the new plan at scale?
-WeΓÇÖve rolled out a new policy in Azure Policy, **Configure Microsoft Defender for Containers to be enabled**, to make it easier to enable the new plan at scale.
+### What are the options to enable the new plan at scale?
+
+We've rolled out a new policy in Azure Policy, **Configure Microsoft Defender for Containers to be enabled**, to make it easier to enable the new plan at scale.
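
As a hedged sketch, that policy could be assigned from the Azure CLI. The lookup by display name assumes it resolves to a unique built-in definition in your tenant, and the assignment needs a managed identity because the policy deploys configuration:

```azurecli
# Look up the built-in policy definition by its display name (assumes a unique match)
defName=$(az policy definition list \
    --query "[?displayName=='Configure Microsoft Defender for Containers to be enabled'].name | [0]" \
    --output tsv)

# Assign it at the default (subscription) scope with a system-assigned identity
az policy assignment create \
    --name enable-defender-for-containers \
    --policy "$defName" \
    --assign-identity \
    --location eastus
```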
-### Does Microsoft Defender for Containers support AKS clusters with virtual machines scale set?
-Yes
+### Does Microsoft Defender for Containers support AKS clusters with virtual machines scale sets?
+
+Yes.
### Does Microsoft Defender for Containers support AKS without scale set (default)?
-No. Only Azure Kubernetes Service (AKS) clusters that use virtual machine scale sets for the nodes is supported.
+
+No. Only Azure Kubernetes Service (AKS) clusters that use virtual machine scale sets for the nodes are supported.
### Do I need to install the Log Analytics VM extension on my AKS nodes for security protection?
-No, AKS is a managed service, and manipulation of the IaaS resources isn't supported. The Log Analytics VM extension is not needed and may result in additional charges.
+
+No, AKS is a managed service, and manipulation of the IaaS resources isn't supported. The Log Analytics VM extension isn't needed and may result in additional charges.
## Learn More
defender-for-cloud Enable Enhanced Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-enhanced-security.md
Title: Enable Microsoft Defender for Cloud's integrated workload protections description: Learn how to enable enhanced security features to extend the protections of Microsoft Defender for Cloud to your hybrid and multicloud resources -- Previously updated : 11/09/2021 Last updated : 05/31/2022
To enable all Defender for Cloud features including threat protection capabiliti
- You can enable **Microsoft Defender for SQL** at either the subscription level or resource level - You can enable **Microsoft Defender for open-source relational databases** at the resource level only
-### To enable enhanced security features on your subscriptions and workspaces:
+### Enable enhanced security features on your subscriptions and workspaces
- To enable enhanced security features on one subscription: 1. From Defender for Cloud's main menu, select **Environment settings**.
+
1. Select the subscription or workspace that you want to protect.
- 1. Select **Enable all Microsoft Defender plans** to upgrade.
+
+ 1. Select **Enable all** to upgrade.
+
1. Select **Save**.
- > [!TIP]
- > You'll notice that each Microsoft Defender plan is priced separately and can be individually set to on or off. For example, you might want to turn off Defender for App Service on subscriptions that don't have an associated Azure App Service plan.
-
- :::image type="content" source="./media/enhanced-security-features-overview/pricing-tier-page.png" alt-text="Defender for Cloud's pricing page in the portal":::
-
+ :::image type="content" source="./media/enhanced-security-features-overview/pricing-tier-page.png" alt-text="Defender for Cloud's pricing page in the portal" lightbox="media/enhanced-security-features-overview/pricing-tier-page.png":::
+
- To enable enhanced security on multiple subscriptions or workspaces: 1. From Defender for Cloud's menu, select **Getting started**. The **Upgrade** tab lists subscriptions and workspaces eligible for onboarding.
- :::image type="content" source="./media/enable-enhanced-security/get-started-upgrade-tab.png" alt-text="Upgrade tab of the getting started page.":::
+ :::image type="content" source="./media/enable-enhanced-security/get-started-upgrade-tab.png" alt-text="Upgrade tab of the getting started page." lightbox="media/enable-enhanced-security/get-started-upgrade-tab.png":::
1. From the **Select subscriptions and workspaces to protect with Microsoft Defender for Cloud** list, select the subscriptions and workspaces to upgrade and select **Upgrade** to enable all Microsoft Defender for Cloud security features. - If you select subscriptions and workspaces that aren't eligible for trial, the next step will upgrade them and charges will begin.
+
- If you select a workspace that's eligible for a free trial, the next step will begin a trial.
- :::image type="content" source="./media/enable-enhanced-security/upgrade-selected-workspaces-and-subscriptions.png" alt-text="Upgrade all selected workspaces and subscriptions from the getting started page.":::
-
+ :::image type="content" source="./media/enable-enhanced-security/upgrade-selected-workspaces-and-subscriptions.png" alt-text="Upgrade all selected workspaces and subscriptions from the getting started page." lightbox="media/enable-enhanced-security/upgrade-selected-workspaces-and-subscriptions.png":::
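
If you prefer scripting over the portal, the same upgrade can be approximated with the Azure CLI by setting individual plans to the Standard tier. This is a sketch; the plan names listed are examples rather than the full set:

```azurecli
# Set a few Microsoft Defender plans to the Standard (enabled) tier
for plan in VirtualMachines StorageAccounts SqlServers AppServices; do
    az security pricing create --name "$plan" --tier Standard
done
```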
## Disable enhanced security features
If you need to disable enhanced security features for a subscription, the proced
1. From Defender for Cloud's menu, open **Environment settings**. 1. Select the relevant subscription.
-1. Select **Defender plans** and select **Enhanced security off**.
-
- :::image type="content" source="./media/enable-enhanced-security/disable-plans.png" alt-text="Enable or disable Defender for Cloud's enhanced security features.":::
+1. Find the plan you wish to turn off and select **Off**.
-1. Select **Save**.
+ :::image type="content" source="./media/enable-enhanced-security/disable-plans.png" alt-text="Enable or disable Defender for Cloud's enhanced security features." lightbox="media/enable-enhanced-security/disable-plans.png":::
-> [!NOTE]
-> After you disable enhanced security features - whether you disable a single plan or all plans at once - data collection may continue for a short period of time.
+ > [!NOTE]
+ > After you disable enhanced security features - whether you disable a single plan or all plans at once - data collection may continue for a short period of time.
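
The CLI equivalent of switching a plan off is setting its tier back to Free. For example (a sketch, using the Defender for Servers plan name):

```azurecli
# Setting the tier to Free disables the plan for the subscription
az security pricing create --name VirtualMachines --tier Free
```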
## Next steps
defender-for-cloud Enhanced Security Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enhanced-security-features-overview.md
Title: Understand the enhanced security features of Microsoft Defender for Cloud description: Learn about the benefits of enabling enhanced security in Microsoft Defender for Cloud Previously updated : 04/11/2022- Last updated : 05/31/2022 - # Microsoft Defender for Cloud's enhanced security features
You can use any of the following ways to enable enhanced security for your subsc
### Can I enable Microsoft Defender for Servers on a subset of servers in my subscription?+ No. When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on a subscription, all the machines in the subscription will be protected by Defender for Servers. An alternative is to enable Microsoft Defender for Servers at the Log Analytics workspace level. If you do this, only servers reporting to that workspace will be protected and billed. However, several capabilities will be unavailable. These include Microsoft Defender for Endpoint, VA solution (TVM/Qualys), just-in-time VM access, and more. ### If I already have a license for Microsoft Defender for Endpoint can I get a discount for Defender for Servers?+ If you've already got a license for **Microsoft Defender for Endpoint for Servers Plan 2**, you won't have to pay for that part of your Microsoft Defender for Servers license. Learn more about [this license](/microsoft-365/security/defender-endpoint/minimum-requirements#licensing-requirements). To request your discount, [contact Defender for Cloud's support team](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview). You'll need to provide the relevant workspace ID, region, and number of Microsoft Defender for Endpoint for servers licenses applied for machines in the given workspace. The discount will be effective starting from the approval date, and won't take place retroactively.
-### My subscription has Microsoft Defender for Servers enabled, do I pay for not-running servers?
+### My subscription has Microsoft Defender for Servers enabled, do I pay for not-running servers?
+ No. When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on a subscription, you won't be charged for any machines that are in the deallocated power state while they're in that state. Machines are billed according to their power state as shown in the following table: | State | Description | Instance usage billed |
No. When you enable [Microsoft Defender for Servers](defender-for-servers-introd
:::image type="content" source="media/enhanced-security-features-overview/deallocated-virtual-machines.png" alt-text="Azure Virtual Machines showing a deallocated machine.":::
+### If I enable Defender for Cloud's Servers plan on the subscription level, do I need to enable it on the workspace level?
+
+When you enable the Servers plan on the subscription level, Defender for Cloud automatically enables the Servers plan on your default workspace(s) when auto-provisioning is enabled. You can set this up on the Auto provisioning page by selecting the **Connect Azure VMs to the default workspace(s) created by Defender for Cloud** option and selecting **Apply**.
++
+However, if you're using a custom workspace in place of the default workspace, you'll need to enable the Servers plan on all of your custom workspaces that don't have it enabled.
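
As a side note, pointing Defender for Cloud's data collection at a custom workspace can also be done from the CLI; a minimal sketch, with a placeholder workspace resource ID:

```azurecli
# Point Defender for Cloud's agents at a custom Log Analytics workspace
az security workspace-setting create --name default \
    --target-workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
```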
+
+If you're using a custom workspace and enable the plan on the subscription level only, the `Microsoft Defender for servers should be enabled on workspaces` recommendation will appear on the Recommendations page. This recommendation gives you the option to enable the Servers plan on the workspace level with the **Fix** button. Until the workspace has the Servers plan enabled, any connected VM won't benefit from the full security coverage offered by Defender for Cloud (Microsoft Defender for Endpoint, VA solution (TVM/Qualys), just-in-time VM access, and more), but will still incur the cost.
+
+Enabling the Servers plan on both the subscription and its connected workspaces won't incur a double charge. The system identifies each unique VM.
+
+If you enable the Servers plan on cross-subscription workspaces, all connected VMs are billed, even those from subscriptions where the plan isn't enabled.
+ ### Will I be charged for machines without the Log Analytics agent installed?+ Yes. When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on a subscription, the machines in that subscription get a range of protections even if you haven't installed the Log Analytics agent. This is applicable for Azure virtual machines, Azure virtual machine scale sets instances, and Azure Arc-enabled servers.
-### If a Log Analytics agent reports to multiple workspaces, will I be charged twice?
-Yes. If you've configured your Log Analytics agent to send data to two or more different Log Analytics workspaces (multi-homing), you'll be charged for every workspace that has a 'Security' or 'AntiMalware' solution installed.
+### If a Log Analytics agent reports to multiple workspaces, will I be charged twice?
+
+No, you won't be charged twice.
### If a Log Analytics agent reports to multiple workspaces, is the 500 MB free data ingestion available on all of them?+ Yes. If you've configured your Log Analytics agent to send data to two or more different Log Analytics workspaces (multi-homing), you'll get 500 MB free data ingestion. It's calculated per node, per reported workspace, per day, and available for every workspace that has a 'Security' or 'AntiMalware' solution installed. You'll be charged for any data ingested over the 500 MB limit. ### Is the 500 MB free data ingestion calculated for an entire workspace or strictly per machine?
-You'll get 500 MB free data ingestion per day, for every machine connected to the workspace. Specifically for security data types directly collected by Defender for Cloud.
-This data is a daily rate averaged across all nodes. So even if some machines send 100-MB and others send 800-MB, if the total doesn't exceed the **[number of machines] x 500 MB** free limit, you won't be charged extra.
+You'll get 500 MB free data ingestion per day, for every VM connected to the workspace, specifically for the [security data types](#what-data-types-are-included-in-the-500-mb-data-daily-allowance) that are directly collected by Defender for Cloud.
+
+This data is a daily rate averaged across all nodes. Your total daily free limit is equal to **[number of machines] x 500 MB**. So even if some machines send 100 MB and others send 800 MB, if the total doesn't exceed your total daily free limit, you won't be charged extra.
### What data types are included in the 500 MB data daily allowance? Defender for Cloud's billing is closely tied to the billing for Log Analytics. [Microsoft Defender for Servers](defender-for-servers-introduction.md) provides a 500 MB/node/day allocation for machines against the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security):-- SecurityAlert-- SecurityBaseline-- SecurityBaselineSummary-- SecurityDetection-- SecurityEvent-- WindowsFirewall-- MaliciousIPCommunication-- SysmonEvent-- ProtectionStatus-- Update and UpdateSummary data types when the Update Management solution is not running on the workspace or solution targeting is enabled+
+- [SecurityAlert](/azure/azure-monitor/reference/tables/securityalert)
+- [SecurityBaseline](/azure/azure-monitor/reference/tables/securitybaseline)
+- [SecurityBaselineSummary](/azure/azure-monitor/reference/tables/securitybaselinesummary)
+- [SecurityDetection](/azure/azure-monitor/reference/tables/securitydetection)
+- [SecurityEvent](/azure/azure-monitor/reference/tables/securityevent)
+- [WindowsFirewall](/azure/azure-monitor/reference/tables/windowsfirewall)
+- [SysmonEvent](/azure/azure-monitor/reference/tables/sysmonevent)
+- [ProtectionStatus](/azure/azure-monitor/reference/tables/protectionstatus)
+- [Update](/azure/azure-monitor/reference/tables/update) and [UpdateSummary](/azure/azure-monitor/reference/tables/updatesummary) when the Update Management solution isn't running in the workspace or solution targeting is enabled.
If the workspace is in the legacy Per Node pricing tier, the Defender for Cloud and Log Analytics allocations are combined and applied jointly to all billable ingested data.
+## How can I monitor my daily usage?
+
+You can view your data usage in two different ways: in the Azure portal, or by running a script.
+
+**To view your usage in the Azure portal**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Log Analytics workspaces**.
+
+1. Select your workspace.
+
+1. Select **Usage and estimated costs**.
+
+ :::image type="content" source="media/enhanced-security-features-overview/data-usage.png" alt-text="Screenshot of your data usage of your log analytics workspace. " lightbox="media/enhanced-security-features-overview/data-usage.png":::
+
+You can also view estimated costs under different pricing tiers by selecting :::image type="icon" source="media/enhanced-security-features-overview/drop-down-icon.png" border="false"::: for each pricing tier.
++
+**To view your usage by using a script**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Log Analytics workspaces** > **Logs**.
+
+1. Select your time range. Learn about [time ranges](../azure-monitor/logs/log-analytics-tutorial.md).
+
+1. Copy and paste the following query into the **Type your query here** section.
+
+ ```kusto
+ let Unit= 'GB';
+ Usage
+ | where IsBillable == 'TRUE'
+ | where DataType in ('SecurityAlert', 'SecurityBaseline', 'SecurityBaselineSummary', 'SecurityDetection', 'SecurityEvent', 'WindowsFirewall', 'MaliciousIPCommunication', 'SysmonEvent', 'ProtectionStatus', 'Update', 'UpdateSummary')
+ | project TimeGenerated, DataType, Solution, Quantity, QuantityUnit
+ | summarize DataConsumedPerDataType = sum(Quantity)/1024 by DataType, DataUnit = Unit
+ | sort by DataConsumedPerDataType desc
+ ```
+
+1. Select **Run**.
+
+ :::image type="content" source="media/enhanced-security-features-overview/select-run.png" alt-text="Screenshot showing where to enter your query and where the select run button is located." lightbox="media/enhanced-security-features-overview/select-run.png":::
+
+You can learn how to [Analyze usage in Log Analytics workspace](../azure-monitor/logs/analyze-usage.md).
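
The same query can also be run from a terminal; a sketch assuming the Azure CLI's Log Analytics query support and a placeholder workspace GUID:

```azurecli
# Run a usage summary against a workspace by its workspace (customer) ID, over the last day
az monitor log-analytics query \
    --workspace "<workspace-guid>" \
    --analytics-query "Usage | where IsBillable == 'TRUE' | summarize DataGB = sum(Quantity)/1024 by DataType | sort by DataGB desc" \
    --timespan P1D
```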
+
+Based on your usage, you won't be billed until you've used up your daily allowance. If you receive a bill, it's only for the data used after the 500 MB has been consumed, or for other services that don't fall under the coverage of Defender for Cloud.
+ ## Next steps This article explained Defender for Cloud's pricing options. For related material, see:
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 05/18/2022 Last updated : 05/30/2022 # What's new in Microsoft Defender for Cloud?
Learn more about [vulnerability management](deploy-vulnerability-assessment-tvm.
### JIT (Just-in-time) access for VMs is now available for AWS EC2 instances (Preview)
-When you [connect AWS accounts](quickstart-onboard-aws.md), JIT will automatically evaluate the network configuration of your instances, security groups and recommend which instances need protection for their exposed management ports. This is similar to how JIT works with Azure. When you onboard unprotected EC2 instances, JIT will block public access to the management ports and only open them with authorized requests for a limited time frame.
+When you [connect AWS accounts](quickstart-onboard-aws.md), JIT will automatically evaluate the network configuration of your instance's security groups and recommend which instances need protection for their exposed management ports. This is similar to how JIT works with Azure. When you onboard unprotected EC2 instances, JIT will block public access to the management ports and only open them with authorized requests for a limited time frame.
Learn how [JIT protects your AWS EC2 instances](just-in-time-access-overview.md#how-jit-operates-with-network-resources-in-azure-and-aws)
Learn how [JIT protects your AWS EC2 instances](just-in-time-access-overview.md#
The Defender profile (preview) is required for Defender for Containers to provide the runtime protections and collects signals from nodes. You can now use the Azure CLI to [add and remove the Defender profile](defender-for-containers-enable.md?tabs=k8s-deploy-cli%2Ck8s-deploy-asc%2Ck8s-verify-asc%2Ck8s-remove-arc%2Ck8s-remove-cli&pivots=defender-for-container-aks#use-azure-cli-to-deploy-the-defender-extension) for an AKS cluster. > [!NOTE]
-> This option is included in [Azure CLI 3.7 and above](/cli/azure/update-azure-cli.md).
+> This option is included in [Azure CLI 3.7 and above](https://docs.microsoft.com/cli/azure/update-azure-cli).
## April 2022
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 05/10/2022 Last updated : 05/31/2022 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change | |--|--|
-| [Changes to recommendations for managing endpoint protection solutions](#changes-to-recommendations-for-managing-endpoint-protection-solutions) | May 2022 |
-| [Key Vault recommendations changed to "audit"](#key-vault-recommendations-changed-to-audit) | May 2022 |
+| [Changes to recommendations for managing endpoint protection solutions](#changes-to-recommendations-for-managing-endpoint-protection-solutions) | June 2022 |
+| [Key Vault recommendations changed to "audit"](#key-vault-recommendations-changed-to-audit) | June 2022 |
| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | June 2022 | | [Deprecating three VM alerts](#deprecating-three-vm-alerts) | June 2022| | [Deprecating the "API App should only be accessible over HTTPS" policy](#deprecating-the-api-app-should-only-be-accessible-over-https-policy)|June 2022| ### Changes to recommendations for managing endpoint protection solutions
-**Estimated date for change:** May 2022
+**Estimated date for change:** June 2022
In August 2021, we added two new **preview** recommendations to deploy and maintain the endpoint protection solutions on your machines. For full details, [see the release note](release-notes-archive.md#two-new-recommendations-for-managing-endpoint-protection-solutions-in-preview).
Learn more:
### Key Vault recommendations changed to "audit"
+**Estimated date for change:** June 2022
+ The Key Vault recommendations listed here are currently disabled so that they don't impact your secure score. We will change their effect to "audit". | Recommendation name | Recommendation ID |
The new release will bring the following capabilities:
|External accounts with owner permissions should be removed from your subscription|c3b6ae71-f1f0-31b4-e6c1-d5951285d03d| |External accounts with read permissions should be removed from your subscription|a8c6a4ad-d51e-88fe-2979-d3ee3c864f8b| |External accounts with write permissions should be removed from your subscription|04e7147b-0deb-9796-2e5c-0336343ceb3d|+ #### Recommendations rename This update, will rename two recommendations, and revise their descriptions. The assessment keys will remain unchanged.
-| Property | Current value | New update's change |
-|--|--|--|
-|**First recommendation**| - | - |
-|Assessment key | e52064aa-6853-e252-a11e-dffc675689c2 | No change |
-| Name | [Deprecated accounts with owner permissions should be removed from your subscription](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e52064aa-6853-e252-a11e-dffc675689c2) | Subscriptions should be purged of accounts that are blocked in Active Directory and have owner permissions. |
-| Description | User accounts that have been blocked from signing in, should be removed from your subscriptions. These accounts can be targets for attackers looking to find ways to access your data without being noticed. | User accounts that have been blocked from signing into Active Directory, should be removed from your subscriptions. These accounts can be targets for attackers looking to find ways to access your data without being noticed. <br> Learn more about securing the identity perimeter in [Azure Identity Management and access control security best practices](../security/fundamentals/identity-management-best-practices.md). |
-| Related policy | [Deprecated accounts with owner permissions should be removed from your subscription](https://ms.portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2febb62a0c-3560-49e1-89ed-27e074e9f8ad) | Subscriptions should be purged of accounts that are blocked in Active Directory and have owner permissions. |
-|**Second recommendation**| - | - |
-| Assessment key | 00c6d40b-e990-6acf-d4f3-471e747a27c4 | No change |
-| Name | [Deprecated accounts should be removed from your subscription](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/00c6d40b-e990-6acf-d4f3-471e747a27c4) | Subscriptions should be purged of accounts that are blocked in Active Directory and have read and write permissions. |
-| Description | User accounts that have been blocked from signing in, should be removed from your subscriptions. <br> These accounts can be targets for attackers looking to find ways to access your data without being noticed. | User accounts that have been blocked from signing into Active Directory, should be removed from your subscriptions. These accounts can be targets for attackers looking to find ways to access your data without being noticed. <br> Learn more about securing the identity perimeter in [Azure Identity Management and access control security best practices](../security/fundamentals/identity-management-best-practices.md). |
-| Related policy | [Deprecated accounts should be removed from your subscription](https://ms.portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6b1cbf55-e8b6-442f-ba4c-7246b6381474) | Subscriptions should be purged of accounts that are blocked in Active Directory and have read and write permissions. |
+| Property | Current value | New update's change |
+|--|--|--|
+|**First recommendation**| - | - |
+| Assessment key | e52064aa-6853-e252-a11e-dffc675689c2 | No change |
+| Name | [Deprecated accounts with owner permissions should be removed from your subscription](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e52064aa-6853-e252-a11e-dffc675689c2) | Subscriptions should be purged of accounts that are blocked in Active Directory and have owner permissions. |
+| Description | User accounts that have been blocked from signing in, should be removed from your subscriptions. These accounts can be targets for attackers looking to find ways to access your data without being noticed. | User accounts that have been blocked from signing into Active Directory, should be removed from your subscriptions. These accounts can be targets for attackers looking to find ways to access your data without being noticed. <br> Learn more about securing the identity perimeter in [Azure Identity Management and access control security best practices](../security/fundamentals/identity-management-best-practices.md). |
+| Related policy | [Deprecated accounts with owner permissions should be removed from your subscription](https://ms.portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2febb62a0c-3560-49e1-89ed-27e074e9f8ad) | Subscriptions should be purged of accounts that are blocked in Active Directory and have owner permissions. |
+|**Second recommendation**| - | - |
+| Assessment key | 00c6d40b-e990-6acf-d4f3-471e747a27c4 | No change |
+| Name | [Deprecated accounts should be removed from your subscription](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/00c6d40b-e990-6acf-d4f3-471e747a27c4) | Subscriptions should be purged of accounts that are blocked in Active Directory and have read and write permissions. |
+| Description | User accounts that have been blocked from signing in, should be removed from your subscriptions. <br> These accounts can be targets for attackers looking to find ways to access your data without being noticed. | User accounts that have been blocked from signing into Active Directory, should be removed from your subscriptions. These accounts can be targets for attackers looking to find ways to access your data without being noticed. <br> Learn more about securing the identity perimeter in [Azure Identity Management and access control security best practices](../security/fundamentals/identity-management-best-practices.md). |
+| Related policy | [Deprecated accounts should be removed from your subscription](https://ms.portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6b1cbf55-e8b6-442f-ba4c-7246b6381474) | Subscriptions should be purged of accounts that are blocked in Active Directory and have read and write permissions. |
### Deprecating three VM alerts
defender-for-iot How To Analyze Programming Details Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-analyze-programming-details-changes.md
This section describes how to view programming files and compare versions. Searc
|Programming timeline type | Description | |--|--| | Programmed Device | Provides details about the device that was programmed, including the hostname and file. |
-| Recent Events | Displays the 50 most recent events detected by the sensor. <br />To highlight an event, hover over it and click the star. :::image type="icon" source="media/how-to-work-with-maps/star.png" border="false"::: <br /> The last 50 events can be viewed. |
+| Recent Events | Displays the 50 most recent events detected by the sensor. <br />To highlight an event, hover over it and select the star. :::image type="icon" source="media/how-to-work-with-maps/star.png" border="false"::: <br /> The last 50 events can be viewed. |
| Files | Displays the files detected for the chosen date and the file size on the programmed device. <br /> By default, the maximum number of files available for display per device is 300. <br /> By default, the maximum file size for each file is 15 MB. |
-| File status :::image type="icon" source="media/how-to-work-with-maps/status-v2.png" border="false"::: | File labels indicate the status of the file on the device, including: <br /> **Added**: the file was added to the endpoint on the date or time selected. <br /> **Updated**: The file was updated on the date or time selected. <br /> **Deleted**: This file was removed. <br /> **No label**: The file was not changed. |
+| File status :::image type="icon" source="media/how-to-work-with-maps/status-v2.png" border="false"::: | File labels indicate the status of the file on the device, including: <br /> **Added**: the file was added to the endpoint on the date or time selected. <br /> **Updated**: The file was updated on the date or time selected. <br /> **Deleted**: This file was removed. <br /> **No label**: The file wasn't changed. |
| Programming Device | The device that made the programming change. Multiple devices may have carried out programming changes on one programmed device. The hostname, date and time of the change, and the logged-in user are displayed. | | :::image type="icon" source="media/how-to-work-with-maps/current.png" border="false"::: | Displays the current file installed on the programmed device. | | :::image type="icon" source="media/how-to-work-with-maps/download-text.png" border="false"::: | Download a text file of the code displayed. |
defender-for-iot How To Create Data Mining Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-data-mining-queries.md
# Run data mining queries
-Using data mining queries to get dynamic, granular information about your network devices, including for specific time periods, internet connectivity, ports and protocols, firmware vrsions, programming commands, and device state. You can use data mining queries for:
+Use data mining queries to get dynamic, granular information about your network devices, including for specific time periods, internet connectivity, ports and protocols, firmware versions, programming commands, and device state. You can use data mining queries for:
- **SOC incident response**: Generate a report in real time to help deal with immediate incident response. For example, Data Mining can generate a report for a list of devices that might require patching. - **Forensics**: Generate a report based on historical data for investigative reports. - **Network security**: Generate a report that helps improve overall network security. For example, a report can be generated that lists devices with weak authentication credentials. - **Visibility**: Generate a report that covers all query items to view all baseline parameters of your network.-- **PLC security** Improve security by detecting PLCs in unsecure states for example Program and Remote states.
+- **PLC security** Improve security by detecting PLCs in unsecure states, for example, Program and Remote states.
Data mining information is saved and stored continuously, except for when a device is deleted. Data mining results can be exported and stored externally to a secure server. In addition, the sensor performs automatic daily backups to ensure system continuity and preservation of data.
The following predefined reports are available. These queries are generated in r
- **Internet activity**: Devices that are connected to the internet. - **CVEs**: A list of devices detected with known vulnerabilities, along with CVSSv2 risk scores. - **Excluded CVEs**: A list of all the CVEs that were manually excluded. It is possible to customize the CVE list manually so that the VA reports and attack vectors more accurately reflect your network by excluding or including particular CVEs and updating the CVSSv2 score accordingly.-- **Nonactive devices**: Devices that have not communicated for the past seven days.
+- **Nonactive devices**: Devices that haven't communicated for the past seven days.
- **Active devices**: Active network devices within the last 24 hours. Find these reports in **Analyze** > **Data Mining**. Reports are available for users with Administrator and Security Analyst permissions. Read only users can't access these reports.
defender-for-iot How To Create Trends And Statistics Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-trends-and-statistics-reports.md
Protocol dissection | Displays a pie chart that provides you with a look at the
Active TCP connections | Displays a chart that shows the number of active TCP connections in the system. Incident by type | Displays a pie chart that shows the number of incidents by type. This is the number of alerts generated by each engine over a predefined time period. Devices by vendor | Displays a pie chart that shows the number of devices by vendor. The number of devices for a specific vendor is proportional to the size of that vendor's slice of the pie relative to other device vendors.
-Number of devices per VLAN | Displays a pie chart that shows the number of discovered devices per VLAN. The size of each slice of the pie is proportional to the number of discovered devices relative to the other slices. Each VLAN appears with the VLAN tag assigned by the sensor or name that you have manually added.
+Number of devices per VLAN | Displays a pie chart that shows the number of discovered devices per VLAN. The size of each slice of the pie is proportional to the number of discovered devices relative to the other slices. Each VLAN appears with the VLAN tag assigned by the sensor or name that you've manually added.
Top bandwidth by VLAN | Displays the bandwidth consumption by VLAN. By default, the widget shows five VLANs with the highest bandwidth usage. You can filter the data by the period presented in the widget. Select the down arrow to show more results.
defender-for-iot How To Gain Insight Into Global Regional And Local Threats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-gain-insight-into-global-regional-and-local-threats.md
The site map in the on-premises management console helps you achieve full security coverage by dividing your network into geographical and logical segments that reflect your business topology: -- **Geographical facility level**: A site reflects a number of devices grouped according to a geographical location presented on the map. By default, Microsoft Defender for IoT provides you with a world map. You update the map to reflect your organizational or business structure. For example, use a map that reflects sites across a specific country, city, or industrial campus. When the site color changes on the map, it provides the SOC team with an indication of critical system status in the facility.
+- **Geographical facility level**: A site reflects many devices grouped according to a geographical location presented on the map. By default, Microsoft Defender for IoT provides you with a world map. You update the map to reflect your organizational or business structure. For example, use a map that reflects sites across a specific country, city, or industrial campus. When the site color changes on the map, it provides the SOC team with an indication of critical system status in the facility.
The map is interactive and enables opening each site and delving into this site's information.
defender-for-iot How To Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-install-software.md
This procedure describes how to add a secondary NIC if you've already installed
### Find your port
-If you are having trouble locating the physical port on your device, you can use the following command to find your port:
+If you're having trouble locating the physical port on your device, you can use the following command to find your port:
```bash
sudo ethtool -p <port value> <time-in-seconds>
```
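For example, assuming the port is named `eno1` (a placeholder interface name), the following blinks that port's LED for 30 seconds so you can spot it on the appliance:

```bash
# Blink the LED on port eno1 for 30 seconds to locate it physically.
sudo ethtool -p eno1 30
```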
defender-for-iot How To Investigate All Enterprise Sensor Detections In A Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md
The following table describes the table columns in the device inventory.
| **MAC Address** | The MAC address of the device. | | **Protocols** | The protocols that the device uses. | | **Unacknowledged Alerts** | The number of unhandled alerts associated with this device. |
-| **Is Authorized** | The authorization status of the device:<br />- **True**: The device has been authorized.<br />- **False**: The device has not been authorized. |
+| **Is Authorized** | The authorization status of the device:<br />- **True**: The device has been authorized.<br />- **False**: The device hasn't been authorized. |
| **Is Known as Scanner** | Whether this device performs scanning-like activities in the network. |
-| **Is Programming Device** | Whether this is a programming device:<br />- **True**: The device performs programming activities for PLCs, RTUs, and controllers, which are relevant to engineering stations.<br />- **False**: The device is not a programming device. |
+| **Is Programming Device** | Whether this is a programming device:<br />- **True**: The device performs programming activities for PLCs, RTUs, and controllers, which are relevant to engineering stations.<br />- **False**: The device isn't a programming device. |
| **Groups** | Groups in which this device participates. | | **Last Activity** | The last activity that the device performed. | | **Discovered** | When this device was first seen in the network. |
-| **PLC mode (preview)** | The PLC operating mode includes the Key state (physical) and run state (logical). Possible **Key** states include, Run, Program, Remote, Stop, Invalid, Programming Disabled.Possible Run. The possible **Run** states are Run, Program, Stop, Paused, Exception, Halted, Trapped, Idle, Offline. if both states are the same, only oe state is presented. |
+| **PLC mode (preview)** | The PLC operating mode includes the Key state (physical) and Run state (logical). The possible **Key** states are Run, Program, Remote, Stop, Invalid, and Programming Disabled. The possible **Run** states are Run, Program, Stop, Paused, Exception, Halted, Trapped, Idle, and Offline. If both states are the same, only one state is presented. |
## What is an Inventory device?
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
You can continue to work with Defender for IoT features even if the activation f
### About activation files for cloud-connected sensors
-Sensors that are cloud connected are not limited by time periods for their activation file. The activation file for cloud-connected sensors is used to ensure the connection to Defender for IoT.
+Sensors that are cloud connected aren't limited by time periods for their activation file. The activation file for cloud-connected sensors is used to ensure the connection to Defender for IoT.
### Upload new activation files
You might need to upload a new activation file for an onboarded sensor when:
### Troubleshoot activation file upload
-You'll receive an error message if the activation file could not be uploaded. The following events might have occurred:
+You'll receive an error message if the activation file couldn't be uploaded. The following events might have occurred:
-- **For locally connected sensors**: The activation file is not valid. If the file is not valid, go to [Defender for IoT in the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started). On the **Sensor Management** page, select the sensor with the invalid file, and download a new activation file.
+- **For locally connected sensors**: The activation file isn't valid. If the file isn't valid, go to [Defender for IoT in the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started). On the **Sensor Management** page, select the sensor with the invalid file, and download a new activation file.
- **For cloud-connected sensors**: The sensor can't connect to the internet. Check the sensor's network configuration. If your sensor needs to connect through a web proxy to access the internet, verify that your proxy server is configured correctly on the **Sensor Network Configuration** screen. Verify that \*.azure-devices.net:443 is allowed in the firewall and/or proxy. If wildcards are not supported or you want more control, the FQDN for your specific endpoint (either a sensor, or for legacy connections, an IoT hub) should be opened in your firewall and/or proxy. For more information, see [Reference - IoT Hub endpoints](../../iot-hub/iot-hub-devguide-endpoints.md).
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
Your Defender for IoT deployment is managed through your Microsoft Defender for IoT account subscriptions. You can onboard, edit, and offboard your subscriptions to Defender for IoT in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started).
-For each subscription, you will be asked to define a number of *committed devices*. Committed devices are the approximate number of devices that will be monitored in your enterprise.
+For each subscription, you'll be asked to define a number of *committed devices*. Committed devices are the approximate number of devices that will be monitored in your enterprise.
> [!NOTE] > If you've come to this page because you are a [former CyberX customer](https://blogs.microsoft.com/blog/2020/06/22/microsoft-acquires-cyberx-to-accelerate-and-secure-customers-iot-deployments) and have questions about your account, reach out to your account manager for guidance.
For each subscription, you will be asked to define a number of *committed device
## Subscription billing
-You are billed based on the number of committed devices associated with each subscription.
+You're billed based on the number of committed devices associated with each subscription.
The billing cycle for Microsoft Defender for IoT follows a calendar month. Changes you make to committed devices during the month are implemented one hour after confirming your update, and are reflected in your monthly bill. Subscription *offboarding* also takes effect one hour after confirming the offboard.
This section describes how to onboard a subscription.
1. Select **Subscribe**. 1. Confirm your subscription.
-1. If you have not done so already, onboard a sensor or Set up a sensor.
+1. If you haven't done so already, onboard a sensor or set up a sensor.
## Update committed devices in a subscription
-You may need to update your subscription with more committed devices, or more fewer committed devices. More devices may require monitoring if, for example, you are increasing existing site coverage, discovered more devices than expected or there are network changes such as adding switches.
+You may need to update your subscription with more committed devices, or fewer committed devices. More devices may require monitoring if, for example, you're increasing existing site coverage, you've discovered more devices than expected, or there are network changes such as adding switches.
**To update a subscription:** 1. Go to [Defender for IoT: Getting started](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started) in the Azure portal.
defender-for-iot How To Manage The Alert Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-the-alert-event.md
Users working with alerts on the Defender for IoT portal on Azure should underst
Parameter | Description |--|--| | **Alert Exclusion rules**| Alert *Exclusion rules* defined in the on-premises management console impact the alerts triggered by managed sensors. As a result, the alerts excluded by these rules also won't be displayed in the Alerts page on the portal. For more information, see [Create alert exclusion rules](how-to-work-with-alerts-on-premises-management-console.md#create-alert-exclusion-rules).
-| **Managing alerts on your sensor** | If you change the status of an alert, or learn or mute an alert on a sensor, the changes are not updated in the Defender for IoT Alerts page on the portal. This means that this alert will stay open on the portal. However another alert won't be triggered from sensor for this activity.
-| **Managing alerts in the portal Alerts page** | Changing the status of an alert on the Azure portal, Alerts page or changing the alert severity on the portal, does not impact the alert status or severity in on-premises sensors.
+| **Managing alerts on your sensor** | If you change the status of an alert, or learn or mute an alert on a sensor, the changes are not updated in the Defender for IoT Alerts page on the portal. This means that this alert will stay open on the portal. However, another alert won't be triggered from the sensor for this activity.
+| **Managing alerts in the portal Alerts page** | Changing the status of an alert on the Azure portal, Alerts page or changing the alert severity on the portal, doesn't impact the alert status or severity in on-premises sensors.
## Next steps
defender-for-iot How To Set Up Snmp Mib Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-snmp-mib-monitoring.md
Title: Set up SNMP MIB monitoring description: You can perform sensor health monitoring by using SNMP. The sensor responds to SNMP queries sent from an authorized monitoring server. Previously updated : 01/31/2022 Last updated : 05/31/2022
Note that:
## Prerequisites for AES and 3-DES Encryption Support for SNMP Version 3 - The network management station (NMS) must support Simple Network Management Protocol (SNMP) Version 3 to be able to use this feature.-- It is important to understand the SNMP architecture and the terminology of the architecture to understand the security model used and how the security model interacts with the other subsystems in the architecture.
+- It's important to understand the SNMP architecture and the terminology of the architecture to understand the security model used and how the security model interacts with the other subsystems in the architecture.
- Before you begin configuring SNMP monitoring, you need to open UDP port 161 in the firewall (see the example rule below).
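For example, on a Linux host that uses `ufw` (an assumption; adapt the rule to whatever firewall fronts the sensor), opening UDP 161 to the monitoring server only might look like this:

```bash
# Allow SNMP queries (UDP 161) from the authorized monitoring server only.
# 10.1.0.5 is a placeholder NMS address.
sudo ufw allow from 10.1.0.5 to any port 161 proto udp
```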
Note that:
| Parameter | Description | |--|--|
- | **Username** | The SNMP username can contain up to 32 characters and include any combination of alphanumeric characters (uppercase letters, lowercase letters, and numbers). Spaces are not allowed. <br /> <br />The username for the SNMP v3 authentication must be configured on the system and on the SNMP server. |
+ | **Username** | The SNMP username can contain up to 32 characters and include any combination of alphanumeric characters (uppercase letters, lowercase letters, and numbers). Spaces aren't allowed. <br /> <br />The username for the SNMP v3 authentication must be configured on the system and on the SNMP server. |
| **Password** | Enter a case-sensitive authentication password. The authentication password can contain 8 to 12 characters and include any combination of alphanumeric characters (uppercase letters, lowercase letters, and numbers). <br /> <br/>The password for the SNMP v3 authentication must be configured on the system and on the SNMP server. | | **Auth Type** | Select MD5 or SHA-1. | | **Encryption** | Select DES (56 bit key size)<sup>[1](#1)</sup> or AES (AES 128 bits supported)<sup>[2](#2)</sup>. |
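A hedged sketch of how the monitoring server might then query the sensor with net-snmp tools (the username, passwords, and sensor address are placeholder values, and the auth/privacy protocols must match what you selected above):

```bash
# Query the sensor over SNMP v3 with SHA-1 authentication and AES encryption.
# All credentials and the 10.1.0.10 address are placeholders.
snmpwalk -v3 -l authPriv \
  -u sensor-monitor \
  -a SHA -A 'AuthPassw0rd' \
  -x AES -X 'PrivPassw0rd' \
  10.1.0.10 1.3.6.1.2.1.1
```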
defender-for-iot How To View Information Per Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-view-information-per-zone.md
The following tools are available for viewing devices and device information fro
To view alerts associated with a specific zone: -- Select the alert icon form the **Zone** window.
+- Select the alert icon from the **Zone** window.
:::image type="content" source="media/how-to-work-with-asset-inventory-information/business-unit-view-v2.png" alt-text="The default Business Unit view with examples.":::
The following additional zone information is available:
- **Connectivity status**: If a sensor is disconnected, connect from the sensor. See [Connect sensors to the on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md#connect-sensors-to-the-on-premises-management-console). -- **Update progress**: If the connected sensor is being upgraded, upgrade statuses will appear. During upgrade, the on-premises management console does not receive device information from the sensor.
+- **Update progress**: If the connected sensor is being upgraded, upgrade statuses will appear. During upgrade, the on-premises management console doesn't receive device information from the sensor.
## Next steps
defender-for-iot How To Work With Threat Intelligence Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-threat-intelligence-packages.md
This option is available for both *cloud connected* and *locally managed* sensor
## Review package update status on the sensor ##
-The package update status and version information is displayed in the sensor **System Settings**, **Threat Intelligence** section.
+The package update status and version information are displayed in the sensor **System Settings**, **Threat Intelligence** section.
## Review package information for cloud connected sensors ##
event-hubs Event Hubs Capture Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-capture-overview.md
Title: Capture streaming events - Azure Event Hubs | Microsoft Docs description: This article provides an overview of the Capture feature that allows you to capture events streaming through Azure Event Hubs. Previously updated : 02/16/2021 Last updated : 05/31/2022 # Capture events through Azure Event Hubs in Azure Blob Storage or Azure Data Lake Storage
Event Hubs Capture enables you to specify your own Azure Blob storage account an
Captured data is written in [Apache Avro][Apache Avro] format: a compact, fast, binary format that provides rich data structures with inline schema. This format is widely used in the Hadoop ecosystem, Stream Analytics, and Azure Data Factory. More information about working with Avro is available later in this article.
+> [!NOTE]
+> When you use no code editor in the Azure portal, you can capture streaming data in Event Hubs in an Azure Data Lake Storage Gen2 account in the **Parquet** format. For more information, see [How to: capture data from Event Hubs in Parquet format](../stream-analytics/capture-event-hub-data-parquet.md?toc=%2Fazure%2Fevent-hubs%2Ftoc.json) and [Tutorial: capture Event Hubs data in Parquet format and analyze with Azure Synapse Analytics](../stream-analytics/event-hubs-parquet-capture-tutorial.md?toc=%2Fazure%2Fevent-hubs%2Ftoc.json).
+ ### Capture windowing Event Hubs Capture enables you to set up a window to control capturing. This window is a minimum size and time configuration with a "first wins policy," meaning that the first trigger encountered causes a capture operation. If you have a fifteen-minute, 100 MB capture window and send 1 MB per second, the size window triggers before the time window. Each partition captures independently and writes a completed block blob at the time of capture, named for the time at which the capture interval was encountered. The storage naming convention is as follows:
The date values are padded with zeroes; an example filename might be:

```
https://mystorageaccount.blob.core.windows.net/mycontainer/mynamespace/myeventhub/0/2017/12/08/03/03/17.avro
```
-In the event that your Azure storage blob is temporarily unavailable, Event Hubs Capture will retain your data for the data retention period configured on your event hub and back fill the data once your storage account is available again.
+If your Azure storage blob is temporarily unavailable, Event Hubs Capture will retain your data for the data retention period configured on your event hub and back fill the data once your storage account is available again.
### Scaling throughput units or processing units In the standard tier of Event Hubs, the traffic is controlled by [throughput units](event-hubs-scalability.md#throughput-units) and in the premium tier Event Hubs, it's controlled by [processing units](event-hubs-scalability.md#processing-units). Event Hubs Capture copies data directly from the internal Event Hubs storage, bypassing throughput unit or processing unit egress quotas and saving your egress for other processing readers, such as Stream Analytics or Spark.
-Once configured, Event Hubs Capture runs automatically when you send your first event, and continues running. To make it easier for your downstream processing to know that the process is working, Event Hubs writes empty files when there is no data. This process provides a predictable cadence and marker that can feed your batch processors.
+Once configured, Event Hubs Capture runs automatically when you send your first event, and continues running. To make it easier for your downstream processing to know that the process is working, Event Hubs writes empty files when there's no data. This process provides a predictable cadence and marker that can feed your batch processors.
## Setting up Event Hubs Capture
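Capture is typically enabled in the portal, but as a rough sketch the same configuration can be expressed with the Azure CLI (all names, and the 5-minute/300 MB window, are placeholder values):

```bash
# Enable Capture on an existing event hub, writing Avro blobs to a storage container.
# Window: whichever triggers first, 300 seconds or ~314 MB per partition.
az eventhubs eventhub update \
  --resource-group myResourceGroup \
  --namespace-name mynamespace \
  --name myeventhub \
  --enable-capture true \
  --capture-interval 300 \
  --capture-size-limit 314572800 \
  --destination-name EventHubArchive.AzureBlockBlob \
  --storage-account mystorageaccount \
  --blob-container mycontainer
```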
Apache Avro has complete Getting Started guides for [Java][Java] and [Python][Py
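To spot-check a captured blob locally, one option (assuming the Apache Avro `avro-tools` utility is installed and the blob has been downloaded as `capture.avro`, a placeholder filename) is:

```bash
# Dump the first few captured events as JSON for inspection.
avro-tools tojson capture.avro | head -n 5
```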
Event Hubs Capture is metered similarly to [throughput units](event-hubs-scalability.md#throughput-units) (standard tier) or [processing units](event-hubs-scalability.md#processing-units) (in premium tier): as an hourly charge. The charge is directly proportional to the number of throughput units or processing units purchased for the namespace. As throughput units or processing units are increased and decreased, Event Hubs Capture meters increase and decrease to provide matching performance. The meters occur in tandem. For pricing details, see [Event Hubs pricing](https://azure.microsoft.com/pricing/details/event-hubs/).
-Capture does not consume egress quota as it is billed separately.
+Capture doesn't consume egress quota as it is billed separately.
## Integration with Event Grid
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[Deutsche Telekom AG](https://www.t-systems.com/de/en/cloud-and-infrastructure/manage-it-efficiently/managed-azure/cloudconnect-for-azure)** | Supported |Supported | Frankfurt2 | | **du datamena** |Supported |Supported | Dubai2 | | **[eir](https://www.eirevo.ie/cloud-services/cloud-connectivity)** |Supported |Supported | Dublin|
-| **[Epsilon Global Communications](https://www.epsilontel.com/solutions/direct-cloud-connect)** |Supported |Supported | Singapore, Singapore2 |
+| **[Epsilon Global Communications](https://epsilontel.com/solutions/cloud-connect/)** |Supported |Supported | Singapore, Singapore2 |
| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** |Supported |Supported | Amsterdam, Amsterdam2, Atlanta, Berlin, Bogota, Canberra2, Chicago, Dallas, Dubai2, Dublin, Frankfurt, Frankfurt2, Geneva, Hong Kong SAR, London, London2, Los Angeles*, Los Angeles2, Melbourne, Miami, Milan, New York, Osaka, Paris, Quebec City, Rio de Janeiro, Sao Paulo, Seattle, Seoul, Silicon Valley, Singapore, Singapore2, Stockholm, Sydney, Tokyo, Toronto, Washington DC, Zurich</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Please create new circuits in Los Angeles2.* | | **Etisalat UAE** |Supported |Supported | Dubai | | **[euNetworks](https://eunetworks.com/services/solutions/cloud-connect/microsoft-azure-expressroute/)** |Supported |Supported | Amsterdam, Amsterdam2, Dublin, Frankfurt, London |
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/overview.md
Just as a blueprint allows an engineer or an architect to sketch a project's des
Azure Blueprints enables cloud architects and central information technology groups to define a repeatable set of Azure resources that implements and adheres to an organization's standards, patterns, and requirements. Azure Blueprints makes it possible for development teams to rapidly
-build and stand up new environments with trust they're building within organizational compliance
+build and start up new environments with trust they're building within organizational compliance
with a set of built-in components, such as networking, to speed up development and delivery. Blueprints are a declarative way to orchestrate the deployment of various resource templates and
governance Gov Dod Impact Level 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-dod-impact-level-4.md
initiative definition.
|[Azure File Sync should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d320205-c6a1-4ac6-873d-46224024e8e2) |Creating a private endpoint for the indicated Storage Sync Service resource allows you to address your Storage Sync Service resource from within the private IP address space of your organization's network, rather than through the internet-accessible public endpoint. Creating a private endpoint by itself does not disable the public endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Storage/StorageSync_PrivateEndpoint_AuditIfNotExists.json) | |[Azure Machine Learning workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) | |[Azure Service Bus namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c06e275-d63d-4540-b761-71f364c2111d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Service Bus namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/service-bus-messaging/private-link-service](../../../service-bus-messaging/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_PrivateEndpoint_Audit.json) |
-|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit.json) |
+|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) |
|[Azure Synapse workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) | |[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Network/WAF_AFD_Enabled_Audit.json) | |[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](../../../private-link/index.yml). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
initiative definition.
|[Azure File Sync should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d320205-c6a1-4ac6-873d-46224024e8e2) |Creating a private endpoint for the indicated Storage Sync Service resource allows you to address your Storage Sync Service resource from within the private IP address space of your organization's network, rather than through the internet-accessible public endpoint. Creating a private endpoint by itself does not disable the public endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Storage/StorageSync_PrivateEndpoint_AuditIfNotExists.json) | |[Azure Machine Learning workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) | |[Azure Service Bus namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c06e275-d63d-4540-b761-71f364c2111d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Service Bus namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/service-bus-messaging/private-link-service](../../../service-bus-messaging/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_PrivateEndpoint_Audit.json) |
-|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit.json) |
+|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) |
|[Azure Synapse workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) | |[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Network/WAF_AFD_Enabled_Audit.json) | |[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](../../../private-link/index.yml). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
initiative definition.
|[Azure File Sync should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d320205-c6a1-4ac6-873d-46224024e8e2) |Creating a private endpoint for the indicated Storage Sync Service resource allows you to address your Storage Sync Service resource from within the private IP address space of your organization's network, rather than through the internet-accessible public endpoint. Creating a private endpoint by itself does not disable the public endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Storage/StorageSync_PrivateEndpoint_AuditIfNotExists.json) | |[Azure Machine Learning workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) | |[Azure Service Bus namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c06e275-d63d-4540-b761-71f364c2111d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Service Bus namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/service-bus-messaging/private-link-service](../../../service-bus-messaging/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_PrivateEndpoint_Audit.json) |
-|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit.json) |
+|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) |
|[Azure Synapse workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) | |[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](../../../private-link/index.yml). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) | |[Cognitive Services accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) |
initiative definition.
|[Azure File Sync should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d320205-c6a1-4ac6-873d-46224024e8e2) |Creating a private endpoint for the indicated Storage Sync Service resource allows you to address your Storage Sync Service resource from within the private IP address space of your organization's network, rather than through the internet-accessible public endpoint. Creating a private endpoint by itself does not disable the public endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Storage/StorageSync_PrivateEndpoint_AuditIfNotExists.json) | |[Azure Machine Learning workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) | |[Azure Service Bus namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c06e275-d63d-4540-b761-71f364c2111d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Service Bus namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/service-bus-messaging/private-link-service](../../../service-bus-messaging/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_PrivateEndpoint_Audit.json) |
-|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit.json) |
+|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) |
|[Azure Synapse workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) | |[Cognitive Services should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcddd188c-4b82-4c48-a19d-ddf74ee66a01) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about private links at: [https://go.microsoft.com/fwlink/?linkid=2129800](../../../private-link/index.yml). |Audit, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_EnablePrivateEndpoints_Audit.json) | |[Container registries should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](../../../container-registry/container-registry-private-link.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) |
initiative definition.
|[Azure File Sync should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d320205-c6a1-4ac6-873d-46224024e8e2) |Creating a private endpoint for the indicated Storage Sync Service resource allows you to address your Storage Sync Service resource from within the private IP address space of your organization's network, rather than through the internet-accessible public endpoint. Creating a private endpoint by itself does not disable the public endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Storage/StorageSync_PrivateEndpoint_AuditIfNotExists.json) | |[Azure Machine Learning workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) | |[Azure Service Bus namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c06e275-d63d-4540-b761-71f364c2111d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Service Bus namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/service-bus-messaging/private-link-service](../../../service-bus-messaging/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_PrivateEndpoint_Audit.json) |
-|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit.json) |
+|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) |
|[Azure Synapse workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) | |[Cognitive Services should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcddd188c-4b82-4c48-a19d-ddf74ee66a01) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about private links at: [https://go.microsoft.com/fwlink/?linkid=2129800](../../../private-link/index.yml). |Audit, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_EnablePrivateEndpoints_Audit.json) | |[Container registries should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](../../../container-registry/container-registry-private-link.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) |
Additional articles about Azure Policy:
- See the [initiative definition structure](../concepts/initiative-definition-structure.md). - Review other examples at [Azure Policy samples](./index.md). - Review [Understanding policy effects](../concepts/effects.md).-- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
+- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
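The table rows above carry everything needed to try one of these definitions outside the initiative, since the definition ID is embedded in each portal link. As a minimal, non-authoritative sketch (the assignment name, subscription ID, and resource group below are placeholders, and `effect` is the conventional parameter name for policies listed with "Audit, Deny, Disabled"), the SignalR definition could be assigned with Azure CLI:

```azurecli-interactive
# Assign the built-in "Azure SignalR Service should use private link" definition
# (ID 53503636-bcc9-4748-9663-5348217f160f, from the table above) at resource group scope.
az policy assignment create \
  --name signalr-private-link-audit \
  --policy 53503636-bcc9-4748-9663-5348217f160f \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group> \
  --params '{"effect": {"value": "Audit"}}'
```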
governance Gov Dod Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-dod-impact-level-5.md
initiative definition.
|[Azure File Sync should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d320205-c6a1-4ac6-873d-46224024e8e2) |Creating a private endpoint for the indicated Storage Sync Service resource allows you to address your Storage Sync Service resource from within the private IP address space of your organization's network, rather than through the internet-accessible public endpoint. Creating a private endpoint by itself does not disable the public endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Storage/StorageSync_PrivateEndpoint_AuditIfNotExists.json) | |[Azure Machine Learning workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) | |[Azure Service Bus namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c06e275-d63d-4540-b761-71f364c2111d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Service Bus namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/service-bus-messaging/private-link-service](../../../service-bus-messaging/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_PrivateEndpoint_Audit.json) |
-|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit.json) |
+|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) |
|[Azure Synapse workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) | |[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Network/WAF_AFD_Enabled_Audit.json) | |[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](../../../private-link/index.yml). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
initiative definition.
|[Azure File Sync should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d320205-c6a1-4ac6-873d-46224024e8e2) |Creating a private endpoint for the indicated Storage Sync Service resource allows you to address your Storage Sync Service resource from within the private IP address space of your organization's network, rather than through the internet-accessible public endpoint. Creating a private endpoint by itself does not disable the public endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Storage/StorageSync_PrivateEndpoint_AuditIfNotExists.json) | |[Azure Machine Learning workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) | |[Azure Service Bus namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c06e275-d63d-4540-b761-71f364c2111d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Service Bus namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/service-bus-messaging/private-link-service](../../../service-bus-messaging/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_PrivateEndpoint_Audit.json) |
-|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit.json) |
+|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) |
|[Azure Synapse workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) | |[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Network/WAF_AFD_Enabled_Audit.json) | |[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](../../../private-link/index.yml). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
initiative definition.
|[Azure File Sync should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d320205-c6a1-4ac6-873d-46224024e8e2) |Creating a private endpoint for the indicated Storage Sync Service resource allows you to address your Storage Sync Service resource from within the private IP address space of your organization's network, rather than through the internet-accessible public endpoint. Creating a private endpoint by itself does not disable the public endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Storage/StorageSync_PrivateEndpoint_AuditIfNotExists.json) | |[Azure Machine Learning workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) | |[Azure Service Bus namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c06e275-d63d-4540-b761-71f364c2111d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Service Bus namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/service-bus-messaging/private-link-service](../../../service-bus-messaging/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_PrivateEndpoint_Audit.json) |
-|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit.json) |
+|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) |
|[Azure Synapse workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) | |[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](../../../private-link/index.yml). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) | |[Cognitive Services accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) |
initiative definition.
|[Azure File Sync should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d320205-c6a1-4ac6-873d-46224024e8e2) |Creating a private endpoint for the indicated Storage Sync Service resource allows you to address your Storage Sync Service resource from within the private IP address space of your organization's network, rather than through the internet-accessible public endpoint. Creating a private endpoint by itself does not disable the public endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Storage/StorageSync_PrivateEndpoint_AuditIfNotExists.json) | |[Azure Machine Learning workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) | |[Azure Service Bus namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c06e275-d63d-4540-b761-71f364c2111d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Service Bus namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/service-bus-messaging/private-link-service](../../../service-bus-messaging/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_PrivateEndpoint_Audit.json) |
-|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit.json) |
+|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) |
|[Azure Synapse workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) | |[Cognitive Services should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcddd188c-4b82-4c48-a19d-ddf74ee66a01) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about private links at: [https://go.microsoft.com/fwlink/?linkid=2129800](../../../private-link/index.yml). |Audit, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_EnablePrivateEndpoints_Audit.json) | |[Container registries should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](../../../container-registry/container-registry-private-link.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) |
initiative definition.
|[Azure File Sync should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d320205-c6a1-4ac6-873d-46224024e8e2) |Creating a private endpoint for the indicated Storage Sync Service resource allows you to address your Storage Sync Service resource from within the private IP address space of your organization's network, rather than through the internet-accessible public endpoint. Creating a private endpoint by itself does not disable the public endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Storage/StorageSync_PrivateEndpoint_AuditIfNotExists.json) | |[Azure Machine Learning workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) | |[Azure Service Bus namespaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c06e275-d63d-4540-b761-71f364c2111d) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Service Bus namespaces, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/service-bus-messaging/private-link-service](../../../service-bus-messaging/private-link-service.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_PrivateEndpoint_Audit.json) |
-|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit.json) |
+|[Azure SignalR Service should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](../../../azure-signalr/howto-private-endpoints.md). |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) |
|[Azure Synapse workspaces should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72d11df1-dd8a-41f7-8925-b05b960ebafc) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Synapse workspace, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/synapse-analytics/security/how-to-connect-to-workspace-with-private-links](../../../synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceUsePrivateLinks_Audit.json) | |[Cognitive Services should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcddd188c-4b82-4c48-a19d-ddf74ee66a01) |Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about private links at: [https://go.microsoft.com/fwlink/?linkid=2129800](../../../private-link/index.yml). |Audit, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_EnablePrivateEndpoints_Audit.json) | |[Container registries should use private link](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](../../../container-registry/container-registry-private-link.md). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) |
Additional articles about Azure Policy:
- See the [initiative definition structure](../concepts/initiative-definition-structure.md). - Review other examples at [Azure Policy samples](./index.md). - Review [Understanding policy effects](../concepts/effects.md).-- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
+- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
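Because the definitions above run with audit effects, the usual first step toward the remediation article linked above is to query compliance state. A hedged Azure CLI sketch, assuming an assignment named as in the earlier example:

```azurecli-interactive
# Summarize compliance for one assignment, then list its non-compliant resource IDs.
az policy state summarize --policy-assignment signalr-private-link-audit

az policy state list \
  --filter "policyAssignmentName eq 'signalr-private-link-audit' and complianceState eq 'NonCompliant'" \
  --query "[].resourceId"
```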
guides Azure Operations Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/guides/operations/azure-operations-guide.md
One of the benefits of using Azure is that you can deploy your applications into
### Azure portal
-The Azure portal is a web-based application that can be used to create, manage, and remove Azure resources and services. The Azure portal is located at [portal.azure.com](https://portal.azure.com). It includes a customizable dashboard and tooling for managing Azure resources. It also provides billing and subscription information. For more information, see [Microsoft Azure portal overview](https://azure.microsoft.com/documentation/articles/azure-portal-overview/) and [Manage Azure resources through portal](../../azure-resource-manager/management/manage-resources-portal.md).
+The Azure portal is a web-based application that can be used to create, manage, and remove Azure resources and services. The Azure portal is located at [portal.azure.com](https://portal.azure.com). It includes a customizable dashboard and tooling for managing Azure resources. It also provides billing and subscription information. For more information, see [Microsoft Azure portal overview](/azure/azure-portal/azure-portal-overview) and [Manage Azure resources through portal](../../azure-resource-manager/management/manage-resources-portal.md).
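For readers who script rather than click, the same resource inventory the portal dashboard surfaces is available from Azure CLI. A minimal sketch (the resource group name is a placeholder):

```azurecli-interactive
# List resource groups in the current subscription, then the resources inside one of them.
az group list --output table
az resource list --resource-group <resource-group> --output table
```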
### Resources
hdinsight Apache Domain Joined Create Configure Enterprise Security Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/apache-domain-joined-create-configure-enterprise-security-cluster.md
description: Learn how to create and configure Enterprise Security Package clust
Previously updated : 12/10/2019 Last updated : 05/31/2022
hdinsight Apache Hadoop Develop Deploy Java Mapreduce Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-develop-deploy-java-mapreduce-linux.md
description: Learn how to use Apache Maven to create a Java-based MapReduce appl
Previously updated : 01/16/2020 Last updated : 05/31/2022 # Develop Java MapReduce programs for Apache Hadoop on HDInsight
In this document, you have learned how to develop a Java MapReduce job. See the
* [Use Apache Hive with HDInsight](hdinsight-use-hive.md) * [Use MapReduce with HDInsight](hdinsight-use-mapreduce.md)
-* [Java Developer Center](https://azure.microsoft.com/develop/java/)
+* [Java Developer Center](https://azure.microsoft.com/develop/java/)
hdinsight Hdinsight Hadoop Create Linux Clusters Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-portal.md
description: Learn to create Apache Hadoop, Apache HBase, Apache Storm, or Apach
Previously updated : 08/06/2020 Last updated : 05/31/2022 # Create Linux-based clusters in HDInsight by using the Azure portal
You've successfully created an HDInsight cluster. Now learn how to work with you
* [Use Apache Hive with HDInsight](hadoop/hdinsight-use-hive.md) * [Get started with Apache HBase on HDInsight](hbase/apache-hbase-tutorial-get-started-linux.md)
-* [Customize Linux-based HDInsight clusters by using script actions](hdinsight-hadoop-customize-cluster-linux.md)
+* [Customize Linux-based HDInsight clusters by using script actions](hdinsight-hadoop-customize-cluster-linux.md)
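The article walks through cluster creation in the portal; for automation, a hedged Azure CLI equivalent is sketched below. Every value is a placeholder, and the parameter set is a minimal sketch rather than anything the article prescribes:

```azurecli-interactive
# Create a Linux-based Hadoop cluster in HDInsight; all names and secrets are placeholders.
az hdinsight create \
  --name <cluster-name> \
  --resource-group <resource-group> \
  --type hadoop \
  --http-user admin \
  --http-password '<cluster-login-password>' \
  --ssh-user sshuser \
  --storage-account <storage-account-name>
```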
hdinsight Hdinsight Hadoop Customize Cluster Bootstrap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-customize-cluster-bootstrap.md
description: Learn how to customize HDInsight cluster configuration programmatic
Previously updated : 04/01/2020 Last updated : 05/31/2022 # Customize HDInsight clusters using Bootstrap
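The bootstrap approach described here sets component configuration at creation time instead of editing it afterwards. A non-authoritative CLI sketch: the `hive-site` timeout override is illustrative, all names are placeholders, and `--cluster-configurations` is assumed to accept inline JSON:

```azurecli-interactive
# Supply component configuration (here, a hive-site value) while creating the cluster.
az hdinsight create \
  --name <cluster-name> \
  --resource-group <resource-group> \
  --type hadoop \
  --http-user admin \
  --http-password '<cluster-login-password>' \
  --ssh-user sshuser \
  --storage-account <storage-account-name> \
  --cluster-configurations '{"hive-site": {"hive.metastore.client.socket.timeout": "90s"}}'
```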
healthcare-apis Get Started With Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/get-started-with-fhir.md
You can obtain an Azure AD access token using PowerShell, Azure CLI, REST CLI, o
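With Azure CLI, for example, the token request is a single command; the endpoint below is a placeholder for your own FHIR service URL:

```azurecli-interactive
# Request an Azure AD access token scoped to the FHIR service and capture it.
token=$(az account get-access-token \
  --resource=https://<workspace-name>-<fhir-service-name>.fhir.azurehealthcareapis.com \
  --query accessToken --output tsv)
```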
#### Access using existing tools - [Postman](../fhir/use-postman.md)-- [Rest Client](../fhir/using-rest-client.md)
+- [REST Client](../fhir/using-rest-client.md)
- [cURL](../fhir/using-curl.md) #### Load data
healthcare-apis Register Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/register-application.md
Previously updated : 05/03/2022 Last updated : 05/27/2022
The following steps are required for the DICOM service, but optional for the FHI
1. Select the **API permissions** blade.
- [ ![Add API permissions](dicom/media/dicom-add-api-permissions.png) ](dicom/media/dicom-add-api-permissions.png#lightbox)
+ [ ![Add API permissions](dicom/media/dicom-add-apis-permissions.png) ](dicom/media/dicom-add-apis-permissions.png#lightbox)
2. Select **Add a permission**.
- If you're using Azure Health Data Services, you'll add a permission to the DICOM service by searching for **Azure API for DICOM** under **APIs my organization** uses.
+ If you're using Azure Health Data Services, you'll add a permission to the DICOM service by searching for **Azure Healthcare APIs** under **APIs my organization uses**.
[ ![Search API permissions](dicom/media/dicom-search-apis-permissions.png) ](dicom/media/dicom-search-apis-permissions.png#lightbox)
- The search result for Azure API for DICOM will only return if you've already deployed the DICOM service in the workspace.
+ The search result for Azure Healthcare APIs appears only if you've already deployed the DICOM service in the workspace.
If you're referencing a different resource application, select your DICOM API Resource Application Registration that you created previously under **APIs my organization**.
The following steps are required for the DICOM service, but optional for the FHI
[ ![Select permissions scopes.](dicom/media/dicom-select-scopes.png) ](dicom/media/dicom-select-scopes.png#lightbox) >[!NOTE]
->Use grant_type of client_credentials when trying to otain an access token for the FHIR service using tools such as Postman or REST Client. For more details, visit [Access using Postman](./fhir/use-postman.md) and [Accessing Azure Health Data Services using the REST Client Extension in Visual Studio Code](./fhir/using-rest-client.md).
+>Use grant_type of client_credentials when trying to obtain an access token for the FHIR service using tools such as Postman or REST Client. For more details, visit [Access using Postman](./fhir/use-postman.md) and [Accessing Azure Health Data Services using the REST Client Extension in Visual Studio Code](./fhir/using-rest-client.md).
>Use grant_type of client_credentials or authorization_code when trying to obtain an access token for the DICOM service. For more details, visit [Using DICOM with cURL](dicom/dicomweb-standard-apis-curl.md). Your application registration is now complete.
hpc-cache Hpc Cache Security Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-security-info.md
This security information applies to Microsoft Azure HPC Cache. It addresses com
The HPC Cache Service is only accessible through your private virtual network. Microsoft cannot access your virtual network.
-Learn more about [connecting private networks](/security/benchmark/azure/baselines/hpc-cache-security-baseline.md).
+Learn more about [connecting private networks](/security/benchmark/azure/baselines/hpc-cache-security-baseline).
## Network infrastructure requirements
You can also optionally configure network security groups (NSGs) to control inbo
## Next steps
-* Review [Azure HPC Cache security baseline](/security/benchmark/azure/baselines/hpc-cache-security-baseline.md).
+* Review [Azure HPC Cache security baseline](/security/benchmark/azure/baselines/hpc-cache-security-baseline).
iot-central Howto Create Custom Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-custom-analytics.md
In this how-to guide, you learn how to:
## Prerequisites
-To complete the steps in this how-to guide, you need an active Azure subscription.
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+## Run the script
-### IoT Central application
+The following script creates an IoT Central application, an Event Hubs namespace, and an Azure Databricks workspace in a resource group called `eventhubsrg`.
-Create an IoT Central application on the [Azure IoT Central application manager](https://aka.ms/iotcentral) website with the following settings:
+```azurecli-interactive
-| Setting | Value |
-| - | -- |
-| Pricing plan | Standard |
-| Application template | In-store analytics – condition monitoring |
-| Application name | Accept the default or choose your own name |
-| URL | Accept the default or choose your own unique URL prefix |
-| Directory | Your Azure Active Directory tenant |
-| Azure subscription | Your Azure subscription |
-| Region | Your nearest region |
+# A unique name for the Event Hub Namespace.
+eventhubnamespace="your-event-hubs-name-data-bricks"
-The examples and screenshots in this article use the **United States** region. Choose a location close to you and make sure you create all your resources in the same region.
+# A unique name for the IoT Central application.
+iotcentralapplicationname="your-app-name-data-bricks"
-This application template includes two simulated thermostat devices that send telemetry.
+# A unique name for the Databricks workspace.
+databricksworkspace="your-databricks-name-data-bricks"
-### Resource group
+# Name for the Resource group.
+resourcegroup=eventhubsrg
-Use the [Azure portal to create a resource group](https://portal.azure.com/#create/Microsoft.ResourceGroup) called **IoTCentralAnalysis** to contain the other resources you create. Create your Azure resources in the same location as your IoT Central application.
+eventhub=centralexport
+location=eastus
+authrule=ListenSend
-### Event Hubs namespace
-Use the [Azure portal to create an Event Hubs namespace](https://portal.azure.com/#create/Microsoft.EventHub) with the following settings:
+#Create a resource group for the IoT Central application.
+RESOURCE_GROUP=$(az group create --name $resourcegroup --location $location)
-| Setting | Value |
-| - | -- |
-| Name | Choose your namespace name |
-| Pricing tier | Basic |
-| Subscription | Your subscription |
-| Resource group | IoTCentralAnalysis |
-| Location | East US |
-| Throughput Units | 1 |
+# Create an IoT Central application
+IOT_CENTRAL=$(az iot central app create -n $iotcentralapplicationname -g $resourcegroup -s $iotcentralapplicationname -l $location --mi-system-assigned)
-### Azure Databricks workspace
-Use the [Azure portal to create an Azure Databricks Service](https://portal.azure.com/#create/Microsoft.Databricks) with the following settings:
+# Create an Event Hubs namespace.
+az eventhubs namespace create --name $eventhubnamespace --resource-group $resourcegroup -l $location
+
+# Create an Azure Databricks workspace
+DATABRICKS_JSON=$(az databricks workspace create --resource-group $resourcegroup --name $databricksworkspace --location $location --sku standard)
++
+# Create an Event Hub
+az eventhubs eventhub create --name $eventhub --resource-group $resourcegroup --namespace-name $eventhubnamespace
-| Setting | Value |
-| - | -- |
-| Workspace name | Choose your workspace name |
-| Subscription | Your subscription |
-| Resource group | IoTCentralAnalysis |
-| Location | East US |
-| Pricing Tier | Standard |
-When you've created the required resources, your **IoTCentralAnalysis** resource group looks like the following screenshot:
+# Configure the managed identity for your IoT Central application
+# with permissions to send data to an event hub in the resource group.
+MANAGED_IDENTITY=$(az iot central app identity show --name $iotcentralapplicationname \
+ --resource-group $resourcegroup)
+az role assignment create --assignee $(jq -r .principalId <<< $MANAGED_IDENTITY) --role 'Azure Event Hubs Data Sender' --scope $(jq -r .id <<< $RESOURCE_GROUP)
-## Create an event hub
+# Create a connection string to use in Databricks notebook
+az eventhubs eventhub authorization-rule create --eventhub-name $eventhub --namespace-name $eventhubnamespace --resource-group $resourcegroup --name $authrule --rights Listen Send
+EHAUTH_JSON=$(az eventhubs eventhub authorization-rule keys list --resource-group $resourcegroup --namespace-name $eventhubnamespace --eventhub-name $eventhub --name $authrule)
-You can configure an IoT Central application to continuously export telemetry to an event hub. In this section, you create an event hub to receive telemetry from your IoT Central application. The event hub delivers the telemetry to your Stream Analytics job for processing.
+# Details of your IoT Central application, Databricks workspace, and event hub connection string
-1. In the Azure portal, navigate to your Event Hubs namespace and select **+ Event Hub**.
-1. Name your event hub **centralexport**.
-1. In the list of event hubs in your namespace, select **centralexport**. Then choose **Shared access policies**.
-1. Select **+ Add**. Create a policy named **SendListen** with the **Send** and **Listen** claims.
-1. When the policy is ready, select it in the list, and then copy the **Connection string-primary key** value.
-1. Make a note of this connection string, you use it later when you configure your Databricks notebook to read from the event hub.
+echo "Your IoT Central app: https://$iotcentralapplicationname.azureiotcentral.com/"
+echo "Your Databricks workspace: https://$(jq -r .workspaceUrl <<< $DATABRICKS_JSON)"
+echo "Your event hub connection string is: $(jq -r .primaryConnectionString <<< EHAUTH_JSON)"
-Your Event Hubs namespace looks like the following screenshot:
+```
+Make a note of the three values output by the script; you'll need them in the following steps.
## Configure export in IoT Central In this section, you configure the application to stream telemetry from its simulated devices to your event hub.
-On the [Azure IoT Central application manager](https://aka.ms/iotcentral) website, navigate to the IoT Central application you created previously. To configure the export, first create a destination:
+Use the URL output by the script to navigate to the IoT Central application it created.
1. Navigate to the **Data export** page, then select **Destinations**. 1. Select **+ New destination**.
On the [Azure IoT Central application manager](https://aka.ms/iotcentral) websit
| -- | -- | | Destination name | Telemetry event hub | | Destination type | Azure Event Hubs |
- | Connection string | The event hub connection string you made a note of previously |
-
- The **Event Hub** shows as **centralexport**.
+ | Authorization | System-assigned managed identity |
+ | Host name | The event hub namespace host name, it's the value you assigned to `eventhubnamespace` in the earlier script |
+ | Event Hub | The event hub name, it's the value you assigned to `eventhub` in the earlier script |
:::image type="content" source="media/howto-create-custom-analytics/data-export-1.png" alt-text="Screenshot showing data export destination.":::
To create the export definition:
Wait until the export status is **Healthy** on the **Data export** page before you continue.
+## Create a device template
+
+To add a device template for the MXChip device:
+
+1. Select **+ New** on the **Device templates** page.
+1. On the **Select type** page, scroll down until you find the **MXCHIP AZ3166** tile in the **Featured device templates** section.
+1. Select the **MXCHIP AZ3166** tile, and then select **Next: Review**.
+1. On the **Review** page, select **Create**.
+
+## Add a device
+
+To add a simulated device to your Azure IoT Central application:
+
+1. Choose **Devices** on the left pane.
+1. Choose the **MXCHIP AZ3166** device template that you created previously.
+1. Choose **+ New**.
+1. Enter a device name and ID or accept the default. The maximum length of a device name is 148 characters. The maximum length of a device ID is 128 characters.
+1. Turn the **Simulated** toggle to **On**.
+1. Select **Create**.
+
+Repeat these steps to add two more simulated MXChip devices to your application.
+ ## Configure Databricks workspace
-In the Azure portal, navigate to your Azure Databricks service and select **Launch Workspace**. A new tab opens in your browser and signs you in to your workspace.
+Use the URL output by the script to navigate to the Databricks workspace it created.
### Create a cluster
-On the **Azure Databricks** page, under the list of common tasks, select **New Cluster**.
+Navigate to the **Create** page in your Databricks environment, and then select **+ Cluster**.
Use the information in the following table to create your cluster:
Use the information in the following table to create your cluster:
| - | -- | | Cluster Name | centralanalysis | | Cluster Mode | Standard |
-| Databricks Runtime Version | 5.5 LTS (Scala 2.11, Spark 2.4.5) |
-| Python Version | 3 |
+| Databricks Runtime Version | 10.4 LTS (Scala 2.12, Spark 3.2.1) |
| Enable Autoscaling | No | | Terminate after minutes of inactivity | 30 | | Worker Type | Standard_DS3_v2 |
The following steps show you how to import the library your sample needs into th
Use the following steps to import a Databricks notebook that contains the Python code to analyze and visualize your IoT Central telemetry:
-1. Navigate to the **Workspace** page in your Databricks environment. Select the dropdown next to your account name and then choose **Import**.
+1. Navigate to the **Workspace** page in your Databricks environment. Select the workspace dropdown menu, and then choose **Import**.
+
+ :::image type="content" source="media/howto-create-custom-analytics/databricks-import.png" alt-text="Screenshot of data bricks import.":::
1. Choose to import from a URL and enter the following address: [https://github.com/Azure-Samples/iot-central-docs-samples/blob/master/databricks/IoT%20Central%20Analysis.dbc?raw=true](https://github.com/Azure-Samples/iot-central-docs-samples/blob/master/databricks/IoT%20Central%20Analysis.dbc?raw=true)
Use the following steps to import a Databricks notebook that contains the Python
1. Select the **Workspace** to view the imported notebook:
+ :::image type="content" source="media/howto-create-custom-analytics/import-notebook.png" alt-text="Screenshot of Imported notebook.":::
-5. Edit the code in the first Python cell to add the Event Hubs connection string you saved previously:
+1. Use the connection string output by the script to edit the code in the first Python cell to add the Event Hubs connection string:
```python from pyspark.sql.functions import *
You may see an error in the last cell. If so, check the previous cells are runni
### View smoothed data
-In the notebook, scroll down to cell 14 to see a plot of the rolling average humidity by device type. This plot continuously updates as streaming telemetry arrives:
+In the notebook, scroll down to see a plot of the rolling average humidity by device type. This plot continuously updates as streaming telemetry arrives:
:::image type="content" source="media/howto-create-custom-analytics/telemetry-plot.png" alt-text="Screenshot of Smoothed telemetry plot.":::
You can resize the chart in the notebook.
### View box plots
-In the notebook, scroll down to cell 20 to see the [box plots](https://en.wikipedia.org/wiki/Box_plot). The box plots are based on static data so to update them you must rerun the cell:
+In the notebook, scroll down to see the [box plots](https://en.wikipedia.org/wiki/Box_plot). The box plots are based on static data so to update them you must rerun the cell:
:::image type="content" source="media/howto-create-custom-analytics/box-plots.png" alt-text="Screenshot of box plots.":::
You can resize the plots in the notebook.
## Tidy up
-To tidy up after this how-to and avoid unnecessary costs, delete the **IoTCentralAnalysis** resource group in the Azure portal.
+To tidy up after this how-to and avoid unnecessary costs, you can run the following command to delete the resource group:
-You can delete the IoT Central application from the **Management** page within the application.
+```azurecli-interactive
+az group delete -n eventhubsrg
+```
## Next steps
iot-central Howto Use Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-properties.md
Optional fields, such as display name and description, let you add more details
When you create a property, you can specify complex schema types such as **Object** and **Enum**.
-![Screenshot that shows how to add a capability.](./media/howto-use-properties/property.png)
When you select the complex **Schema**, such as **Object**, you need to define the object, too. The following code shows the definition of an Object property type. This object has two fields with types string and integer.
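The referenced definition isn't reproduced in this digest. Here's a minimal DTDL-style sketch consistent with that description; the property name, display name, and field names are illustrative assumptions, not values from the article:

```json
{
  "@type": "Property",
  "displayName": "Device settings",
  "name": "deviceSettings",
  "writable": true,
  "schema": {
    "@type": "Object",
    "fields": [
      {
        "name": "label",
        "schema": "string"
      },
      {
        "name": "fanSpeed",
        "schema": "integer"
      }
    ]
  }
}
```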
This article uses Node.js for simplicity. For other language examples, see the [
The following view in Azure IoT Central application shows the properties you can see. The view automatically makes the **Device model** property a _read-only device property_. ## Implement writable properties
When the operator sets a writable property in the Azure IoT Central application,
The following view shows the writable properties. When you enter the value and select **Save**, the initial status is **Pending**. When the device accepts the change, the status changes to **Accepted**.
-![Screenshot that shows Pending status.](./media/howto-use-properties/status-pending.png)
-![Screenshot that shows Accepted property.](./media/howto-use-properties/accepted.png)
+
+## Use properties on unassigned devices
+
+You can view and update writable properties on a device that isn't assigned to a device template.
+
+To view existing properties on an unassigned device, navigate to the device in the **Devices** section, select **Manage device**, and then **Device Properties**:
++
+You can update the writable properties in this view:
+ ## Next steps Now that you've learned how to use properties in your Azure IoT Central application, see: * [Payloads](concepts-telemetry-properties-commands.md)
-* [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md)
+* [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md)
iot-develop Troubleshoot Embedded Device Quickstarts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/troubleshoot-embedded-device-quickstarts.md
Title: Troubleshooting the Azure RTOS embedded device quickstarts description: Steps to help you troubleshoot common issues when using the Azure RTOS embedded device quickstarts--++ Last updated 06/10/2021+ # Troubleshooting the Azure RTOS embedded device quickstarts
This issue can occur when you attempt to build the project. It's the result of t
### Description
-The issue can occur because the path to an object file exceeds the default maximum path length in Windows. Examine the build output for a message similar to the following:
+The issue can occur because the path to an object file exceeds the default maximum path length in Windows. Examine the build output for a message similar to the following example:
```output -- Configuring done
CMake Warning in C:/embedded quickstarts/areallyreallyreallylongpath/getting-sta
You can try one of the following options to resolve this error: * Clone the repository into a directory with a shorter path and try again.
-* Follow the instructions in [Maximum Path Length Limitation](/windows/win32/fileio/maximum-file-path-limitation) to enable long paths in Windows 10, version 1607 and later.
+* Follow the instructions in [Maximum Path Length Limitation](/windows/win32/fileio/maximum-file-path-limitation) to enable long paths in Windows 11 and Windows 10, version 1607 and later.
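For the second option, here's a hedged PowerShell sketch of the registry change that the linked article describes; run it in an elevated session, and note that affected processes may need to restart before the setting takes effect:

```powershell
# Enable Win32 long paths system-wide by setting the documented
# LongPathsEnabled registry value (requires administrator rights).
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem' `
  -Name 'LongPathsEnabled' -Value 1 -Type DWord
```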
## Issue: Device can't connect to IoT hub ### Description
-The issue can occur after you've created Azure resources, and flashed your device. When you try to connect your newly flashed device to Azure IoT, you see a console message like the following:
+The issue can occur after you've created Azure resources, and flashed your device. When you try to connect your newly flashed device to Azure IoT, you see a console message like the following example:
```output Unable to resolve DNS for MQTT Server
Unable to resolve DNS for MQTT Server
### Description
-After you flash a device that uses a Wi-Fi connection and try to connect to your Wi-Fi network, you get an error message that Wi-Fi is unable to connect.
+After you flash a device that uses a Wi-Fi connection, you get an error message that Wi-Fi is unable to connect.
### Resolution
After you flash a device that uses a Wi-Fi connection and try to connect to your
### Description
-You can't complete the process of flashing your device. You'll know this if you experience any of the following symptoms:
+You can't complete the process of flashing your device. The following symptoms indicate that flashing is incomplete:
* The **.bin* image file that you built doesn't copy to the device. * The utility that you're using to flash the device gives a warning or error.
You can't complete the process of flashing your device. You'll know this if you
### Description
-After you flash your device and connect it to your computer, you get a message like the following in your terminal software:
+After you flash your device and connect it to your computer, you get output like the following message in your terminal software:
```output Failed to initialize the port.
After you flash your device successfully and connect it to your computer, you se
### Description
-After you flash your device and connect it to your computer, you get a repeated message like the following in your terminal window:
+After you flash your device and connect it to your computer, you get output like the following message in your terminal window:
```output Failed to publish temperature
Failed to publish temperature
### Description
-Because [Defender for IoT module](/defender-for-iot/device-builders/iot-security-azure-rtos) is enabled by default from the device end, you might observe extra messages that are caused by that.
+Because [Defender for IoT module](/azure/defender-for-iot/device-builders/iot-security-azure-rtos) is enabled by default from the device end, you might observe extra messages in the output.
### Resolution
If after reviewing the issues in this article, you still can't monitor your devi
* [STMicroelectronics B-L475E-IOT01](https://www.st.com/content/st_com/en/products/evaluation-tools/product-evaluation-tools/mcu-mpu-eval-tools/stm32-mcu-mpu-eval-tools/stm32-discovery-kits/b-l475e-iot01a.html) * [NXP MIMXRT1060-EVK](https://www.nxp.com/design/development-boards/i-mx-evaluation-and-development-boards/mimxrt1060-evk-i-mx-rt1060-evaluation-kit:MIMXRT1060-EVK)
-* [Microchip ATSAME54-XPro](https://www.microchip.com/developmenttools/productdetails/atsame54-xpro)
+* [Microchip ATSAME54-XPro](https://www.microchip.com/developmenttools/productdetails/atsame54-xpro)
iot-hub Iot Hub Devguide Device Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-device-twins.md
This information is kept at every level (not just the leaves of the JSON structu
Tags, desired properties, and reported properties all support optimistic concurrency. If you need to guarantee order of twin property updates, consider implementing synchronization at the application level by waiting for reported properties callback before sending the next update.
-Tags have an ETag, as per [RFC7232](https://tools.ietf.org/html/rfc7232), that represents the tag's JSON representation. You can use ETags in conditional update operations from the solution back end to ensure consistency.
- Device twins have an ETag (`etag` property), as per [RFC7232](https://tools.ietf.org/html/rfc7232), that represents the twin's JSON representation. You can use the `etag` property in conditional update operations from the solution back end to ensure consistency. This is the only option for ensuring consistency in operations that involve the `tags` container. Device twin desired and reported properties also have a `$version` value that is guaranteed to be incremental. Similarly to an ETag, the version can be used by the updating party to enforce consistency of updates. For example, a device app for a reported property or the solution back end for a desired property.
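As a hedged sketch of a conditional update from the solution back end, using the `azure-iot-hub` Python SDK (the connection string, device ID, and property name are placeholders):

```python
from azure.iot.hub import IoTHubRegistryManager
from azure.iot.hub.models import Twin, TwinProperties

# Placeholder service connection string and device ID.
registry_manager = IoTHubRegistryManager("<iothub-service-connection-string>")

# Read the twin to capture its current ETag.
twin = registry_manager.get_twin("my-device")

# Build a desired-properties patch.
patch = Twin(properties=TwinProperties(desired={"telemetryInterval": 30}))

# Passing the ETag makes the update conditional: the service rejects it
# with a 412 (precondition failed) error if the twin changed after the read.
registry_manager.update_twin("my-device", patch, twin.etag)
```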
iot-hub Iot Hub Devguide Module Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-module-twins.md
This information is kept at every level (not just the leaves of the JSON structu
## Optimistic concurrency
-Tags, desired, and reported properties all support optimistic concurrency.
+Tags, desired properties, and reported properties all support optimistic concurrency. If you need to guarantee order of twin property updates, consider implementing synchronization at the application level by waiting for reported properties callback before sending the next update.
Module twins have an ETag (`etag` property), as per [RFC7232](https://tools.ietf.org/html/rfc7232), that represents the twin's JSON representation. You can use the `etag` property in conditional update operations from the solution back end to ensure consistency. This is the only option for ensuring consistency in operations that involve the `tags` container.
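For the application-level synchronization mentioned above, here's a hedged sketch using the `azure-iot-device` Python SDK (the connection string and property names are illustrative) that tracks the desired-properties `$version` on the module side:

```python
from azure.iot.device import IoTHubModuleClient

# Placeholder connection string for the module identity.
client = IoTHubModuleClient.create_from_connection_string("<module-connection-string>")
last_version = 0

def on_desired_patch(patch):
    global last_version
    if patch["$version"] <= last_version:
        return  # Stale or duplicate patch; skip it to preserve ordering.
    last_version = patch["$version"]
    # Acknowledge the applied version through reported properties.
    client.patch_twin_reported_properties({"lastAppliedVersion": last_version})

client.on_twin_desired_properties_patch_received = on_desired_patch
```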
key-vault Security Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/security-features.md
Azure Private Link Service enables you to access Azure Key Vault and Azure hoste
- Despite known vulnerabilities in TLS protocol, there is no known attack that would allow a malicious agent to extract any information from your key vault when the attacker initiates a connection with a TLS version that has vulnerabilities. The attacker would still need to authenticate and authorize itself, and as long as legitimate clients always connect with recent TLS versions, there is no way that credentials could have been leaked from vulnerabilities at old TLS versions. > [!NOTE]
-> For Azure Key Vault, ensure that the application accessing the Keyvault service should be running on a platform that supports TLS 1.2 or recent version. If the application is dependent on .Net framework, it should be updated as well. You can also make the registry changes mentioned in [this article](/troubleshoot/azure/active-directory/enable-support-tls-environment) to explicitly enable the use of TLS 1.2 at OS level and for .Net framework. To meet with compliance obligations and to improve security posture, Key Vault connections via TLS 1.0 & 1.1 will be deprecated starting on 31st May 2022 and disallowed later in the future.
+> For Azure Key Vault, ensure that any application accessing the Key Vault service runs on a platform that supports TLS 1.2 or a more recent version. If the application depends on the .NET Framework, it should be updated as well. You can also make the registry changes mentioned in [this article](/troubleshoot/azure/active-directory/enable-support-tls-environment) to explicitly enable the use of TLS 1.2 at the OS level and for the .NET Framework. To meet compliance obligations and to improve security posture, Key Vault connections via TLS 1.0 and 1.1 are considered a security risk, and any connections using old TLS protocols will be disallowed in 2023.
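To check which TLS version a client platform actually negotiates with a vault endpoint, here's a minimal Python sketch; the vault host name is a placeholder:

```python
import socket
import ssl

host = "<your-vault-name>.vault.azure.net"  # placeholder vault host

context = ssl.create_default_context()
# Refuse anything older than TLS 1.2, matching the service's direction.
context.minimum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.version())  # for example, 'TLSv1.2' or 'TLSv1.3'
```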
## Key Vault authentication options
key-vault Tutorial Rotation Dual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/tutorial-rotation-dual.md
The best way to authenticate to Azure services is by using a [managed identity](
This tutorial shows how to automate the periodic rotation of secrets for databases and services that use two sets of authentication credentials. Specifically, this tutorial shows how to rotate Azure Storage account keys stored in Azure Key Vault as secrets. You'll use a function triggered by Azure Event Grid notification. > [!NOTE]
-> Storage account keys can be automatically managed in Key Vault if you provide shared access signature tokens for delegated access to the storage account. There are services that require storage account connection strings with access keys. For that scenario, we recommend this solution.
+> For Storage account services, using Azure Active Directory to authorize requests is recommended. For more information, see [Authorize access to blobs using Azure Active Directory](../../storage/blobs/authorize-access-azure-active-directory.md). There are services that require storage account connection strings with access keys. For that scenario, we recommend this solution.
Here's the rotation solution described in this tutorial:
In this solution, Azure Key Vault stores storage account individual access keys
* Azure Key Vault. * Two Azure storage accounts.
+> [!NOTE]
+> Rotating a shared storage account key revokes any account-level shared access signature (SAS) tokens generated from that key. After storage account key rotation, you must regenerate account-level SAS tokens to avoid disruptions to applications.
+ You can use this deployment link if you don't have an existing key vault and existing storage accounts: [![Link that's labelled Deploy to Azure.](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FKeyVault-Rotation-StorageAccountKey-PowerShell%2Fmaster%2FARM-Templates%2FInitial-Setup%2Fazuredeploy.json)
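As a hedged illustration of the rotation that the earlier note describes (the resource names are placeholders, and the `--key` value assumes the CLI's primary/secondary naming):

```azurecli
# Regenerate (rotate) the primary access key; any account-level SAS
# tokens signed with the old key stop working after this call.
az storage account keys renew \
  --resource-group <resource-group-name> \
  --account-name <storage-account-name> \
  --key primary
```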
lab-services Classroom Labs Fundamentals 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-fundamentals-1.md
+
+ Title: Architecture fundamentals with lab accounts in Azure Lab Services | Microsoft Docs
+description: This article covers the fundamental resources used by Lab Services and the basic architecture of a lab that uses lab accounts.
++ Last updated : 05/30/2022++++
+# Architecture Fundamentals in Azure Lab Services when using lab accounts
++
+Azure Lab Services is a SaaS (software as a service) solution, which means that the resources needed by Lab Services are handled for you. This article will cover the fundamental resources used by Lab Services and basic architecture of a lab.
+
+Azure Lab Services does provide a couple of areas that allow you to use your own resources with Lab Services. For more information about using VMs on your own network, see how to [peer a virtual network](how-to-connect-peer-virtual-network.md). To reuse images from an Azure Compute Gallery, see how to [attach a compute gallery](how-to-attach-detach-shared-image-gallery.md).
+
+Below is the basic architecture of a lab. The lab account is hosted in your subscription. The student VMs, along with the resources needed to support the VMs, are hosted in a subscription owned by Azure Lab Services. Let's talk about what is in Azure Lab Service's subscriptions in more detail.
++
+## Hosted Resources
+
+The resources required to run a lab are hosted in one of the Microsoft-managed Azure subscriptions. Resources include:
+
+- template virtual machine for the educator
+- virtual machine for each student
+- network-related items such as a load balancer, virtual network, and network security group.
+
+These subscriptions are monitored for suspicious activity. It's important to note that this monitoring is done externally to the virtual machines through VM extension or network pattern monitoring. If [shutdown on disconnect](how-to-enable-shutdown-disconnect.md) is enabled, a diagnostic extension is enabled on the virtual machine. The extension allows Lab Services to be informed of the remote desktop protocol (RDP) session disconnect event.
+
+## Virtual Network
+
+Each lab is isolated by its own virtual network. If the lab has a [peered virtual network](how-to-connect-peer-virtual-network.md), then each lab is isolated by its own subnet. Students connect to their virtual machine through a load balancer. No student virtual machines have a public IP address; they only have a private IP address. The connection string for the student will be the public IP address of the load balancer and a random port between 49152 and 65535. Inbound rules on the load balancer forward the connection, depending on the operating system, to either port 22 (SSH) or port 3389 (RDP) of the appropriate virtual machine. An NSG prevents outside traffic on any other ports.
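For example, a student's SSH connection might look like the following sketch, where the IP address and port are hypothetical values assigned by the service:

```bash
# Connect through the load balancer's public IP and the randomly
# assigned port, which forwards to port 22 (SSH) on the student VM.
ssh -p 52134 student@20.124.0.15
```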
+
+## Access control to the virtual machines
+
+Lab Services handles the student's ability to perform actions like start and stop on their virtual machines. It also controls access to their VM connection information.
+
+Lab Services also handles the registration of students to the service. There are currently two different access settings: restricted and nonrestricted. For more information, see the [manage lab users](how-to-configure-student-usage.md#send-invitations-to-users) article. Restricted access means Lab Services verifies that students are added as users before allowing access. Nonrestricted means any user can register as long as they have the registration link and there's capacity in the lab. Nonrestricted access can be useful for hackathon events.
+
+Student VMs that are hosted in the lab have a username and password set by the creator of the lab. Alternately, the creator of the lab can allow registered students to choose their own password on first sign-in.
+
+## Next steps
+
+To learn more about features available in Lab Services, see [Azure Lab Services concepts](classroom-labs-concepts.md) and [Azure Lab Services overview](lab-services-overview.md).
lab-services Classroom Labs Fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-fundamentals.md
Title: Architecture Fundamentals in Azure Lab Services | Microsoft Docs
description: This article will cover the fundamental resources used by Lab Services and basic architecture of a lab. Previously updated : 11/19/2021 Last updated : 05/30/2022 + # Architecture Fundamentals in Azure Lab Services Azure Lab Services is a SaaS (software as a service) solution, which means that the resources needed by Lab Services are handled for you. This article will cover the fundamental resources used by Lab Services and basic architecture of a lab.
-Azure Lab Services does provide a couple of areas that allow you to use your own resources in conjunction with Lab Services. For more information about using VMs on your own network, see how to [peer a virtual network](how-to-connect-peer-virtual-network.md). If using the April 2022 Update, see [Connect to your virtual network in Azure Lab Services](how-to-connect-vnet-injection.md) to use virtual network injection instead of virtual network peering. To reuse images from an Azure Compute Gallery, see how to [attach a compute gallery](how-to-attach-detach-shared-image-gallery.md).
+Azure Lab Services does provide a couple of areas that allow you to use your own resources with Lab Services. For more information about using VMs on your own network, see [Connect to your virtual network in Azure Lab Services](how-to-connect-vnet-injection.md) to use virtual network injection instead of virtual network peering. To reuse images from an Azure Compute Gallery, see how to [attach a compute gallery](how-to-attach-detach-shared-image-gallery.md).
-Below is the basic architecture of a lab. The lab account or lab plan is hosted in your subscription. The student VMs, along with the resources needed to support the VMs are hosted in a subscription owned by Azure Lab Services. Let's talk about what is in Azure Lab Service's subscriptions in more detail.
+Below is the basic architecture of a lab. The lab plan is hosted in your subscription. The student VMs, along with the resources needed to support the VMs, are hosted in a subscription owned by Azure Lab Services. Let's talk about what is in Azure Lab Service's subscriptions in more detail.
-![labs basic architecture](./media/classroom-labs-fundamentals/labservices-basic-architecture.png)
## Hosted Resources
-The resources required to run a lab are hosted in one of the Microsoft-managed Azure subscriptions. Resources include a template virtual machine for the educator, virtual machine for each student, and network-related items such as a load balancer, virtual network, and network security group. These subscriptions are monitored for suspicious activity. It is important to note that this monitoring is done externally to the virtual machines through VM extension or network pattern monitoring. If [shutdown on disconnect](how-to-enable-shutdown-disconnect.md) is enabled, a diagnostic extension is enabled on the virtual machine. The extension allows Lab Services to be informed of the remote desktop protocol (RDP) session disconnect event.
+The resources required to run a lab are hosted in one of the Microsoft-managed Azure subscriptions. Resources include:
+
+- template virtual machine for the educator
+- virtual machine for each student
+- network-related items such as a load balancer, virtual network, and network security group
+
+These subscriptions are monitored for suspicious activity. It's important to note that this monitoring is done externally to the virtual machines through VM extension or network pattern monitoring. If [shutdown on disconnect](how-to-enable-shutdown-disconnect.md) is enabled, a diagnostic extension is enabled on the virtual machine. The extension allows Lab Services to be informed of the remote desktop protocol (RDP) session disconnect event.
## Virtual Network
-> [!NOTE]
-> For the latest experience in Azure Lab Services using your virtual network, see [Connect to your virtual network](how-to-connect-vnet-injection.md). This experience replaces the peer virtual network experience.
+Each lab is isolated by its own virtual network. If the lab is using [advanced networking](how-to-connect-vnet-injection.md), then each lab uses the same subnet that has been delegated to Azure Lab Services and connected to the lab plan.
+
+Students connect to their virtual machine through a load balancer. No student virtual machines have a public IP address; they only have a private IP address. The connection string for the student will be the public IP address of the load balancer and a random port between:
+
+- 4980-4989 and 5000-6999 for SSH connections
+- 4990-4999 and 7000-8999 for RDP connections
-Each lab is isolated by its own virtual network. If the lab has a [peered virtual network](how-to-connect-peer-virtual-network.md), then each lab is isolated by its own subnet. Students connect to their virtual machine through a load balancer. No student virtual machines have a public IP address; they only have a private ip address. The connection string for the student will be the public IP address of the load balancer and a random port between 49152 and 65535. Inbound rules on the load balancer forward the connection, depending on the operating system, to either port 22 (SSH) or port 3389 (RDP) of the appropriate virtual machine. An NSG prevents outside traffic on any other ports.
+Inbound rules on the load balancer forward the connection, depending on the operating system, to either port 22 (SSH) or port 3389 (RDP) of the appropriate virtual machine. An NSG prevents outside traffic on any other ports.
## Access control to the virtual machines Lab Services handles the student's ability to perform actions like start and stop on their virtual machines. It also controls access to their VM connection information.
-Lab Services also handles the registration of students to the service. There are currently two different access settings: restricted and nonrestricted. For more information, see the [manage lab users](how-to-configure-student-usage.md#send-invitations-to-users) article. Restricted access means Lab Services verifies that the students are added as user before allowing access. Nonrestricted means any user can register as long as they have the registration link and there is capacity in the lab. Nonrestricted can be useful for hackathon events.
+Lab Services also handles the registration of students to the service. There are currently two different access settings: restricted and nonrestricted. For more information, see the [manage lab users](how-to-configure-student-usage.md#send-invitations-to-users) article. Restricted access means Lab Services verifies that students are added as users before allowing access. Nonrestricted means any user can register as long as they have the registration link and there's capacity in the lab. Nonrestricted access can be useful for hackathon events.
Student VMs that are hosted in the lab have a username and password set by the creator of the lab. Alternately, the creator of the lab can allow registered students to choose their own password on first sign-in.
lab-services How To Configure Firewall Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-configure-firewall-settings.md
Each organization or school will configure their own network in a way that best fits their needs. Sometimes that includes setting firewall rules that block Remote Desktop Protocol (RDP) or Secure Shell (SSH) connections to machines outside their own network. Because Azure Lab Services runs in the public cloud, some extra configuration may be needed to allow students to access their VM when connecting from the campus network.
-Each lab uses single public IP address and multiple ports. All VMs, both the template VM and student VMs, will use this public IP address. The public IP address won't change for the life of lab. Each VM will have a different port number. The port numbers range is 49152 - 65535. The combination of public IP address and port number is used to connect educators and students to the correct VM. This article will cover how to find the specific public IP address used by a lab. That information can be used to update inbound and outbound firewall rules so students can access their VMs.
+Each lab uses a single public IP address and multiple ports. All VMs, both the template VM and student VMs, will use this public IP address. The public IP address won't change for the life of the lab. Each VM will have a different port number. The port number range is 49152 - 65535. If using the April 2022 Update (preview), the port ranges for SSH connections are 4980-4989 and 5000-6999. The port ranges for RDP connections are 4990-4999 and 7000-8999. The combination of public IP address and port number is used to connect educators and students to the correct VM. This article will cover how to find the specific public IP address used by a lab. That information can be used to update inbound and outbound firewall rules so students can access their VMs.
>[!IMPORTANT] >Each lab will have a different public IP address.
The public IP addresses for each lab are listed in the **All labs** page of the
## Conclusion
-Now we know the public IP address for the lab. Inbound and outbound rules can be created for the organization's firewall for the public ip address and the port range 49152 - 65535. Once the rules are updated, students can access their VMs without the network firewall blocking access.
+Now we know the public IP address for the lab. Inbound and outbound rules can be created for the organization's firewall for the public IP address and the port range 49152 - 65535. Once the rules are updated, students can access their VMs without the network firewall blocking access.
## Next steps
lab-services How To Enable Nested Virtualization Template Vm Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-enable-nested-virtualization-template-vm-using-script.md
Last updated 06/26/2020
Nested virtualization enables you to create a multi-VM environment inside a lab's template virtual machine. Publishing the template will provide each user in the lab with a virtual machine set up with multiple VMs within it. For more information about nested virtualization and Azure Lab Services, see [Enable nested virtualization on a template virtual machine in Azure Lab Services](how-to-enable-nested-virtualization-template-vm.md).
-The steps in this article focus on setting up nested virtualization for Windows Server 2016, Windows Server 2019, or Windows 10. You will use a script to set up template machine with Hyper-V. The following steps will guide you through how to use the [Lab Services Hyper-V scripts](https://github.com/Azure/azure-devtestlab/tree/master/samples/ClassroomLabs/Scripts/HyperV).
+The steps in this article focus on setting up nested virtualization for Windows Server 2016, Windows Server 2019, or Windows 10. You will use a script to set up template machine with Hyper-V. The following steps will guide you through how to use the [Lab Services Hyper-V scripts](https://github.com/Azure/LabServices/tree/main/General_Scripts/PowerShell/HyperV).
>[!IMPORTANT] >Select **Large (nested virtualization)** or **Medium (nested virtualization)** for the virtual machine size when creating the lab. Nested virtualization will not work otherwise.
load-testing Tutorial Cicd Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/tutorial-cicd-github-actions.md
Title: 'Tutorial: Identify performance regressions with Azure Load Testing and GitHub Actions'
+ Title: 'Tutorial: Automate regression testing with GitHub Actions'
description: 'In this tutorial, you learn how to automate performance regression testing by using Azure Load Testing and GitHub Actions CI/CD workflows.' Previously updated : 03/28/2022 Last updated : 05/30/2022 #Customer intent: As an Azure user, I want to learn how to automatically test builds for performance regressions on every pull request and/or deployment by using GitHub Actions. # Tutorial: Identify performance regressions with Azure Load Testing Preview and GitHub Actions
-This tutorial describes how to automate performance regression testing by using Azure Load Testing Preview and GitHub Actions. You'll set up a GitHub Actions CI/CD workflow to deploy a sample Node.js application on Azure and trigger a load test using the [Azure Load Testing action](https://github.com/marketplace/actions/azure-load-testing). Once the load test finishes, you'll use the Azure Load Testing dashboard to identify performance issues.
+This tutorial describes how to automate performance regression testing with Azure Load Testing Preview and GitHub Actions.
-You'll deploy a sample Node.js web app on Azure App Service. The web app uses Azure Cosmos DB for storing the data. The sample application also contains an Apache JMeter script to load test three APIs.
+You'll set up a GitHub Actions CI/CD workflow to deploy a sample Node.js application on Azure and trigger a load test using the [Azure Load Testing action](https://github.com/marketplace/actions/azure-load-testing).
-If you're using Azure Pipelines for your CI/CD workflows, see the corresponding [Azure Pipelines tutorial](./tutorial-cicd-azure-pipelines.md).
+You'll then define test failure criteria to ensure the application meets your goals. When a criterion isn't met, the CI/CD pipeline will fail. For more information, see [Define load test failure criteria](./how-to-define-test-criteria.md).
+
+Finally, you'll make the load test configurable by passing parameters from the CI/CD pipeline to the JMeter script. For example, you could use a GitHub secret to pass an authentication token to the script. For more information, see [Parameterize load tests with secrets and environment variables](./how-to-parameterize-load-tests.md).
-Learn more about the [key concepts for Azure Load Testing](./concept-load-testing-concepts.md).
+If you're using Azure Pipelines for your CI/CD workflows, see the corresponding [Azure Pipelines tutorial](./tutorial-cicd-azure-pipelines.md).
You'll learn how to:
You'll learn how to:
## Set up the sample application repository
-To get started with this tutorial, you first need to set up a sample Node.js web application. The sample application contains a GitHub Actions workflow definition to deploy the application on Azure and trigger a load test.
+To get started with this tutorial, you first need to set up a sample Node.js web application. The sample application repository contains a GitHub Actions workflow definition that deploys the Node.js application on Azure and then triggers a load test.
[!INCLUDE [azure-load-testing-set-up-sample-application](../../includes/azure-load-testing-set-up-sample-application.md)]
First, you'll create an Azure Active Directory [service principal](../active-dir
> [!NOTE] > Azure Login supports multiple ways to authenticate with Azure. For other authentication options, see the [Azure and GitHub integration site](/azure/developer/github).
- The output is the role assignment credentials that provide access to your resource. The command should output a JSON object similar to this.
+ The output is the role assignment credentials that provide access to your resource. The command outputs a JSON object similar to the following snippet.
```json {
First, you'll create an Azure Active Directory [service principal](../active-dir
} ```
-1. Copy this JSON object, which you can use to authenticate from GitHub.
+1. Copy this JSON object. You'll store this value as a GitHub secret in a later step.
-1. Grant permissions to the service principal to create and run tests with Azure Load Testing. The **Load Test Contributor** role grants permissions to create, manage and run tests in an Azure Load Testing resource.
+1. Assign the service principal the **Load Test Contributor** role, which grants permission to create, manage and run tests in an Azure Load Testing resource.
First, retrieve the ID of the service principal object by running this Azure CLI command:
First, you'll create an Azure Active Directory [service principal](../active-dir
az ad sp list --filter "displayname eq 'my-load-test-cicd'" -o table ```
- Next, run the following Azure CLI command to assign the *Load Test Contributor* role to the service principal.
+ Next, assign the **Load Test Contributor** role to the service principal.
+
+ Replace the placeholder text `<sp-object-id>` with the `ObjectId` value from the previous Azure CLI command. Also, replace `<subscription-id>` with your Azure subscription ID.
```azurecli az role assignment create --assignee "<sp-object-id>" \
First, you'll create an Azure Active Directory [service principal](../active-dir
--scope /subscriptions/<subscription-id>/resourceGroups/<resource-group-name> \ --subscription "<subscription-id>" ```
+You now have a service principal that has the necessary permissions to create and run a load test.
+ ### Configure the GitHub secret
-You'll add a GitHub secret **AZURE_CREDENTIALS** to your repository for the service principal you created in the previous step. The Azure Login action in the GitHub Actions workflow uses this secret to authenticate with Azure.
+Next, add a GitHub secret **AZURE_CREDENTIALS** to your repository to store the service principal you created earlier. You'll pass this GitHub secret to the Azure Login action to authenticate with Azure.
1. In [GitHub](https://github.com), browse to your forked repository, select **Settings** > **Secrets** > **New repository secret**.
You'll add a GitHub secret **AZURE_CREDENTIALS** to your repository for the serv
### Authenticate with Azure
-You can now use the `AZURE_CREDENTIALS` secret with the Azure Login action in your CI/CD workflow. The *workflow.yml* file in the sample application already has the necessary configuration:
+You can now use the `AZURE_CREDENTIALS` secret with the Azure Login action in your CI/CD workflow. The *.github/workflows/workflow.yml* file in the sample application repository already has this configuration:
```yml jobs:
jobs:
creds: ${{ secrets.AZURE_CREDENTIALS }} ```
-You've now authorized your GitHub Actions workflow to access your Azure Load Testing resource. You'll now configure the CI/CD workflow to run a load test by using Azure Load Testing.
+You've now authorized your GitHub Actions workflow to access your Azure Load Testing resource. You'll now configure the CI/CD workflow to run a load test with Azure Load Testing.
## Configure the GitHub Actions workflow to run a load test
-In this section, you'll set up a GitHub Actions workflow that triggers the load test. The sample application repository contains a workflow file *SampleApp.yaml*. The workflow first deploys the sample web application to Azure App Service, and then invokes the load test by using the [Azure Load Testing Action](https://github.com/marketplace/actions/azure-load-testing). The GitHub Actions uses an environment variable to pass the URL of the web application to the Apache JMeter script.
+In this section, you'll set up a GitHub Actions workflow that triggers the load test by using the [Azure Load Testing Action](https://github.com/marketplace/actions/azure-load-testing).
-The GitHub Actions workflow performs the following steps for every update to the main branch:
+The following code snippet shows an example of how to trigger a load test using the `azure/load-testing` action:
+
+```yml
+- name: 'Azure Load Testing'
+  uses: azure/load-testing@v1
+  with:
+    loadTestConfigFile: 'my-jmeter-script.jmx'
+    loadTestResource: my-load-test-resource
+    resourceGroup: my-resource-group
+    env: |
+      [
+        {
+          "name": "webapp",
+          "value": "my-web-app.azurewebsites.net"
+        }
+      ]
+```
+
+The sample application repository already contains a sample workflow file *.github/workflows/workflow.yml*. The GitHub Actions workflow performs the following steps for every update to the main branch:
- Deploy the sample Node.js application to an Azure App Service web app. - Create an Azure Load Testing resource using the *ARMTemplate/template.json* Azure Resource Manager (ARM) template, if the resource doesn't exist yet. Learn more about ARM templates [here](../azure-resource-manager/templates/overview.md).
Follow these steps to configure the GitHub Actions workflow for your environment
LOAD_TEST_RESOURCE_GROUP: "<your-azure-load-testing-resource-group-name>" ```
- These variables are used to configure the GitHub actions for deploying the sample application to Azure, and to connect to your Azure Load Testing resource.
+ These variables are used to configure the GitHub Actions for deploying the sample application to Azure, and to connect to your Azure Load Testing resource.
1. Commit your changes directly to the main branch.
- :::image type="content" source="./media/tutorial-cicd-github-actions/commit-workflow.png" alt-text="Screenshot that shows selections for committing changes to the GitHub Actions workflow file.":::
- The commit will trigger the GitHub Actions workflow in your repository. You can verify that the workflow is running by going to the **Actions** tab. ## View load test results
-To view the results of the load test in the GitHub Actions workflow log:
+When the load test finishes, view the results in the GitHub Actions workflow log:
1. Select the **Actions** tab in your GitHub repository to view the list of workflow runs.
To view the results of the load test in the GitHub Actions workflow log:
1. On the screen that shows the workflow run's details, select the **loadTestResults** artifact to download the result files for the load test. :::image type="content" source="./media/tutorial-cicd-github-actions/github-actions-artifacts.png" alt-text="Screenshot that shows artifacts of the workflow run.":::
-
+ ## Define test pass/fail criteria
-In this section, you'll add criteria to determine whether your load test passes or fails. If at least one of the pass/fail criteria evaluates to `true`, the load test is unsuccessful.
+You can use test failure criteria to define thresholds for when a load test should fail. For example, a test might fail when the percentage of failed requests surpasses a specific value.
+
+When at least one of the failure criteria is met, the load test status is failed. As a result, the CI/CD workflow will also fail and the development team can be alerted.
-You can specify these criteria in the test configuration YAML file:
+You can specify these criteria in the [test configuration YAML file](./reference-test-config-yaml.md):
1. Edit the *SampleApp.yml* file in your GitHub repository.
You can specify these criteria in the test configuration YAML file:
## Pass parameters to your load tests from the workflow
-Next, you'll parameterize your load test by using workflow variables. These parameters can be secrets, such as passwords, or non-secrets.
+Next, you'll parameterize your load test by using workflow variables. These parameters can be secrets, such as passwords, or non-secrets. For more information, see [Parameterize load tests with secrets and environment variables](./how-to-parameterize-load-tests.md).
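As a hedged sketch (the secret and environment variable names are assumptions, not from the sample), the `azure/load-testing` action also accepts a `secrets` input that forwards GitHub secrets to the JMeter script:

```yml
- name: 'Azure Load Testing'
  uses: azure/load-testing@v1
  with:
    loadTestConfigFile: 'SampleApp.yaml'
    loadTestResource: ${{ env.LOAD_TEST_RESOURCE }}
    resourceGroup: ${{ env.LOAD_TEST_RESOURCE_GROUP }}
    secrets: |
      [
        {
          "name": "appToken",
          "value": "${{ secrets.MY_APP_TOKEN }}"
        }
      ]
```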
-In this tutorial, you'll reconfigure the sample application to accept only secure requests. To send a secure request, you need to pass a secret value in the HTTP request:
+In this tutorial, you'll now use the *SampleApp_Secrets.jmx* JMeter test script. This script invokes an application endpoint that requires a secure value to be passed as an HTTP header.
-1. Edit the *SampleApp.yaml* file in your GitHub repository.
+1. Edit the *SampleApp.yaml* file in your GitHub repository and update the `testPlan` configuration setting to use the *SampleApp_Secrets.jmx* file.
- Update the `testPlan` configuration setting to use the *SampleApp_Secrets.jmx* file:
+ The `testPlan` setting specifies which JMeter script Azure Load Testing uses.
```yml version: v0.1
In this tutorial, you'll reconfigure the sample application to accept only secur
You've now created a GitHub Actions workflow that uses Azure Load Testing for automatically running load tests. By using pass/fail criteria, you can set the status of the CI/CD workflow. With parameters, you can make the running of load tests configurable.
+* Learn more about the [key concepts for Azure Load Testing](./concept-load-testing-concepts.md).
* Learn more about the [Azure Load Testing Action](https://github.com/marketplace/actions/azure-load-testing). * Learn how to [parameterize a load test](./how-to-parameterize-load-tests.md). * Learn how to [define test pass/fail criteria](./how-to-define-test-criteria.md).
logic-apps Logic Apps Enterprise Integration Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-certificates.md
You can use the following certificate types in your workflows:
* [Public certificates](https://en.wikipedia.org/wiki/Public_key_certificate), which you must purchase from a public internet [certificate authority (CA)](https://en.wikipedia.org/wiki/Certificate_authority). These certificates don't require any private keys.
-* Private certificates or [*self-signed certificates*](https://en.wikipedia.org/wiki/Self-signed_certificate), which you create and issue yourself. However, these certificates require private keys.
+* Private certificates or [*self-signed certificates*](https://en.wikipedia.org/wiki/Self-signed_certificate), which you create and issue yourself. However, these certificates require [private keys in an Azure key vault](#prerequisites).
If you're new to logic apps, review [What is Azure Logic Apps](logic-apps-overview.md)? For more information about B2B enterprise integration, review [B2B enterprise integration workflows with Azure Logic Apps and Enterprise Integration Pack](logic-apps-enterprise-integration-overview.md).
If you're new to logic apps, review [What is Azure Logic Apps](logic-apps-overvi
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
- * [Add a corresponding public certificate](#add-public-certificate) to your key vault. This certificate appears in your [agreement's **Send** and **Receive** settings for signing and encrypting messages](logic-apps-enterprise-integration-agreements.md). For example, review [Reference for AS2 messages settings in Azure Logic Apps](logic-apps-enterprise-integration-as2-message-settings.md).
+ * [Add the corresponding public certificate](#add-public-certificate) to your key vault. This certificate appears in your [agreement's **Send** and **Receive** settings for signing and encrypting messages](logic-apps-enterprise-integration-agreements.md). For example, review [Reference for AS2 messages settings in Azure Logic Apps](logic-apps-enterprise-integration-as2-message-settings.md).
* At least two [trading partners](logic-apps-enterprise-integration-partners.md) and an [agreement between those partners](logic-apps-enterprise-integration-agreements.md) in your integration account. An agreement requires a host partner and a guest partner. Also, an agreement requires that both partners use the same or compatible *business identity* qualifier that's appropriate for an AS2, X12, EDIFACT, or RosettaNet agreement.
If you're new to logic apps, review [What is Azure Logic Apps](logic-apps-overvi
<a name="add-public-certificate"></a>
-## Add a public certificate
+## Use a public certificate
To use a *public certificate* in your workflow, you have to first add the certificate to your integration account.
To use a *public certificate* in your workflow, you have to first add the certif
|-|-|-|-| | **Name** | Yes | <*certificate-name*> | Your certificate's name, which is `publicCert` in this example | | **Certificate Type** | Yes | **Public** | Your certificate's type |
- | **Certificate** | Yes | <*certificate-file-name*> | To browse for the certificate file that you want to add, select the folder icon next to the **Certificate** box. |
+ | **Certificate** | Yes | <*certificate-file-name*> | To browse for the certificate file that you want to add, select the folder icon next to the **Certificate** box. Select the certificate that you want to use. |
||||| ![Screenshot showing the Azure portal and integration account with "Add" selected and the "Add Certificate" pane with public certificate details.](media/logic-apps-enterprise-integration-certificates/public-certificate-details.png)
To use a *public certificate* in your workflow, you have to first add the certif
![Screenshot showing the Azure portal and integration account with the public certificate in the "Certificates" list.](media/logic-apps-enterprise-integration-certificates/new-public-certificate.png)
-<a name="add-public-certificate"></a>
+<a name="add-private-certificate"></a>
-## Add a private certificate
+## Use a private certificate
-To use a *private certificate* in your workflow, you have to first add the certificate to your integration account. Make sure that you've also met the [prerequisites private certificates](#prerequisites).
+To use a *private certificate* in your workflow, you have to first meet the [prerequisites for private keys](#prerequisites), and add a public certificate to your integration account.
1. In the [Azure portal](https://portal.azure.com) search box, enter `integration accounts`, and select **Integration accounts**.
To use a *private certificate* in your workflow, you have to first add the certi
|-|-|-|-| | **Name** | Yes | <*certificate-name*> | Your certificate's name, which is `privateCert` in this example | | **Certificate Type** | Yes | **Private** | Your certificate's type |
- | **Certificate** | Yes | <*certificate-file-name*> | To browse for the certificate file that you want to add, select the folder icon next to the **Certificate** box. In the key vault that contains your private key, the file you add there is the public certificate. |
+ | **Certificate** | Yes | <*certificate-file-name*> | To browse for the certificate file that you want to add, select the folder icon next to the **Certificate** box. Select the public certificate that corresponds to the private key that's stored in your key vault. |
| **Resource Group** | Yes | <*integration-account-resource-group*> | Your integration account's resource group, which is `Integration-Account-RG` in this example | | **Key Vault** | Yes | <*key-vault-name*> | Your key vault name | | **Key name** | Yes | <*key-name*> | Your key name |
logic-apps Logic Apps Scenario Social Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-scenario-social-serverless.md
Title: Create customer insights dashboard
description: Manage customer feedback, social media data, and more by building a customer dashboard with Azure Logic Apps and Azure Functions. ms.suite: integration-- Last updated 03/15/2018
machine-learning Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/import-data.md
This article describes a component in Azure Machine Learning designer.
Use this component to load data into a machine learning pipeline from existing cloud data services. > [!Note]
-> All functionality provided by this component can be done by **datastore** and **datasets** in the worksapce landing page. We recommend you use **datastore** and **dataset** which includes additional features like data monitoring. To learn more, see [How to Access Data](../v1/how-to-access-data.md) and [How to Register Datasets](../v1/how-to-create-register-datasets.md) article.
+> All functionality provided by this component can be done by **datastore** and **datasets** in the workspace landing page. We recommend you use **datastore** and **dataset** which includes additional features like data monitoring. To learn more, see [How to Access Data](../v1/how-to-access-data.md) and [How to Register Datasets](../v1/how-to-create-register-datasets.md) article.
> After you register a dataset, you can find it in the **Datasets** -> **My Datasets** category in the designer interface. This component is reserved for Studio (classic) users for a familiar experience. >
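For illustration, here's a minimal sketch of registering a file dataset with the v1 SDK (`azureml-core`) so that it shows up under **My Datasets**; the workspace `config.json` and the `sample-data/` folder on the default datastore are assumptions, not values from this article:

```python
# A sketch: register a v1 file dataset (assumes azureml-core is installed,
# a workspace config.json is present, and 'sample-data/' already exists on
# the default datastore -- both names are hypothetical)
from azureml.core import Workspace, Dataset

ws = Workspace.from_config()
datastore = ws.get_default_datastore()

dataset = Dataset.File.from_files(path=(datastore, 'sample-data/'))
dataset.register(workspace=ws, name='my-sample-dataset',
                 description='Registered for use in the designer')
```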
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-train.md
The following shows two ways of creating an MLTable.
```Python from azure.ai.ml.constants import AssetTypes
-from azure.ai.ml import automl
-from azure.ai.ml.entities import JobInput
+from azure.ai.ml import automl, Input
# A. Create MLTable for training data from your local directory
-my_training_data_input = JobInput(
+my_training_data_input = Input(
type=AssetTypes.MLTABLE, path="./data/training-mltable-folder" ) # B. Remote MLTable definition
-my_training_data_input = JobInput(type=AssetTypes.MLTABLE, path="azureml://datastores/workspaceblobstore/paths/Classification/Train")
+my_training_data_input = Input(type=AssetTypes.MLTABLE, path="azureml://datastores/workspaceblobstore/paths/Classification/Train")
``` ### Training, validation, and test data
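As a point of reference, here's a minimal sketch of passing the MLTable input above to an AutoML classification job; the compute name `cpu-cluster` and the target column `y` are assumptions for illustration:

```python
# A sketch: consume the MLTable input in an AutoML classification job
# (compute name "cpu-cluster" and target column "y" are hypothetical)
classification_job = automl.classification(
    compute="cpu-cluster",
    experiment_name="my-automl-experiment",
    training_data=my_training_data_input,
    target_column_name="y",
    primary_metric="accuracy",
    n_cross_validations=5,
)
```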
machine-learning How To Create Register Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-register-data-assets.md
For a complete example, see the [working_with_uris.ipynb notebook](https://githu
# [Python-SDK](#tab/Python-SDK) ```python from azure.ai.ml.entities import Data
-from azure.ai.ml._constants import AssetTypes
+from azure.ai.ml.constants import AssetTypes
# select one from: my_path = 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>' # adls gen2
path: wasbs://mainstorage9c05dabf5c924.blob.core.windows.net/azureml-blobstore-5
### Consume registered URI Folder data assets in job ```python
-from azure.ai.ml.entities import Data, UriReference, JobInput, CommandJob
-from azure.ai.ml._constants import AssetTypes
+from azure.ai.ml import Input, command
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
registered_data_asset = ml_client.data.get(name='titanic', version='1') my_job_inputs = {
- "input_data": JobInput(
+ "input_data": Input(
type=AssetTypes.URI_FOLDER, path=registered_data_asset.id ) }
-job = CommandJob(
+job = command(
code="./src", command='python read_data_asset.py --input_folder ${{inputs.input_data}}', inputs=my_job_inputs,
returned_job.services["Studio"].endpoint
# [Python-SDK](#tab/Python-SDK) ```python from azure.ai.ml.entities import Data
-from azure.ai.ml._constants import AssetTypes
+from azure.ai.ml.constants import AssetTypes
# select one from: my_file_path = '<path>/<file>' # local
Below we show an example of versioning the sample data in this repo. The data is
# [Python-SDK](#tab/Python-SDK) ```python from azure.ai.ml.entities import Data
-from azure.ai.ml._constants import AssetTypes
+from azure.ai.ml.constants import AssetTypes
import mltable my_data = Data(
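To round out the snippet above, a sketch of what the registration might look like; the asset name, version, and local folder are hypothetical, and `ml_client` is the authenticated client used elsewhere in these articles:

```python
# A sketch: version and register an MLTable data asset (the name, version,
# and local folder are hypothetical; assumes an authenticated ml_client)
my_data = Data(
    path="./sample_data",        # folder containing an MLTable file
    type=AssetTypes.MLTABLE,
    name="titanic-mltable",
    version="1",
)
ml_client.data.create_or_update(my_data)
```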
machine-learning How To Manage Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models.md
Use the tabs below to select where your model is located.
```python from azure.ai.ml.entities import Model
-from azure.ai.ml._constants import ModelType
+from azure.ai.ml.constants import ModelType
file_model = Model( path="mlflow-model/model.pkl",
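Once the `Model` object is defined, a short sketch of registering it (assumes the authenticated `MLClient` named `ml_client` used elsewhere in these articles):

```python
# A sketch: register the model in the workspace (assumes an authenticated
# MLClient named ml_client)
registered_model = ml_client.models.create_or_update(file_model)
print(registered_model.name, registered_model.version)
```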
A model can be created from a cloud path using any one of the following supporte
```python from azure.ai.ml.entities import Model
-from azure.ai.ml._constants import ModelType
+from azure.ai.ml.constants import ModelType
cloud_model = Model( path= "azureml://datastores/workspaceblobstore/paths/model.pkl"
Example:
```python from azure.ai.ml.entities import Model
-from azure.ai.ml._constants import ModelType
+from azure.ai.ml.constants import ModelType
run_model = Model( path="runs:/$RUN_ID/model/"
Saving model from a named output:
```python from azure.ai.ml.entities import Model
-from azure.ai.ml._constants import ModelType
+from azure.ai.ml.constants import ModelType
run_model = Model( path="azureml://jobs/$RUN_ID/outputs/artifacts/paths/model/"
machine-learning How To Read Write Data V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-read-write-data-v2.md
ml_client = MLClient(InteractiveBrowserCredential(), subscription_id, resource_g
## Read local data in a job
-You can use data from your current working directory in a training job with the JobInput class.
-The JobInput class allows you to define data inputs from a specific file, `uri_file` or a folder location, `uri_folder`. In the JobInput object, you specify the `path` of where your data is located; the path can be a local path or a cloud path. Azure Machine Learning supports `https://`, `abfss://`, `wasbs://` and `azureml://` URIs.
+You can use data from your current working directory in a training job with the Input class.
+The Input class allows you to define data inputs from a specific file (`uri_file`) or a folder location (`uri_folder`). In the Input object, you specify the `path` where your data is located; the path can be a local path or a cloud path. Azure Machine Learning supports `https://`, `abfss://`, `wasbs://` and `azureml://` URIs.
> [!IMPORTANT] > If the path is local, but your compute is defined to be in the cloud, Azure Machine Learning will automatically upload the data to cloud storage for you.
The JobInput class allows you to define data inputs from a specific file, `uri_f
# [Python-SDK](#tab/Python-SDK) ```python
-from azure.ai.ml.entities import Data, UriReference, JobInput, CommandJob
-from azure.ai.ml._constants import AssetTypes
+from azure.ai.ml import Input, command
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
my_job_inputs = {
- "input_data": JobInput(
+ "input_data": Input(
path='./sample_data', # change to be your local directory type=AssetTypes.URI_FOLDER ) }
-job = CommandJob(
+job = command(
code="./src", # local path where the code is stored command='python train.py --input_folder ${{inputs.input_data}}', inputs=my_job_inputs,
The following code shows how to read in uri_folder type data from Azure Data Lak
```python
-from azure.ai.ml.entities import Data, UriReference, JobInput, CommandJob
-from azure.ai.ml._constants import AssetTypes
+from azure.ai.ml import Input, command
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
my_job_inputs = {
- "input_data": JobInput(
+ "input_data": Input(
path='abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>', # Blob: 'https://<account_name>.blob.core.windows.net/<container_name>/path' type=AssetTypes.URI_FOLDER ) }
-job = CommandJob(
+job = command(
code="./src", # local path where the code is stored command='python train.py --input_folder ${{inputs.input_data}}', inputs=my_job_inputs,
compute: azureml:cpu-cluster
You can read and write data from your job into your cloud-based storage.
-The JobInput defaults the mode - how the input will be exposed during job runtime - to InputOutputModes.RO_MOUNT (read-only mount). Put another way, Azure Machine Learning will mount the file or folder to the compute and set the file/folder to read-only. By design, you can't write to JobInputs only JobOutputs. The data is automatically uploaded to cloud storage.
+The Input defaults the mode - how the input will be exposed during job runtime - to InputOutputModes.RO_MOUNT (read-only mount). Put another way, Azure Machine Learning will mount the file or folder to the compute and set the file/folder to read-only. By design, you can't write to job inputs, only to job outputs. The data is automatically uploaded to cloud storage.
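For example, a minimal sketch of pinning the mode explicitly rather than relying on the default; the blob path is hypothetical, and this assumes `InputOutputModes` is importable from `azure.ai.ml.constants`:

```python
# A sketch: set the input mode to read-only mount explicitly
# (the blob storage path below is hypothetical)
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes, InputOutputModes

my_input = Input(
    type=AssetTypes.URI_FOLDER,
    path="wasbs://data@myaccount.blob.core.windows.net/path",
    mode=InputOutputModes.RO_MOUNT,
)
```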
Matrix of possible types and modes for job inputs and outputs:
As you can see from the table, `eval_download` and `eval_mount` are unique to `m
# [Python-SDK](#tab/Python-SDK) ```python
-from azure.ai.ml.entities import Data, UriReference, JobInput, CommandJob, JobOutput
-from azure.ai.ml._constants import AssetTypes
+from azure.ai.ml import Input, command
+from azure.ai.ml.entities import Data, JobOutput
+from azure.ai.ml.constants import AssetTypes
my_job_inputs = {
- "input_data": JobInput(
+ "input_data": Input(
path='abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>', type=AssetTypes.URI_FOLDER )
my_job_outputs = {
) }
-job = CommandJob(
+job = command(
code="./src", #local path where the code is stored command='python pre-process.py --input_folder ${{inputs.input_data}} --output_folder ${{outputs.output_folder}}', inputs=my_job_inputs,
The following example demonstrates versioning of sample data, and shows how to r
```python from azure.ai.ml.entities import Data
-from azure.ai.ml._constants import AssetTypes
+from azure.ai.ml.constants import AssetTypes
my_data = Data( path="./sample_data/titanic.csv",
To register data that is in a cloud location, you can specify the path with any
```python from azure.ai.ml.entities import Data
-from azure.ai.ml._constants import AssetTypes
+from azure.ai.ml.constants import AssetTypes
my_path = 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>' # adls gen2
The following example demonstrates how to consume `version` 1 of the registered
```python
-from azure.ai.ml.entities import Data, UriReference, JobInput, CommandJob
-from azure.ai.ml._constants import AssetTypes
+from azure.ai.ml import Input, command
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
registered_data_asset = ml_client.data.get(name='titanic', version='1') my_job_inputs = {
- "input_data": JobInput(
+ "input_data": Input(
type=AssetTypes.URI_FOLDER, path=registered_data_asset.id ) }
-job = CommandJob(
+job = command(
code="./src", command='python read_data_asset.py --input_folder ${{inputs.input_data}}', inputs=my_job_inputs,
machine-learning How To Use Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-data.md
Use the tabs below to select where your data is located.
When you pass local data, the data is automatically uploaded to cloud storage as part of the job submission. ```python
-from azure.ai.ml.entities import Data, UriReference, JobInput, CommandJob
-from azure.ai.ml._constants import AssetTypes
+from azure.ai.ml import Input, command
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
my_job_inputs = {
- "input_data": JobInput(
+ "input_data": Input(
path='./sample_data', # change to be your local directory type=AssetTypes.URI_FOLDER ) }
-job = CommandJob(
+job = command(
code="./src", # local path where the code is stored command='python train.py --input_folder ${{inputs.input_data}}', inputs=my_job_inputs,
returned_job.services["Studio"].endpoint
# [ADLS Gen2](#tab/use-adls) ```python
-from azure.ai.ml.entities import Data, UriReference, JobInput, CommandJob
-from azure.ai.ml._constants import AssetTypes
+from azure.ai.ml import Input, command
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
# in this example we my_job_inputs = {
- "input_data": JobInput(
+ "input_data": Input(
path='abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>', type=AssetTypes.URI_FOLDER ) }
-job = CommandJob(
+job = command(
code="./src", # local path where the code is stored command='python train.py --input_folder ${{inputs.input_data}}', inputs=my_job_inputs,
returned_job.services["Studio"].endpoint
# [Blob](#tab/use-blob) ```python
-from azure.ai.ml.entities import Data, UriReference, JobInput, CommandJob
-from azure.ai.ml._constants import AssetTypes
+from azure.ai.ml import Input, command
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
# in this example we my_job_inputs = {
- "input_data": JobInput(
+ "input_data": Input(
path='https://<account_name>.blob.core.windows.net/<container_name>/path', type=AssetTypes.URI_FOLDER ) }
-job = CommandJob(
+job = command(
code="./src", # local path where the code is stored command='python train.py --input_folder ${{inputs.input_data}}', inputs=my_job_inputs,
Use the tabs below to select where your data is located.
# [Blob](#tab/rw-blob) ```python
-from azure.ai.ml.entities import Data, UriReference, JobInput, CommandJob, JobOutput
-from azure.ai.ml._constants import AssetTypes
+from azure.ai.ml import Input, command
+from azure.ai.ml.entities import Data, JobOutput
+from azure.ai.ml.constants import AssetTypes
my_job_inputs = {
- "input_data": JobInput(
+ "input_data": Input(
path='https://<account_name>.blob.core.windows.net/<container_name>/path', type=AssetTypes.URI_FOLDER )
my_job_outputs = {
) }
-job = CommandJob(
+job = command(
code="./src", #local path where the code is stored command='python pre-process.py --input_folder ${{inputs.input_data}} --output_folder ${{outputs.output_folder}}', inputs=my_job_inputs,
returned_job.services["Studio"].endpoint
# [ADLS Gen2](#tab/rw-adls) ```python
-from azure.ai.ml.entities import Data, UriReference, JobInput, CommandJob, JobOutput
-from azure.ai.ml._constants import AssetTypes
+from azure.ai.ml import Input, command
+from azure.ai.ml.entities import Data, JobOutput
+from azure.ai.ml.constants import AssetTypes
my_job_inputs = {
- "input_data": JobInput(
+ "input_data": Input(
path='abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>', type=AssetTypes.URI_FOLDER )
my_job_outputs = {
) }
-job = CommandJob(
+job = command(
code="./src", #local path where the code is stored command='python pre-process.py --input_folder ${{inputs.input_data}} --output_folder ${{outputs.output_folder}}', inputs=my_job_inputs,
returned_job.services["Studio"].endpoint
```python from azure.ai.ml.entities import Data
-from azure.ai.ml._constants import AssetTypes
+from azure.ai.ml.constants import AssetTypes
# select one from: my_path = 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>' # adls gen2
ml_client.data.create_or_update(my_data)
### Consume registered data assets in job ```python
-from azure.ai.ml.entities import Data, UriReference, JobInput, CommandJob
-from azure.ai.ml._constants import AssetTypes
+from azure.ai.ml import Input, command
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
registered_data_asset = ml_client.data.get(name='titanic', version='1') my_job_inputs = {
- "input_data": JobInput(
+ "input_data": Input(
type=AssetTypes.URI_FOLDER, path=registered_data_asset.id ) }
-job = CommandJob(
+job = command(
code="./src", command='python read_data_asset.py --input_folder ${{inputs.input_data}}', inputs=my_job_inputs,
inputs:
The following example shows how to do this using the v2 SDK: ```python
-from azure.ai.ml.entities import Data, UriReference, JobInput, CommandJob
-from azure.ai.ml._constants import AssetTypes
+from azure.ai.ml import Input, command
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
registered_v1_data_asset = ml_client.data.get(name='<ASSET NAME>', version='<VERSION NUMBER>') my_job_inputs = {
- "input_data": JobInput(
+ "input_data": Input(
type=AssetTypes.MLTABLE, path=registered_v1_data_asset.id, mode="eval_mount" ) }
-job = CommandJob(
+job = command(
code="./src", #local path where the code is stored command='python train.py --input_data ${{inputs.input_data}}', inputs=my_job_inputs,
machine-learning Reference Yaml Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-overview.md
The Azure Machine Learning CLI (v2), an extension to the Azure CLI, often uses a
| [Compute cluster (AmlCompute)](reference-yaml-compute-aml.md) | https://azuremlschemas.azureedge.net/latest/amlCompute.schema.json | | [Compute instance](reference-yaml-compute-instance.md) | https://azuremlschemas.azureedge.net/latest/computeInstance.schema.json | | [Attached Virtual Machine](reference-yaml-compute-vm.md) | https://azuremlschemas.azureedge.net/latest/vmCompute.schema.json |
-| [Attached Azure Arc-enabled Kubernetes (KubernetesCompute)](reference-yaml-compute-kubernetes.md) | https://azuremlschemas.azureedge.net/latest/kubernetesCompute.schema.json |
+| [Attached Azure Arc-enabled Kubernetes (KubernetesCompute)](reference-yaml-compute-kubernetes.md) | `https://azuremlschemas.azureedge.net/latest/kubernetesCompute.schema.json` |
## Job
managed-instance-apache-cassandra Dba Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/dba-commands.md
Azure Managed Instance for Apache Cassandra provides automated deployment, scaling, and [management operations](management-operations.md) for open-source Apache Cassandra data centers. The automation in the service should be sufficient for many use cases. However, this article describes how to run DBA commands manually when the need arises. > [!IMPORTANT]
-> Nodetool commands are in public preview.
+> Nodetool and sstable commands are in public preview.
> This feature is provided without a service level agreement, and it's not recommended for production workloads. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-<!-- ## DBA command support
-Azure Managed Instance for Apache Cassandra allows you to run `nodetool` and `sstable` commands via Azure CLI, for routine DBA administration. Not all commands are supported and there are some limitations. For supported commands, see the sections below. -->
- ## DBA command support
-Azure Managed Instance for Apache Cassandra allows you to run `nodetool` commands via Azure CLI, for routine DBA administration. Not all commands are supported and there are some limitations. For supported commands, see the sections below.
+Azure Managed Instance for Apache Cassandra allows you to run `nodetool` and `sstable` commands via Azure CLI, for routine DBA administration. Not all commands are supported and there are some limitations. For supported commands, see the sections below.
>[!WARNING] > Some of these commands can destabilize the Cassandra cluster, so run them carefully and only after testing in non-production environments. Where possible, use the `--dry-run` option first. Microsoft can't offer any SLA or support for issues caused by running commands that alter the default database configuration and/or tables.
-## How to run a nodetool command
+## How to run a `nodetool` command
Azure Managed Instance for Apache Cassandra provides the following Azure CLI command to run DBA commands: ```azurecli-interactive
Both will return a json of the following form:
} ```
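Because the response is plain JSON, it can be scripted. Here's a sketch using Python's standard library; the resource group, cluster name, node IP, and the `status` argument are hypothetical, and the `--arguments` syntax follows the examples in this article:

```python
# A sketch: run "nodetool status" through the Azure CLI and parse the JSON
# response (resource group, cluster name, and host IP are hypothetical;
# assumes the az CLI is installed and signed in)
import json
import subprocess

result = subprocess.run(
    [
        "az", "managed-cassandra", "cluster", "invoke-command",
        "--resource-group", "test-rg",
        "--cluster-name", "test-cluster",
        "--host", "10.0.0.4",
        "--command-name", "nodetool",
        "--arguments", "status=",
    ],
    capture_output=True, text=True, check=True,
)

response = json.loads(result.stdout)
print(response["commandOutput"])
print("exit code:", response["exitCode"])
```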
-<!-- ## How to run an sstable command
+## How to run an `sstable` command
-The `sstable` commands require read/write access to the cassandra data directory and the cassandra database to be stopped. To accomodate this, two additional parameters `--cassandra-stop-start true` and `--readwrite true` need to be given:
+The `sstable` commands require read/write access to the cassandra data directory and the cassandra database to be stopped. To accommodate this, two extra parameters `--cassandra-stop-start true` and `--readwrite true` need to be given:
```azurecli-interactive az managed-cassandra cluster invoke-command --resource-group <test-rg> --cluster-name <test-cluster> --host <ip> --cassandra-stop-start true --readwrite true --command-name sstableutil --arguments "system"="peers"
The `sstable` commands require read/write access to the cassandra data directory
"commandOutput": "Listing files...\n/var/lib/cassandra/data/system/peers-37f71aca7dc2383ba70672528af04d4f/me-1-big-CompressionInfo.db\n/var/lib/cassandra/data/system/peers-37f71aca7dc2383ba70672528af04d4f/me-1-big-Data.db\n/var/lib/cassandra/data/system/peers-37f71aca7dc2383ba70672528af04d4f/me-1-big-Digest.crc32\n/var/lib/cassandra/data/system/peers-37f71aca7dc2383ba70672528af04d4f/me-1-big-Filter.db\n/var/lib/cassandra/data/system/peers-37f71aca7dc2383ba70672528af04d4f/me-1-big-Index.db\n/var/lib/cassandra/data/system/peers-37f71aca7dc2383ba70672528af04d4f/me-1-big-Statistics.db\n/var/lib/cassandra/data/system/peers-37f71aca7dc2383ba70672528af04d4f/me-1-big-Summary.db\n/var/lib/cassandra/data/system/peers-37f71aca7dc2383ba70672528af04d4f/me-1-big-TOC.txt\n", "exitCode": 0 }
-``` -->
+```
-<!-- ## List of supported sstable commands
+## List of supported `sstable` commands
For more information on each command, see https://cassandra.apache.org/doc/latest/cassandra/tools/sstable/index.html
For more information on each command, see https://cassandra.apache.org/doc/lates
* `sstablesplit` * `sstablerepairedset` * `sstableofflinerelevel`
-* `sstableexpiredblockers` -->
+* `sstableexpiredblockers`
-## List of supported nodetool commands
+## List of supported `nodetool` commands
For more information on each command, see https://cassandra.apache.org/doc/latest/cassandra/tools/nodetool/nodetool.html
managed-instance-apache-cassandra Ldap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/ldap.md
+
+ Title: How to enable LDAP authentication in Azure Managed Instance for Apache Cassandra
+description: Learn how to enable LDAP authentication in Azure Managed Instance for Apache Cassandra
++++ Last updated : 05/23/2022++
+# How to enable LDAP authentication in Azure Managed Instance for Apache Cassandra
+
+Azure Managed Instance for Apache Cassandra provides automated deployment and scaling operations for managed open-source Apache Cassandra data centers. This article discusses how to enable LDAP authentication to your clusters and data centers.
+
+> [!IMPORTANT]
+> LDAP authentication is in public preview.
+> This feature is provided without a service level agreement, and it's not recommended for production workloads.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- An Azure Managed Instance for Apache Cassandra cluster. Review how to [create an Azure Managed Instance for Apache Cassandra cluster from the Azure portal](create-cluster-portal.md).
+
+## Deploy an LDAP Server in Azure
+In this section, we'll walk through creating a simple LDAP server on a Virtual Machine in Azure. If you already have an LDAP server running, you can skip this section and review [how to enable LDAP authentication](ldap.md#enable-ldap-authentication).
+
+1. Deploy a Virtual Machine in Azure using Ubuntu Server 18.04 LTS. You can follow the instructions [here](visualize-prometheus-grafana.md#deploy-an-ubuntu-server).
+
+1. Give your server a DNS name:
+
+ :::image type="content" source="./media/ldap/dns.jpg" alt-text="Screenshot of virtual machine d n s name in Azure portal." lightbox="./media/ldap/dns.jpg" border="true":::
+
+1. Install Docker on the virtual machine. We recommend [this](https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-18-04) tutorial.
+
+1. In the home directory, copy and paste the following text, and then press Enter. This command creates a file containing a test LDAP user account.
+
+ ```shell
+ mkdir ldap-user && cd ldap-user && cat >> user.ldif <<EOL
+ dn: uid=admin,dc=example,dc=org
+ uid: admin
+ cn: admin
+ sn: 3
+ objectClass: top
+ objectClass: posixAccount
+ objectClass: inetOrgPerson
+ loginShell: /bin/bash
+ homeDirectory: /home/admin
+ uidNumber: 14583102
+ gidNumber: 14564100
+ userPassword: admin
+ mail: admin@example.com
+ gecos: admin
+ EOL
+ ```
+
+1. Navigate back up to the home directory:
+
+ ```shell
+ cd ..
+ ```
+
+1. Run the following command, replacing `<dnsname>` with the DNS name you created for your LDAP server earlier. This command deploys an LDAP server with TLS enabled in a Docker container, and also copies the user file you created earlier into the container.
+
+ ```shell
+ sudo docker run --hostname <dnsname>.uksouth.cloudapp.azure.com --name <dnsname> -v $(pwd)/ldap-user:/container/service/slapd/assets/test --detach osixia/openldap:1.5.0
+ ```
+
+1. Now copy the certificates folder out of the container (replace `<dnsname>` with the DNS name you created for your LDAP server):
+
+ ```shell
+ sudo docker cp <dnsname>:/container/service/slapd/assets/certs certs
+ ```
+
+1. Verify that the DNS name is correct:
+
+ ```shell
+ openssl x509 -in certs/ldap.crt -text
+ ```
+ :::image type="content" source="./media/ldap/dns-verify.jpg" alt-text="Screenshot of output from command to verify certificate." lightbox="./media/ldap/dns-verify.jpg" border="true":::
+
+1. Copy the `ldap.crt` file to [clouddrive](../cloud-shell/persisting-shell-storage.md) in Azure CLI for use later.
+
+1. Add the user to the LDAP directory (replace `<dnsname>` with the DNS name you created for your LDAP server):
+
+ ```shell
+ sudo docker container exec <dnsname> ldapadd -H ldap://<dnsname>.uksouth.cloudapp.azure.com -D "cn=admin,dc=example,dc=org" -w admin -f /container/service/slapd/assets/test/user.ldif
+ ```
+
+## Enable LDAP authentication
+
+> [!IMPORTANT]
+> If you skipped the above section because you already have an existing LDAP server, please ensure that it has server SSL certificates enabled. The `subject alternative name (dns name)` specified for the certificate must also match the domain of the server that LDAP is hosted on, or authentication will fail.
+
+1. Currently, LDAP authentication is a public preview feature. Run the following command to add the required Azure CLI extension:
+
+ ```azurecli-interactive
+ az extension add --upgrade --name cosmosdb-preview
+ ```
+
+1. Set authentication method to "Ldap" on the cluster, replacing `<resource group>` and `<cluster name>` with the appropriate values:
+
+ ```azurecli-interactive
+ az managed-cassandra cluster update -g <resource group> -c <cluster name> --authentication-method "Ldap"
+ ```
+
+1. Now set properties at the data center level. Replace `<resource group>` and `<cluster name>` with the appropriate values, and `<dnsname>` with the DNS name you created for your LDAP server.
+
+ > [!NOTE]
+   > The following command is based on the LDAP setup in the earlier section. If you skipped that section because you already have an existing LDAP server, provide the corresponding values for that server instead. Ensure that you've uploaded a certificate file like `ldap.crt` to your [clouddrive](../cloud-shell/persisting-shell-storage.md) in Azure CLI.
+
+ ```azurecli-interactive
+ ldap_search_base_distinguished_name='dc=example,dc=org'
+ ldap_server_certificates='/usr/csuser/clouddrive/ldap.crt'
+ ldap_server_hostname='<dnsname>.uksouth.cloudapp.azure.com'
+ ldap_service_user_distinguished_name='cn=admin,dc=example,dc=org'
+ ldap_service_user_password='admin'
+
+   az managed-cassandra datacenter update -g <resource group> -c <cluster name> -d datacenter-1 --ldap-search-base-dn $ldap_search_base_distinguished_name --ldap-server-certs $ldap_server_certificates --ldap-server-hostname $ldap_server_hostname --ldap-service-user-dn $ldap_service_user_distinguished_name --ldap-svc-user-pwd $ldap_service_user_password
+ ```
+
+1. Once this command has completed, you should be able to use [CQLSH](https://cassandra.apache.org/doc/latest/cassandra/tools/cqlsh.html) (see below) or any Apache Cassandra open-source client driver to connect to your managed instance data center with the user added in the above step:
+
+ ```shell
+ export SSL_VALIDATE=false
+ cqlsh --debug --ssl <data-node-ip> -u <user> -p <password>
+ ```
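Alternatively, here's a minimal sketch using the open-source Python driver (`cassandra-driver`); certificate validation is disabled only to mirror `SSL_VALIDATE=false` above, and the node IP placeholder is the same one used with CQLSH:

```python
# A sketch: connect with the Python cassandra-driver using the LDAP user
# created above (validation disabled to mirror SSL_VALIDATE=false; don't
# do this in production)
import ssl
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ssl_context.check_hostname = False
ssl_context.verify_mode = ssl.CERT_NONE

auth = PlainTextAuthProvider(username="admin", password="admin")
cluster = Cluster(["<data-node-ip>"], ssl_context=ssl_context,
                  auth_provider=auth)

session = cluster.connect()
print(session.execute("SELECT release_version FROM system.local").one())
```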
+
+## Next steps
+
+* [LDAP authentication with Azure Active Directory](../active-directory/fundamentals/auth-ldap.md)
+* [Manage Azure Managed Instance for Apache Cassandra resources using Azure CLI](manage-resources-cli.md)
+* [Deploy a Managed Apache Spark Cluster with Azure Databricks](deploy-cluster-databricks.md)
marketplace Azure Resource Manager Test Drive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-resource-manager-test-drive.md
Previously updated : 12/06/2021 Last updated : 06/03/2022
You can use any valid name for your parameters; test drive recognizes parameter
Test drive initializes this parameter with a **Base Uri** of your deployment package so you can use this parameter to construct a Uri of any file included in your package.
+> [!NOTE]
+> The `baseUri` parameter cannot be used in conjunction with a custom script extension.
+ ```JSON "parameters": { ...
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-data-in-replication.md
Modifying the parameter `replicate_wild_ignore_table` used to create replication
- The source server version must be at least MySQL version 5.7. - Our recommendation is to have the same version for source and replica server versions. For example, both must be MySQL version 5.7 or both must be MySQL version 8.0.-- Our recommendation is to have a primary key in each table. If we have table without primary key, you might face slowness in replication. To create primary keys for tables you can use [invisible column](https://dev.mysql.com/doc/refman/8.0/en/create-table-gipks.html) if your MySQL version is greater than 8.0.23(*alter table <table name> add column <column name> bigint auto_increment INVISIBLE PRIMARY KEY;*).
+- Our recommendation is to have a primary key in each table. If a table doesn't have a primary key, replication might be slow. To create primary keys for tables, you can use an [invisible column](https://dev.mysql.com/doc/refman/8.0/en/create-table-gipks.html) if your MySQL version is 8.0.23 or later: `ALTER TABLE <table name> ADD COLUMN <column name> bigint AUTO_INCREMENT INVISIBLE PRIMARY KEY;`.
- The source server should use the MySQL InnoDB engine. - User must have permissions to configure binary logging and create new users on the source server. - Binary log files on the source server shouldn't be purged before the replica applies those changes. If the source is Azure Database for MySQL, see how to configure binlog_expire_logs_seconds for [Flexible server](./concepts-server-parameters.md#binlog_expire_logs_seconds) or [Single server](../concepts-server-parameters.md#binlog_expire_logs_seconds)
mysql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-high-availability.md
Automatic backups, both snapshots and log backups, are performed on locally redu
>[!Note] >For both zone-redundant and same-zone HA:
->* If there's a failure, the time needed for the standby replica to take over the role of primary depends on the binary log application on the standby. So we recommend that you use primary keys on all tables to reduce failover time. Failover times are typically between 60 and 120 seconds.To create primary keys for tables you can use [invisible column](https://dev.mysql.com/doc/refman/8.0/en/create-table-gipks.html) if your MySQL version is greater than 8.0.23(*alter table <table name> add column <column name> bigint auto_increment INVISIBLE PRIMARY KEY;*).
+>* If there's a failure, the time needed for the standby replica to take over the role of primary depends on the binary log application on the standby. So we recommend that you use primary keys on all tables to reduce failover time. Failover times are typically between 60 and 120 seconds. To create primary keys for tables, you can use an [invisible column](https://dev.mysql.com/doc/refman/8.0/en/create-table-gipks.html) if your MySQL version is 8.0.23 or later: `ALTER TABLE <table name> ADD COLUMN <column name> bigint AUTO_INCREMENT INVISIBLE PRIMARY KEY;`.
>* The standby server isn't available for read or write operations. It's a passive standby to enable fast failover. >* Always use a fully qualified domain name (FQDN) to connect to your primary server. Avoid using an IP address to connect. If there's a failover, after the primary and standby server roles are switched, a DNS A record might change. That change would prevent the application from connecting to the new primary server if an IP address is used in the connection string.
mysql How To Read Replicas Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-read-replicas-cli.md
az mysql flexible-server replica list --server-name mydemoserver --resource-grou
Replication to a read replica server can be stopped using the following command: ```azurecli-interactive
-az mysql flexible-server replica stop-replication --replica-name mydemoreplicaserver --resource-group myresourcegroup
+az mysql flexible-server replica stop-replication --name mydemoreplicaserver --resource-group myresourcegroup
``` ### Delete a replica server
object-anchors New Unity Hololens App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/object-anchors/quickstarts/new-unity-hololens-app.md
We'll first set up our project and Unity scene:
1. Select the **Player settings...** button. 1. The **Project Settings** window will open up. 1. Select the **XR Plug-in Management** entry.
-1. Follow the <a href="/windows/mixed-reality/develop/unity/xr-project-setup#configuring-xr-plugin-management-for-openxr" target="_blank">Configuring XR Plugin Management for OpenXR</a> documentation to set up the **OpenXR** with **Microsoft HoloLens feature set** in the **Plug-in Providers** list.
+1. Follow the <a href="/windows/mixed-reality/develop/unity/new-openxr-project-with-mrtk#configure-the-project-for-the-hololens-2" target="_blank">Configuring XR Plugin Management for OpenXR</a> documentation to set up the **OpenXR** with **Microsoft HoloLens feature set** in the **Plug-in Providers** list.
## Set capabilities
We'll first set up our project and Unity scene:
1. Select the **Quality** entry. 1. In the column under the **Universal Windows Platform** logo, select on the arrow at the **Default** row and select **Very Low**. You'll know the setting is applied correctly when the box in the **Universal Windows Platform** column and **Very Low** row is green. 1. Close the **Project Settings** and the **Build Settings** windows.
-1. Follow the <a href="/windows/mixed-reality/develop/unity/xr-project-setup#optimization" target="_blank">Optimization</a> documentation to apply the recommended project settings for HoloLens 2.
+1. Follow the <a href="/windows/mixed-reality/develop/unity/new-openxr-project-with-mrtk#optimization" target="_blank">Optimization</a> documentation to apply the recommended project settings for HoloLens 2.
## Set up the main virtual camera
openshift Howto Byok https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-byok.md
Title: Encrypt OS disks with a customer-managed key (CMK) on Azure Red Hat OpenShift (ARO)
-description: Encrypt OS disks with a customer-managed key (CMK) on Azure Red Hat OpenShift (ARO)
--
+ Title: Encrypt OS disks with a customer-managed key (CMK) on Azure Red Hat OpenShift
+description: Encrypt OS disks with a customer-managed key (CMK) on Azure Red Hat OpenShift
++
-keywords: encryption, byok, aro, deploy, openshift, red hat
+keywords: encryption, byok, deploy, openshift, red hat, key
Last updated 10/18/2021 ms.devlang: azurecli
-# Encrypt OS disks with a customer-managed key (CMK) on Azure Red Hat OpenShift (ARO) (preview)
+# Encrypt OS disks with a customer-managed key on Azure Red Hat OpenShift
-By default, the OS disks of the virtual machines in an Azure Red Hat OpenShift cluster were encrypted with auto-generated keys managed by Microsoft Azure. For additional security, customers can encrypt the OS disks with self-managed keys when deploying an ARO cluster. This features allows for more control by encrypting confidential data with customer-managed keys.
+By default, the OS disks of the virtual machines in an Azure Red Hat OpenShift cluster are encrypted with auto-generated keys managed by Microsoft Azure. For additional security, customers can encrypt the OS disks with self-managed keys when deploying an Azure Red Hat OpenShift cluster. This feature allows for more control by encrypting confidential data with customer-managed keys (CMK).
-Clusters created with customer-managed keys have a default storage class enabled with their keys. Therefore, both OS disks and data disks are encrypted by these keys. The customer-managed keys are stored in Azure Key Vault. For more information about using Azure Key Vault to create and maintain keys, see [Server-side encryption of Azure Disk Storage](../key-vault/general/basic-concepts.md) in the Microsoft Azure documentation.
+Clusters created with customer-managed keys have a default storage class enabled with their keys. Therefore, both OS disks and data disks are encrypted by these keys. The customer-managed keys are stored in Azure Key Vault.
-With host-based encryption, the data stored on the VM host of your ARO agent nodes' VMs is encrypted at rest and flows encrypted to the Storage service. This means the temp disks are encrypted at rest with platform-managed keys. The cache of OS and data disks is encrypted at rest with either platform-managed keys or customer-managed keys depending on the encryption type set on those disks. By default, when using ARO, OS and data disks are encrypted at rest with platform-managed keys, meaning that the caches for these disks are also by default encrypted at rest with platform-managed keys. You can specify your own managed keys following the encryption steps below. The cache for these disks will then also be encrypted using the key that you specify in this step.
+For more information about using Azure Key Vault to create and maintain keys, see [Server-side encryption of Azure Disk Storage](../key-vault/general/basic-concepts.md) in the Microsoft Azure documentation.
-> [!IMPORTANT]
-> ARO preview features are available on a self-service, opt-in basis. Preview features are provided "as is" and "as available," and they are excluded from the service-level agreements and limited warranty. Preview features are partially covered by customer support on a best-effort basis. As such, these features are not meant for production use.
+With host-based encryption, the data stored on the VM host of your Azure Red Hat OpenShift agent nodes' VMs is encrypted at rest and flows encrypted to the Storage service. Host-based encryption means the temp disks are encrypted at rest with platform-managed keys.
-## Limitation
-It is the responsibility of the customers to maintain the Key Vault and Disk Encryption Set in Azure. Failure to maintain the keys will result in broken ARO clusters. The VMs stop working and therefore the entire ARO cluster stops functioning. The Azure Red Hat OpenShift Engineering team cannot access the keys; therefore, they cannot back up, replicate, or retrieve the keys. For details about using Disk Encryption Sets to manage your encryption keys, see [Server-side encryption of Azure Disk Storage](../virtual-machines/disk-encryption.md) in the Microsoft Azure documentation.
+The cache of OS and data disks is encrypted at rest with either platform-managed keys or customer-managed keys, depending on the encryption type set on those disks. By default, when using Azure Red Hat OpenShift, OS and data disks are encrypted at rest with platform-managed keys, meaning that the caches for these disks are also by default encrypted at rest with platform-managed keys.
-## Prerequisites
-* [Verify your permissions](tutorial-create-cluster.md#verify-your-permissions). You must have either Contributor and User Access Administrator permissions, or Owner permissions.
-* Register the resource providers if you have multiple Azure subscriptions. For registration details, see [Register the resource providers](tutorial-create-cluster.md#register-the-resource-providers).
+You can specify your own managed keys by following the encryption steps below. The cache for these disks will also be encrypted using the key that you specify in this step.
-## Install the preview Azure CLI extension
-Install and use the Azure CLI to create a Key Vault. The Azure CLI allows the execution of commands through a terminal using interactive command-line prompts or a script.
+## Limitation
+It's the responsibility of customers to maintain the Key Vault and Disk Encryption Set in Azure. Failure to maintain the keys will result in broken Azure Red Hat OpenShift clusters. The VMs will stop working and, as a result, the entire Azure Red Hat OpenShift cluster will stop functioning.
-> [!NOTE]
-> The CLI extension is required for the preview feature only.
+The Azure Red Hat OpenShift Engineering team can't access the keys. Therefore, they can't back up, replicate, or retrieve the keys.
-1. Click the following URL to download both the Python wheel and the CLI extension:
- [https://aka.ms/az-aroext-latest.whl](https://aka.ms/az-aroext-latest.whl)
-1. Run the following command:
- ```azurecli-interactive
- az extension add --upgrade -s <path to downloaded .whl file>
- ```
-1. Verify that the CLI extension is being used:
- ```azurecli-interactive
- az extension list
- [
- {
- "experimental": false,
- "extensionType": "whl",
- "name": "aro",
- "path": "<path may differ depending on system>",
- "preview": true,
- "version": "1.0.1"
- }
- ]
- ```
+For details about using Disk Encryption Sets to manage your encryption keys, see [Server-side encryption of Azure Disk Storage](../virtual-machines/disk-encryption.md) in the Microsoft Azure documentation.
+
+## Prerequisites
+* [Verify your permissions](tutorial-create-cluster.md#verify-your-permissions). You must have either Contributor and User Access Administrator permissions or Owner permissions.
+* If you have multiple Azure subscriptions, register the resource providers. For registration details, see [Register the resource providers](tutorial-create-cluster.md#register-the-resource-providers).
## Create a virtual network containing two empty subnets Create a virtual network containing two empty subnets. If you have an existing virtual network that meets your needs, you can skip this step. To review the procedure of creating a virtual network, see [Create a virtual network containing two empty subnets](tutorial-create-cluster.md#create-a-virtual-network-containing-two-empty-subnets). ## Create an Azure Key Vault instance You must use an Azure Key Vault instance to store your keys. Create a new Key Vault with purge protection enabled. Then, create a new key within the Key Vault to store your own custom key.
-1. Set additional environment permissions:
+
+1. Set more environment permissions:
``` export KEYVAULT_NAME=$USER-enckv export KEYVAULT_KEY_NAME=$USER-key
You must use an Azure Key Vault instance to store your keys. Create a new Key Va
``` ## Create an Azure Disk Encryption Set
-The Azure Disk Encryption Set is used as the reference point for disks in ARO clusters. It is connected to the Azure Key Vault that you created in the previous step, and pulls the customer-managed keys from that location.
+The Azure Disk Encryption Set is used as the reference point for disks in Azure Red Hat OpenShift clusters. It's connected to the Azure Key Vault that you created in the previous step, and pulls the customer-managed keys from that location.
```azurecli-interactive az disk-encryption-set create -n $DISK_ENCRYPTION_SET_NAME \ -l $LOCATION \
az keyvault set-policy -n $KEYVAULT_NAME \
--key-permissions wrapkey unwrapkey get ```
-## Create an ARO cluster
-Create an ARO cluster to use the customer-managed keys.
+## Create an Azure Red Hat OpenShift cluster
+Create an Azure Red Hat OpenShift cluster to use the customer-managed keys.
```azurecli-interactive az aro create --resource-group $RESOURCEGROUP \ --name $CLUSTER \
az aro create --resource-group $RESOURCEGROUP \
--worker-subnet worker-subnet \ --disk-encryption-set $DES_ID ```
-After creating the ARO cluster, all VMs are encrypted with the customer-managed encryption keys.
+After you create the Azure Red Hat OpenShift cluster, all VMs are encrypted with the customer-managed encryption keys.
To verify that you configured the keys correctly, run the following commands: 1. Get the name of the cluster Resource Group where the cluster VMs, disks, and so on are located:
To verify that you configured the keys correctly, run the following commands:
```azurecli-interactive az disk list -g $CLUSTERRESOURCEGROUP --query '[].encryption' ```
- The field `diskEncryptionSetId` in the output must point to the Disk Encryption Set that you specified while creating the ARO cluster.
+ The field `diskEncryptionSetId` in the output must point to the Disk Encryption Set that you specified while creating the Azure Red Hat OpenShift cluster.
openshift Howto Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-upgrade.md
description: Learn how to upgrade an Azure Red Hat OpenShift cluster running Ope
Last updated 1/10/2021--
-keywords: aro, openshift, az aro, red hat, cli
++
+keywords: aro, openshift, az aro, red hat, cli, azure, MUO, managed, upgrade, operator
+#Customer intent: I need to understand how to upgrade my Azure Red Hat OpenShift cluster running OpenShift 4.
-# Upgrade an Azure Red Hat OpenShift (ARO) cluster
+# Upgrade an Azure Red Hat OpenShift cluster
-Part of the ARO cluster lifecycle involves performing periodic upgrades to the latest OpenShift version. It is important you apply the latest security releases, or upgrade to get the latest features. This article shows you how to upgrade all components in an OpenShift cluster using the OpenShift Web Console.
+As part of the Azure Red Hat OpenShift cluster lifecycle, you need to perform periodic upgrades to the latest version of the OpenShift platform. Upgrading your Azure Red Hat OpenShift clusters enables you to upgrade to the latest features and functionalities and apply the latest security releases.
+
+This article shows you how to upgrade all components in an OpenShift cluster using the OpenShift web console or the managed-upgrade-operator (MUO).
## Before you begin
-This article requires that you're running the Azure CLI version 2.0.65 of later. Run `az --version` to find your current version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli)
+* This article requires that you're running the Azure CLI version 2.6.0 or later. Run `az --version` to find your current version. If you need to install or upgrade the Azure CLI, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+* This article assumes you have access to an existing Azure Red Hat OpenShift cluster as a user with `admin` privileges.
+
+* This article assumes you've updated your Azure Red Hat OpenShift pull secret for an existing Azure Red Hat OpenShift 4.x cluster. Including the **cloud.openshift.com** entry from your pull secret enables your cluster to start sending telemetry data to Red Hat.
+
+ For more information, see [Add or update your Red Hat pull secret on an Azure Red Hat OpenShift 4 cluster](howto-add-update-pull-secret.md).
+
+## Check for Azure Red Hat OpenShift cluster upgrades
+
+1. From the top-left of the OpenShift web console, which is the default view when you sign in as kubeadmin, select the **Administration** tab.
+
+2. Select **Cluster Settings** and open the **Details** tab. You'll see the version, update status, and channel. The channel isn't configured by default.
+
+3. Select the **Channel** link, and at the prompt enter the desired update channel, for example **stable-4.10**. Once the desired channel is chosen, a graph showing available releases and channels is displayed. If the **Update Status** for your cluster shows **Updates Available**, you can update your cluster.
+
+## Upgrade your Azure Red Hat OpenShift cluster with the OpenShift web console
+
+From the OpenShift web console in the previous step, set the **Channel** to the correct channel for the version that you want to update to, such as `stable-4.10`.
+
+Select a version to update to, and then select **Update**. You'll see the update status change to: `Update to <product-version> in progress`. You can review the progress of the cluster update by watching the progress bars for the operators and nodes.
+
+## Scheduling individual upgrades using the managed-upgrade-operator
+
+Use the managed-upgrade-operator (MUO) to upgrade your Azure Red Hat OpenShift cluster.
-This article assumes you have access to an existing Azure Red Hat OpenShift cluster as a user with `admin` privileges.
+The managed-upgrade-operator manages automated cluster upgrades. It starts the cluster upgrade, but doesn't perform any activities of the upgrade process itself; OpenShift Container Platform (OCP) is responsible for upgrading the clusters. The goal of the managed-upgrade-operator is to satisfy the operating conditions that a managed cluster must hold, both before and after starting the cluster upgrade.
-## Check for available ARO cluster upgrades
+1. Prepare the configuration file, as shown in the following example for upgrading to OpenShift 4.10.
-From the OpenShift web console, select **Administration** > **Cluster Settings** and open the **Details** tab.
+```
+apiVersion: upgrade.managed.openshift.io/v1alpha1
+kind: UpgradeConfig
+metadata:
+ name: managed-upgrade-config
+ namespace: openshift-managed-upgrade-operator
+spec:
+ type: "ARO"
+ upgradeAt: "2022-02-08T03:20:00Z"
+ PDBForceDrainTimeout: 60
+ desired:
+ channel: "stable-4.10"
+ version: "4.10.10"
+```
-If the **Update Status** for your cluster reflects **Updates Available**, you can update your cluster.
+where:
-## Upgrade your ARO cluster
+* `channel` is the channel the configuration file will pull from, according to the lifecycle policy. The channel used should be `stable-4.10`.
+* `version` is the version that you wish to upgrade to, such as `4.10.10`.
+* `upgradeAt` is the time when the upgrade will take place.
-From the web console in the previous step, set the **Channel** to the correct channel for the version that you want to update to, such as `stable-4.5`.
+2. Apply the configuration file:
-Selection a version to update to, and select **Update**. You'll see the update status change to: `Update to <product-version> in progress`. You can review the progress of the cluster update by watching the progress bars for the Operators and nodes.
+```bash
+$ oc create -f <file_name>.yaml
+```
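+
+To confirm that the `UpgradeConfig` was created and to watch the operator's progress, a check along these lines should work:
+
+```bash
+# List UpgradeConfig objects in the operator's namespace
+oc get upgradeconfig -n openshift-managed-upgrade-operator
+
+# Inspect the status and conditions reported by the managed-upgrade-operator
+oc describe upgradeconfig managed-upgrade-config -n openshift-managed-upgrade-operator
+```
+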
## Next steps
-- [Learn to upgrade an ARO cluster using the OC CLI](https://docs.openshift.com/container-platform/4.5/updating/updating-cluster-between-minor.html)
-- You can find information about available OpenShift Container Platform advisories and updates in the [errata section](https://access.redhat.com/downloads/content/290/ver=4.6/rhel8/4.6.0/x86_64/product-errata) of the Customer Portal.
+- [Learn to upgrade an Azure Red Hat OpenShift cluster using the OC CLI](https://docs.openshift.com/container-platform/4.10/updating/index.html).
+- You can find information about available OpenShift Container Platform advisories and updates in the [errata section](https://access.redhat.com/downloads/content/290/ver=4.10/rhel8/4.10.13/x86_64/product-software) of the Customer Portal.
postgresql Application Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/application-best-practices.md
You can use [Data-in Replication](./concepts-read-replicas.md) for failover scen
## Database deployment
### Configure CI/CD deployment pipeline
-Occasionally, you need to deploy changes to your database. In such cases, you can use continuous integration (CI) through [GitHub actions](https://github.com/Azure/postgresql/blob/master/README.md) for your PostgreSQL server to update the database by running a custom script against it.
+Occasionally, you need to deploy changes to your database. In such cases, you can use continuous integration (CI) through [GitHub Actions](https://github.com/Azure/postgresql/blob/master/README.md) for your PostgreSQL server to update the database by running a custom script against it.
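+
+The custom script that such a workflow runs is typically an idempotent SQL file applied with `psql`. A minimal sketch, with hypothetical server, database, login, and file names (single server expects the `user@servername` login format):
+
+```bash
+# Apply a migration script to an Azure Database for PostgreSQL single server
+# (all names below are hypothetical placeholders)
+psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=mypgsqldb user=mylogin@mydemoserver sslmode=require" \
+  -f ./migrations/001_add_orders_table.sql
+```
+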
### Define manual database deployment process
During manual database deployment, follow these steps to minimize downtime or reduce the risk of failed deployment:
postgresql How To Deploy Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-deploy-github-action.md
You will use the connection string as a GitHub secret.
1. Open the first result to see detailed logs of your workflow's run.
- :::image type="content" source="media/how-to-deploy-github-action/gitbub-action-postgres-success.png" alt-text="Log of GitHub actions run":::
+ :::image type="content" source="media/how-to-deploy-github-action/gitbub-action-postgres-success.png" alt-text="Log of GitHub Actions run":::
## Clean up resources
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
Previously updated : 03/23/2022 Last updated : 05/31/2022
For Azure services, use the recommended zone names as described in the following
| Azure Database for MariaDB (Microsoft.DBforMariaDB/servers) / mariadbServer | privatelink.mariadb.database.azure.com | mariadb.database.azure.com |
| Azure Key Vault (Microsoft.KeyVault/vaults) / vault | privatelink.vaultcore.azure.net | vault.azure.net <br> vaultcore.azure.net |
| Azure Key Vault (Microsoft.KeyVault/managedHSMs) / Managed HSMs | privatelink.managedhsm.azure.net | managedhsm.azure.net |
-| Azure Kubernetes Service - Kubernetes API (Microsoft.ContainerService/managedClusters) / management | privatelink.{region}.azmk8s.io | {region}.azmk8s.io |
+| Azure Kubernetes Service - Kubernetes API (Microsoft.ContainerService/managedClusters) / management | privatelink.{region}.azmk8s.io </br> {subzone}.privatelink.{region}.azmk8s.io | {region}.azmk8s.io |
| Azure Search (Microsoft.Search/searchServices) / searchService | privatelink.search.windows.net | search.windows.net |
| Azure Container Registry (Microsoft.ContainerRegistry/registries) / registry | privatelink.azurecr.io | azurecr.io |
| Azure App Configuration (Microsoft.AppConfiguration/configurationStores) / configurationStores | privatelink.azconfig.io | azconfig.io |
For Azure services, use the recommended zone names as described in the following
| Microsoft Purview (Microsoft.Purview) / portal | privatelink.purviewstudio.azure.com | purview.azure.com |
| Azure Digital Twins (Microsoft.DigitalTwins) / digitalTwinsInstances | privatelink.digitaltwins.azure.net | digitaltwins.azure.net |
| Azure HDInsight (Microsoft.HDInsight) | privatelink.azurehdinsight.net | azurehdinsight.net |
-| Azure Arc (Microsoft.HybridCompute) / hybridcompute | privatelink.his.arc.azure.com<br />privatelink.guestconfiguration.azure.com | his.arc.azure.com<br />guestconfiguration.azure.com |
+| Azure Arc (Microsoft.HybridCompute) / hybridcompute | privatelink.his.arc.azure.com<br />privatelink.guestconfiguration.azure.com | his.arc.azure.com<br/>guestconfiguration.azure.com |
| Azure Media Services (Microsoft.Media) / keydelivery, liveevent, streamingendpoint | privatelink.media.azure.net | media.azure.net |
| Azure Data Explorer (Microsoft.Kusto) | privatelink.{region}.kusto.windows.net | {region}.kusto.windows.net |
+| Azure Static Web Apps (Microsoft.Web/staticSites) / staticSites | privatelink.azurestaticapps.net </br> privatelink.{partitionId}.azurestaticapps.net | azurestaticapps.net </br> {partitionId}.azurestaticapps.net |
<sup>1</sup>To use with IoT Hub's built-in Event Hub compatible endpoint. To learn more, see [private link support for IoT Hub's built-in endpoint](../iot-hub/virtual-network-support.md#built-in-event-hub-compatible-endpoint)
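+
+For example, creating and linking one of the recommended zones from the table with the Azure CLI (resource names here are hypothetical; substitute the zone name for your service):
+
+```azurecli-interactive
+# Create the recommended private DNS zone for Azure Key Vault
+az network private-dns zone create \
+  --resource-group myResourceGroup \
+  --name "privatelink.vaultcore.azure.net"
+
+# Link the zone to the virtual network that hosts the private endpoint
+az network private-dns link vnet create \
+  --resource-group myResourceGroup \
+  --zone-name "privatelink.vaultcore.azure.net" \
+  --name myDnsLink \
+  --virtual-network myVNet \
+  --registration-enabled false
+```
+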
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
# Customer intent: As someone who has a basic network background but is new to Azure, I want to understand the capabilities of private endpoints so that I can securely connect to my Azure PaaS services within the virtual network. Previously updated : 02/17/2022 Last updated : 05/31/2022 # What is a private endpoint?
A private-link resource is the destination target of a specified private endpoin
| Azure App Service | Microsoft.Web/hostingEnvironments | hosting environment |
| Azure App Service | Microsoft.Web/sites | sites |
| Azure Static Web Apps | Microsoft.Web/staticSites | staticSites |
+| Azure Media Services | Microsoft.Media/mediaservices | keydelivery, liveevent, streamingendpoint |
> [!NOTE]
> You can create private endpoints only on a General Purpose v2 (GPv2) storage account.
public-multi-access-edge-compute-mec Tutorial Create Vm Using Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/tutorial-create-vm-using-python-sdk.md
In this tutorial, you use Python SDK to deploy resources in Azure public multi-access edge compute (MEC) Preview. The tutorial provides Python code to deploy a virtual machine (VM) and its dependencies in Azure public MEC.
-For information about Python SDKs, see [Azure libraries for Python usage patterns](/azure/developer/python/azure-sdk-library-usage-patterns?tabs=pip).
+For information about Python SDKs, see [Azure libraries for Python usage patterns](/azure/developer/python/sdk/azure-sdk-library-usage-patterns).
In this tutorial, you learn how to:
In this tutorial, you learn how to:
To use SSH to connect to the VM in the Azure public MEC, the best method is to deploy a jump box in the Azure region where you deployed your resource group in the previous section.
-1. Follow the steps in [Use the Azure libraries to provision a virtual machine](/azure/developer/python/azure-sdk-example-virtual-machines?tabs=cmd).
+1. Follow the steps in [Use the Azure libraries to provision a virtual machine](/azure/developer/python/sdk/examples/azure-sdk-example-virtual-machines).
1. Note your own publicIpAddress in the output from the python-example-ip field of the jump server VM. Use this address to access the VM in the next section.
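+
+With the jump box in place, you can reach the VM in the Azure public MEC through it; a sketch using SSH's ProxyJump option (both addresses are hypothetical placeholders):
+
+```bash
+# Hop through the jump box (its public IP from the previous step)
+# to the VM deployed in the Azure public MEC
+ssh -J azureuser@<jumpbox-public-ip> azureuser@<mec-vm-private-ip>
+```
+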
resource-mover Tutorial Move Region Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/tutorial-move-region-sql.md
In this tutorial, learn how to move Azure SQL databases and elastic pools to a different Azure region, using [Azure Resource Mover](overview.md).
> [!NOTE]
-> Azure Resource Mover is currently in preview.
+> Azure Resource Mover is now generally available (GA).
In this tutorial, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.m
## Check SQL requirements
1. [Check](support-matrix-move-region-sql.md) which database/elastic pool features are supported for moving to another region.
-2. In the target region, create a target server for each source server. [Learn more](/azure/azure-sql/database/active-geo-replication-security-configure#how-to-configure-logins-and-users).
+2. In the target region, create a target server for each source server and ensure proper user access (see the sketch after this list). [Learn more](/azure/azure-sql/database/active-geo-replication-security-configure#how-to-configure-logins-and-users).
4. If databases are encrypted with transparent data encryption (TDE) and you use your own encryption key in Azure Key Vault, [learn how to](../key-vault/general/move-region.md) move key vaults to another region.
5. If SQL data sync is enabled, moving member databases is supported. After the move, you need to set up SQL data sync to the new target database.
6. Remove advanced data security settings before the move. After the move, [configure the settings](/azure/azure-sql/database/azure-defender-for-sql) at the SQL Server level in the target region.
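+
+As a sketch of step 2, creating a target logical server with the Azure CLI (names, region, and credentials below are hypothetical):
+
+```azurecli-interactive
+# Create a target SQL logical server in the target region
+az sql server create \
+  --name mytargetserver \
+  --resource-group myTargetRG \
+  --location westus2 \
+  --admin-user targetadmin \
+  --admin-password "<strong-password>"
+```
+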
Select resources you want to move.
## Move the SQL Server
-Assign a target SQL Server in the target region, and commit the move.
+Azure Resource Mover doesn't currently move SQL Server instances across regions. Instead, assign a target SQL Server in the target region, and then commit the move.
### Assign a target SQL Server
In this tutorial, you:
Now, try moving Azure VMs to another region.
> [!div class="nextstepaction"]
-> [Move Azure VMs](./tutorial-move-region-virtual-machines.md)
+> [Move Azure VMs](./tutorial-move-region-virtual-machines.md)
search Cognitive Search Quickstart Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-quickstart-blob.md
Previously updated : 10/07/2021 Last updated : 05/31/2022 # Quickstart: Translate text and recognize entities using the Import data wizard
Before you begin, have the following prerequisites in place:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
-+ Azure Cognitive Search service. [Create a service](search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) under your current subscription. You can use a free service for this quickstart.
++ Azure Cognitive Search. [Create a service](search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices). You can use a free service for this quickstart.
+ Azure Storage account with Blob Storage. [Create a storage account](../storage/common/storage-account-create.md?tabs=azure-portal) or [find an existing account](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2FstorageAccounts/).
Before you begin, have the following prerequisites in place:
+ Choose the StorageV2 (general purpose V2).
> [!NOTE]
-> This quickstart also uses [Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) for the AI. Because the workload is so small, Cognitive Services is tapped behind the scenes for free processing for up to 20 transactions. This means that you can complete this exercise without having to create an additional Cognitive Services resource.
+> This quickstart uses [Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) for the AI. Because the workload is so small, Cognitive Services is tapped behind the scenes for free processing for up to 20 transactions. You can complete this exercise without having to create a Cognitive Services resource.
## Set up your data
You are now ready to move on to the Import data wizard.
Next, configure AI enrichment to invoke language detection, text translation, and entity recognition.
-1. For this quickstart, we are using the **Free** Cognitive Services resource. The sample data consists of 10 files, so the daily, per-indexer allotment of 20 free transactions on Cognitive Services is sufficient for this quickstart.
+1. For this quickstart, you can use the **Free** Cognitive Services resource. The sample data consists of 10 files, so the daily, per-indexer allotment of 20 free transactions on Cognitive Services is sufficient for this quickstart.
:::image type="content" source="media/cognitive-search-quickstart-blob/free-enrichments.png" alt-text="Attach free Cognitive Services processing" border="true":::
For this quickstart, the wizard does a good job setting reasonable defaults:
:::image type="content" source="media/cognitive-search-quickstart-blob/index-fields-lang-entities.png" alt-text="Index fields" border="true":::
-Marking a field as **Retrievable** does not mean that the field *must* be present in the search results. You can precisely control search results composition by using the **$select** query parameter to specify which fields to include. For text-heavy fields like `content`, the **$select** parameter is your solution for shaping manageable search results to the human users of your application, while ensuring client code has access to all the information it needs via the **Retrievable** attribute.
+Marking a field as **Retrievable** doesn't mean that the field *must* be present in the search results. You can precisely control search results composition by using the **$select** query parameter to specify which fields to include. For text-heavy fields like `content`, the **$select** parameter is your solution for shaping manageable search results to the human users of your application, while ensuring client code has access to all the information it needs via the **Retrievable** attribute.
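+
+For example, a query that trims results to a couple of fields might look like the following (service, index, key, and field names are hypothetical; `2020-06-30` is a generally available REST API version):
+
+```bash
+# Return only selected fields from search results; single quotes keep
+# the shell from expanding the $ in $select
+curl 'https://<service-name>.search.windows.net/indexes/<index-name>/docs?api-version=2020-06-30&search=*&$select=metadata_storage_name,language' \
+  -H 'api-key: <query-key>'
+```
+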
### Step 4 - Configure the indexer
When you're working in your own subscription, it's a good idea at the end of a p
You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
-If you are using a free service, remember that you are limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
+If you're using a free service, remember that you're limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
## Next steps
search Cognitive Search Quickstart Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-quickstart-ocr.md
Previously updated : 10/07/2021 Last updated : 05/31/2022
Before you begin, have the following prerequisites in place:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
-+ Azure Cognitive Search service. [Create a service](search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) under your current subscription. You can use a free service for this quickstart.
++ Azure Cognitive Search. [Create a service](search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices). You can use a free service for this quickstart.
+ Azure Storage account with Blob Storage. [Create a storage account](../storage/common/storage-account-create.md?tabs=azure-portal) or [find an existing account](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2FstorageAccounts/).
Before you begin, have the following prerequisites in place:
+ Choose the StorageV2 (general purpose V2).
> [!NOTE]
-> This quickstart also uses [Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) for the AI. Because the workload is so small, Cognitive Services is tapped behind the scenes for free processing for up to 20 transactions. This means that you can complete this exercise without having to create an additional Cognitive Services resource.
+> This quickstart uses [Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) for the AI. Because the workload is so small, Cognitive Services is tapped behind the scenes for free processing for up to 20 transactions. You can complete this exercise without having to create a Cognitive Services resource.
## Set up your data
In the following steps, set up a blob container in Azure Storage to store hetero
You should have 10 files containing photographs of signs.
-There is a second subfolder that includes landmark buildings. If you want to [attach a Cognitive Services key](cognitive-search-attach-cognitive-services.md), you can include these files as well to see how image analysis works over image files that do not include embedded text. The key is necessary for jobs that exceed the free allotment.
+There is a second subfolder that includes landmark buildings. If you want to [attach a Cognitive Services key](cognitive-search-attach-cognitive-services.md), you can include these files as well to see how image analysis works over image files that don't include embedded text. The key is necessary for jobs that exceed the free allotment.
You are now ready to move on to the Import data wizard.
You are now ready to move on to the Import data wizard.
Next, configure AI enrichment to invoke OCR and image analysis.
-1. For this quickstart, we are using the **Free** Cognitive Services resource. The sample data consists of 19 files, so the daily, per-indexer allotment of 20 free transactions on Cognitive Services is sufficient for this quickstart.
+1. For this quickstart, you can use the **Free** Cognitive Services resource. The sample data consists of 19 files, so the daily, per-indexer allotment of 20 free transactions on Cognitive Services is sufficient for this quickstart.
:::image type="content" source="media/cognitive-search-quickstart-blob/free-enrichments.png" alt-text="Attach free Cognitive Services processing" border="true":::
For this quickstart, the wizard does a good job setting reasonable defaults:
:::image type="content" source="media/cognitive-search-quickstart-blob/index-fields-ocr-images.png" alt-text="Index fields" border="true":::
-Marking a field as **Retrievable** does not mean that the field *must* be present in the search results. You can precisely control search results composition by using the **$select** query parameter to specify which fields to include. For text-heavy fields like `content`, the **$select** parameter is your solution for shaping manageable search results to the human users of your application, while ensuring client code has access to all the information it needs via the **Retrievable** attribute.
+Marking a field as **Retrievable** doesn't mean that the field *must* be present in the search results. You can precisely control search results composition by using the **$select** query parameter to specify which fields to include. For text-heavy fields like `content`, the **$select** parameter is your solution for shaping manageable search results to the human users of your application, while ensuring client code has access to all the information it needs via the **Retrievable** attribute.
### Step 4 - Configure the indexer
When you're working in your own subscription, it's a good idea at the end of a p
You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
-If you are using a free service, remember that you are limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
+If you're using a free service, remember that you're limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
## Next steps
search Search Dotnet Sdk Migration Version 11 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-dotnet-sdk-migration-version-11.md
ms.devlang: csharp Previously updated : 04/25/2022 Last updated : 05/31/2022
The benefits of upgrading are summarized as follows:
+ Consistency with other Azure client libraries. **Azure.Search.Documents** takes a dependency on [Azure.Core](/dotnet/api/azure.core) and [System.Text.Json](/dotnet/api/system.text.json), and follows conventional approaches for common tasks such as client connections and authorization.
+**Microsoft.Azure.Search** is officially retired. If you're using an old version, we recommend upgrading to the next higher version, repeating the process in succession until you reach version 11 and **Azure.Search.Documents**. An incremental upgrade strategy makes it easier to find and fix blocking issues. See [Previous version docs](/previous-versions/azure/search/) for guidance.
+
## Package comparison
Version 11 consolidates and simplifies package management so that there are fewer to manage.
Where applicable, the following table maps the client libraries between the two
## Naming and other API differences
-Besides the client differences (noted previously and thus omitted here), multiple other APIs have been renamed and in some cases redesigned. Class name differences are summarized below. This list is not exhaustive but it does group API changes by task, which can be helpful for revisions on specific code blocks. For an itemized list of API updates, see the [change log](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/search/Azure.Search.Documents/CHANGELOG.md) for `Azure.Search.Documents` on GitHub.
+Besides the client differences (noted previously and thus omitted here), multiple other APIs have been renamed and in some cases redesigned. Class name differences are summarized below. This list isn't exhaustive but it does group API changes by task, which can be helpful for revisions on specific code blocks. For an itemized list of API updates, see the [change log](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/search/Azure.Search.Documents/CHANGELOG.md) for `Azure.Search.Documents` on GitHub.
### Authentication and encryption
SearchClient client = new SearchClient(endpoint, "mountains", credential, client
Response<SearchResults<Mountain>> results = client.Search<Mountain>("Rainier");
```
-If you are using Newtonsoft.Json for JSON serialization, you can pass in global naming policies using similar attributes, or by using properties on [JsonSerializerSettings](https://www.newtonsoft.com/json/help/html/T_Newtonsoft_Json_JsonSerializerSettings.htm). For an example equivalent to the one above, see the [Deserializing documents example](https://github.com/Azure/azure-sdk-for-net/blob/259df3985d9710507e2454e1591811f8b3a7ad5d/sdk/core/Microsoft.Azure.Core.Spatial.NewtonsoftJson/README.md) in the Newtonsoft.Json readme.
+If you're using Newtonsoft.Json for JSON serialization, you can pass in global naming policies using similar attributes, or by using properties on [JsonSerializerSettings](https://www.newtonsoft.com/json/help/html/T_Newtonsoft_Json_JsonSerializerSettings.htm). For an example equivalent to the one above, see the [Deserializing documents example](https://github.com/Azure/azure-sdk-for-net/blob/259df3985d9710507e2454e1591811f8b3a7ad5d/sdk/core/Microsoft.Azure.Core.Spatial.NewtonsoftJson/README.md) in the Newtonsoft.Json readme.
<a name="WhatsNew"></a>
The following steps get you started on a code migration by walking through the f
SearchIndexClient indexClient = new SearchIndexClient(endpoint, credential);
```
-1. Add new client references for indexer-related objects. If you are using indexers, datasources, or skillsets, change the client references to [SearchIndexerClient](/dotnet/api/azure.search.documents.indexes.searchindexerclient). This client is new in version 11 and has no antecedent.
+1. Add new client references for indexer-related objects. If you're using indexers, datasources, or skillsets, change the client references to [SearchIndexerClient](/dotnet/api/azure.search.documents.indexes.searchindexerclient). This client is new in version 11 and has no antecedent.
1. Revise collections and lists. In the new SDK, all lists are read-only to avoid downstream issues if the list happens to contain null values. The code change is to add items to a list. For example, instead of assigning strings to a Select property, you would add them as follows:
The following steps get you started on a code migration by walking through the f
Given the sweeping changes to libraries and APIs, an upgrade to version 11 is non-trivial and constitutes a breaking change in the sense that your code will no longer be backward compatible with version 10 and earlier. For a thorough review of the differences, see the [change log](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/search/Azure.Search.Documents/CHANGELOG.md) for `Azure.Search.Documents`.
-In terms of service version updates, where code changes in version 11 relate to existing functionality (and not just a refactoring of the APIs), you will find the following behavior changes:
+In terms of service version updates, where code changes in version 11 relate to existing functionality (and not just a refactoring of the APIs), you'll find the following behavior changes:
+ [BM25 ranking algorithm](index-ranking-similarity.md) replaces the previous ranking algorithm with newer technology. New services will use this algorithm automatically. For existing services, you must set parameters to use the new algorithm.
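+
+As a sketch, opting an existing index into BM25 over the REST API looks roughly like this (service, index, key, and the one-field schema are hypothetical; a real update must resend your full existing index definition):
+
+```bash
+# Update the index definition to use the BM25 similarity algorithm
+curl -X PUT 'https://<service-name>.search.windows.net/indexes/<index-name>?api-version=2020-06-30' \
+  -H 'api-key: <admin-key>' \
+  -H 'Content-Type: application/json' \
+  -d '{
+    "name": "<index-name>",
+    "fields": [ { "name": "id", "type": "Edm.String", "key": true } ],
+    "similarity": { "@odata.type": "#Microsoft.Azure.Search.BM25Similarity" }
+  }'
+```
+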
search Search What Is Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-azure-search.md
Previously updated : 01/03/2022 Last updated : 05/31/2022 # What is Azure Cognitive Search?
When you create a search service, you'll work with the following capabilities:
+ A search engine for full text search with storage for user-owned content in a search index
+ Rich indexing, with [text analysis](search-analyzers.md) and [optional AI enrichment](cognitive-search-concept-intro.md) for advanced content extraction and transformation
-+ Rich query capabilities, including simple syntax, full Lucene syntax, and typeahead search
++ Rich query syntax that supplements free text search with filters, autocomplete, regex, geo-search, and more
+ Programmability through REST APIs and client libraries in Azure SDKs for .NET, Python, Java, and JavaScript
+ Azure integration at the data layer, machine learning layer, and AI (Cognitive Services)
Across the Azure platform, Cognitive Search can integrate with other Azure servi
On the search service itself, the two primary workloads are *indexing* and *querying*.
-+ [Indexing](search-what-is-an-index.md) is an intake process that loads content into to your search service and makes it searchable. Internally, inbound text is processed into tokens and stored in inverted indexes for fast scans. You can upload any text that is in the form of JSON documents.
++ [**Indexing**](search-what-is-an-index.md) is an intake process that loads content into your search service and makes it searchable. Internally, inbound text is processed into tokens and stored in inverted indexes for fast scans. You can upload any text that is in the form of JSON documents. Additionally, if your content includes mixed files, you have the option of adding *AI enrichment* through [cognitive skills](cognitive-search-working-with-skillsets.md). AI enrichment can extract text embedded in application files, and also infer text and structure from non-text files by analyzing the content.
- The skills providing the analysis are predefined ones from Microsoft, or custom skills that you create. The subsequent analysis and transformations can result in new information and structures that did not previously exist, providing high utility for many search and knowledge mining scenarios.
+ The skills providing the analysis are predefined ones from Microsoft, or custom skills that you create. The subsequent analysis and transformations can result in new information and structures that didn't previously exist, providing high utility for many search and knowledge mining scenarios.
-+ [Querying](search-query-overview.md) can happen once an index is populated with searchable text, when your client app sends query requests to a search service and handles responses. All query execution is over a search index that you create, own, and store in your service. In your client app, the search experience is defined using APIs from Azure Cognitive Search, and can include relevance tuning, autocomplete, synonym matching, fuzzy matching, pattern matching, filter, and sort.
++ [**Querying**](search-query-overview.md) can happen once an index is populated with searchable text, when your client app sends query requests to a search service and handles responses. All query execution is over a search index that you create, own, and store in your service. In your client app, the search experience is defined using APIs from Azure Cognitive Search, and can include relevance tuning, autocomplete, synonym matching, fuzzy matching, pattern matching, filter, and sort.
-Functionality is exposed through a simple [REST API](/rest/api/searchservice/) or [.NET SDK](search-howto-dotnet-sdk.md) that masks the inherent complexity of information retrieval. You can also use the Azure portal for service administration and content management, with tools for prototyping and querying your indexes and skillsets. Because the service runs in the cloud, infrastructure and availability are managed by Microsoft.
+Functionality is exposed through a simple [REST API](/rest/api/searchservice/) or Azure SDKs like the [Azure SDK for .NET](search-howto-dotnet-sdk.md) that mask the inherent complexity of information retrieval. You can also use the Azure portal for service administration and content management, with tools for prototyping and querying your indexes and skillsets. Because the service runs in the cloud, infrastructure and availability are managed by Microsoft.
## Why use Cognitive Search?
For more information about specific functionality, see [Features of Azure Cognit
An end-to-end exploration of core search features can be accomplished in four steps:
-1. [**Decide on a tier**](search-sku-tier.md). One free search service is allowed per subscription. All quickstarts can be completed on the free tier. For more capacity and capabilities, you will need a [billable tier](https://azure.microsoft.com/pricing/details/search/).
+1. [**Decide on a tier**](search-sku-tier.md) and region. One free search service is allowed per subscription. All quickstarts can be completed on the free tier. For more capacity and capabilities, you'll need a [billable tier](https://azure.microsoft.com/pricing/details/search/).
1. [**Create a search service**](search-create-service-portal.md) in the Azure portal.
An end-to-end exploration of core search features can be accomplished in four st
1. [**Finish with Search Explorer**](search-explorer.md), using a portal client to query the search index you just created.
-Alternatively, you can create, load, and query a search index in atomically:
+Alternatively, you can create, load, and query a search index in atomic steps:
1. [**Create a search index**](search-what-is-an-index.md) using the portal, [REST API](/rest/api/searchservice/create-index), [.NET SDK](search-howto-dotnet-sdk.md), or another SDK. The index schema defines the structure of searchable content.
Customers often ask how Azure Cognitive Search compares with other search-relate
|-|--|
| Microsoft Search | [Microsoft Search](/microsoftsearch/overview-microsoft-search) is for Microsoft 365 authenticated users who need to query over content in SharePoint. It's offered as a ready-to-use search experience, enabled and configured by administrators, with the ability to accept external content through connectors from Microsoft and other sources. If this describes your scenario, then Microsoft Search with Microsoft 365 is an attractive option to explore.<br/><br/>In contrast, Azure Cognitive Search executes queries over an index that you define, populated with data and documents you own, often from diverse sources. Azure Cognitive Search has crawler capabilities for some Azure data sources through [indexers](search-indexer-overview.md), but you can push any JSON document that conforms to your index schema into a single, consolidated searchable resource. You can also customize the indexing pipeline to include machine learning and lexical analyzers. Because Cognitive Search is built to be a plug-in component in larger solutions, you can integrate search into almost any app, on any platform.|
|Bing | [Bing Web Search API](../cognitive-services/bing-web-search/index.yml) searches the indexes on Bing.com for matching terms you submit. Indexes are built from HTML, XML, and other web content on public sites. Built on the same foundation, [Bing Custom Search](/azure/cognitive-services/bing-custom-search/) offers the same crawler technology for web content types, scoped to individual web sites.<br/><br/>In Cognitive Search, you can define and populate the index. You can use [indexers](search-indexer-overview.md) to crawl data on Azure data sources, or push any index-conforming JSON document to your search service. |
-|Database search | Many database platforms include a built-in search experience. SQL Server has [full text search](/sql/relational-databases/search/full-text-search). Cosmos DB and similar technologies have queryable indexes. When evaluating products that combine search and storage, it can be challenging to determine which way to go. Many solutions use both: DBMS for storage, and Azure Cognitive Search for specialized search features.<br/><br/>Compared to DBMS search, Azure Cognitive Search stores content from heterogeneous sources and offers specialized text processing features such as linguistic-aware text processing (stemming, lemmatization, word forms) in [56 languages](/rest/api/searchservice/language-support). It also supports autocorrection of misspelled words, [synonyms](/rest/api/searchservice/synonym-map-operations), [suggestions](/rest/api/searchservice/suggestions), [scoring controls](/rest/api/searchservice/add-scoring-profiles-to-a-search-index), [facets](search-faceted-navigation.md), and [custom tokenization](/rest/api/searchservice/custom-analyzers-in-azure-search). The [full text search engine](search-lucene-query-architecture.md) in Azure Cognitive Search is built on Apache Lucene, an industry standard in information retrieval. However, while Azure Cognitive Search persists data in the form of an inverted index, it is not a replacement for true data storage and we don't recommend using it in that capacity. For more information, see this [forum post](https://stackoverflow.com/questions/40101159/can-azure-search-be-used-as-a-primary-database-for-some-data). <br/><br/>Resource utilization is another inflection point in this category. Indexing and some query operations are often computationally intensive. Offloading search from the DBMS to a dedicated solution in the cloud preserves system resources for transaction processing. Furthermore, by externalizing search, you can easily adjust scale to match query volume.|
-|Dedicated search solution | Assuming you have decided on dedicated search with full spectrum functionality, a final categorical comparison is between on premises solutions or a cloud service. Many search technologies offer controls over indexing and query pipelines, access to richer query and filtering syntax, control over rank and relevance, and features for self-directed and intelligent search. <br/><br/>A cloud service is the right choice if you want a turn-key solution with minimal overhead and maintenance, and adjustable scale. <br/><br/>Within the cloud paradigm, several providers offer comparable baseline features, with full-text search, geospatial search, and the ability to handle a certain level of ambiguity in search inputs. Typically, it's a [specialized feature](search-features-list.md), or the ease and overall simplicity of APIs, tools, and management that determines the best fit. |
+|Database search | Many database platforms include a built-in search experience. SQL Server has [full text search](/sql/relational-databases/search/full-text-search). Cosmos DB and similar technologies have queryable indexes. When evaluating products that combine search and storage, it can be challenging to determine which way to go. Many solutions use both: DBMS for storage, and Azure Cognitive Search for specialized search features.<br/><br/>Compared to DBMS search, Azure Cognitive Search stores content from heterogeneous sources and offers specialized text processing features such as linguistic-aware text processing (stemming, lemmatization, word forms) in [56 languages](/rest/api/searchservice/language-support). It also supports autocorrection of misspelled words, [synonyms](/rest/api/searchservice/synonym-map-operations), [suggestions](/rest/api/searchservice/suggestions), [scoring controls](/rest/api/searchservice/add-scoring-profiles-to-a-search-index), [facets](search-faceted-navigation.md), and [custom tokenization](/rest/api/searchservice/custom-analyzers-in-azure-search). The [full text search engine](search-lucene-query-architecture.md) in Azure Cognitive Search is built on Apache Lucene, an industry standard in information retrieval. However, while Azure Cognitive Search persists data in the form of an inverted index, it isn't a replacement for true data storage and we don't recommend using it in that capacity. For more information, see this [forum post](https://stackoverflow.com/questions/40101159/can-azure-search-be-used-as-a-primary-database-for-some-data). <br/><br/>Resource utilization is another inflection point in this category. Indexing and some query operations are often computationally intensive. Offloading search from the DBMS to a dedicated solution in the cloud preserves system resources for transaction processing. Furthermore, by externalizing search, you can easily adjust scale to match query volume.|
+|Dedicated search solution | Assuming you've decided on dedicated search with full spectrum functionality, a final categorical comparison is between on premises solutions or a cloud service. Many search technologies offer controls over indexing and query pipelines, access to richer query and filtering syntax, control over rank and relevance, and features for self-directed and intelligent search. <br/><br/>A cloud service is the right choice if you want a turn-key solution with minimal overhead and maintenance, and adjustable scale. <br/><br/>Within the cloud paradigm, several providers offer comparable baseline features, with full-text search, geospatial search, and the ability to handle a certain level of ambiguity in search inputs. Typically, it's a [specialized feature](search-features-list.md), or the ease and overall simplicity of APIs, tools, and management that determines the best fit. |
Among cloud providers, Azure Cognitive Search is strongest for full text search workloads over content stores and databases on Azure, for apps that rely primarily on search for both information retrieval and content navigation. Key strengths include:
+ Data integration (crawlers) at the indexing layer.
++ AI and machine learning integration with Azure Cognitive Services, useful if you need to make unsearchable content full text-searchable.
+ Security integration with Azure Active Directory for trusted connections, and with Azure Private Link integration to support private connections to a search index in no-internet scenarios.
-+ Machine learning and AI integration with Azure Cognitive Services, useful if you need to make unsearchable content types full text-searchable.
+ Linguistic and custom text analysis in 56 languages.
+ [Full search experience](search-features-list.md): rich query language, relevance tuning and semantic ranking, faceting, autocomplete queries and suggested results, and synonyms.
+ Azure scale, reliability, and world-class availability.
security Antimalware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/antimalware.md
The Microsoft Antimalware Client and Service is installed by default in a disabl
When using Azure App Service on Windows, the underlying service that hosts the web app has Microsoft Antimalware enabled on it. This is used to protect Azure App Service infrastructure and does not run on customer content.
> [!NOTE]
-> Microsoft Defender Antivirus is the built-in Antimalware enabled in Windows Server 2016. The Microsoft Defender Antivirus Interface is also enabled by default on some Windows Server 2016 SKU's [see here for more information](/windows/threat-protection/windows-defender-antivirus/windows-defender-antivirus-on-windows-server-2016).
+> Microsoft Defender Antivirus is the built-in Antimalware enabled in Windows Server 2016. The Microsoft Defender Antivirus Interface is also enabled by default on some Windows Server 2016 SKU's [see here for more information](/microsoft-365/security/defender-endpoint/microsoft-defender-antivirus-windows).
> The Azure VM Antimalware extension can still be added to a Windows Server 2016 Azure VM with Microsoft Defender Antivirus, but in this scenario the extension will apply any optional [configuration policies](https://gallery.technet.microsoft.com/Antimalware-For-Azure-5ce70efe) to be used by Microsoft Defender Antivirus; the extension will not deploy any additional antimalware services.
> You can read more about this update [here](/archive/blogs/azuresecurity/update-to-azure-antimalware-extension-for-cloud-services).
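+
+For reference, adding the extension to a VM with the Azure CLI looks roughly like this (resource names are hypothetical, and the settings JSON accepts many more options than shown):
+
+```azurecli-interactive
+# Add the Microsoft Antimalware extension to a Windows VM
+az vm extension set \
+  --resource-group myResourceGroup \
+  --vm-name myVM \
+  --name IaaSAntimalware \
+  --publisher Microsoft.Azure.Security \
+  --settings '{"AntimalwareEnabled": true}'
+```
+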
sentinel Automate Incident Handling With Automation Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-incident-handling-with-automation-rules.md
This article explains what Microsoft Sentinel automation rules are, and how to u
## What are automation rules?
-Automation rules are a way to centrally manage the automation of incident handling, allowing you to perform simple automation tasks without using playbooks. For example, automation rules allow you to automatically assign incidents to the proper personnel, tag incidents to classify them, and change the status of incidents and close them. Automation rules can also automate responses for multiple analytics rules at once, control the order of actions that are executed, and run playbooks for those cases where more complex automation tasks are necessary. In short, automation rules streamline the use of automation in Microsoft Sentinel, enabling you to simplify complex workflows for your incident orchestration processes.
+Automation rules are a way to centrally manage the automation of incident handling, allowing you to perform simple automation tasks without using playbooks.
+
+For example, automation rules allow you to automatically:
+- Suppress noisy incidents.
+- Triage new incidents by changing their status from New to Active and assigning an owner.
+- Tag incidents to classify them.
+- Escalate an incident by assigning a new owner.
+- Close resolved incidents, specifying a reason and adding comments.
+
+Automation rules can also:
+- Automate responses for multiple analytics rules at once.
+- Control the order of actions that are executed.
+- Inspect the incident's contents (alerts, entities, and other properties) and take further action by calling a playbook.
+
+In short, automation rules streamline the use of automation in Microsoft Sentinel, enabling you to simplify complex workflows for your incident orchestration processes.
## Components
Automation rules are made up of several components:
-### Trigger
+- **[Triggers](#triggers)** that define what kind of incident event will cause the rule to run, subject to...
+- **[Conditions](#conditions)** that will determine the exact circumstances under which the rule will run and perform...
+- **[Actions](#actions)** to change the incident in some way or call a [playbook](automate-responses-with-playbooks.md).
+
+### Triggers
-Automation rules are triggered by the creation of an incident.
+Automation rules are triggered **when an incident is created or updated** (the update trigger is now in **Preview**). Recall that incidents are created from alerts by analytics rules, of which there are several types, as explained in [Detect threats with built-in analytics rules in Microsoft Sentinel](detect-threats-built-in.md).
-To review ΓÇô incidents are created from alerts by analytics rules, of which there are several types, as explained in the tutorial [Detect threats with built-in analytics rules in Microsoft Sentinel](detect-threats-built-in.md).
+The following table shows the different possible ways that incidents can be created or updated that will cause an automation rule to run.
+
+| Trigger type | Events that cause the rule to run |
+| | |
+| **When incident is created** | - A new incident is created by an analytics rule.<br>- An incident is ingested from Microsoft 365 Defender.<br>- A new incident is created manually. |
+| **When incident is updated**<br>(Preview) | - An incident's status is changed (closed/reopened/triaged).<br>- An incident's owner is assigned or changed.<br>- An incident's severity is raised or lowered.<br>- Alerts are added to an incident.<br>- Comments, tags, or tactics are added to an incident. |
### Conditions
-Complex sets of conditions can be defined to govern when actions (see below) should run. These conditions are typically based on the states or values of attributes of incidents and their entities, and they can include `AND`/`OR`/`NOT`/`CONTAINS` operators.
+Complex sets of conditions can be defined to govern when actions (see below) should run. These conditions include the event that triggers the rule (incident created or updated), the states or values of the incident's properties and [entity properties](entities-reference.md), and also the analytics rule or rules that generated the incident.
+
+When an automation rule is triggered, it checks the triggering incident against the conditions defined in the rule. The property-based conditions are evaluated according to **the current state** of the property at the moment the evaluation occurs, or according to **changes in the state** of the property (see below for details). Since a single incident creation or update event could trigger several automation rules, the **order** in which they run (see below) makes a difference in determining the outcome of the conditions' evaluation. The **actions** defined in the rule will run only if all the conditions are satisfied.
+
+#### Incident create trigger
+
+For rules defined using the trigger **When an incident is created**, you can define conditions that check the **current state** of the values of a given list of incident properties, using one or more of the following operators:
+
+An incident property's value
+- **equals** or **does not equal** the value defined in the condition.
+- **contains** or **does not contain** the value defined in the condition.
+- **starts with** or **does not start with** the value defined in the condition.
+- **ends with** or **does not end with** the value defined in the condition.
+
+The **current state** in this context refers to the moment the condition is evaluated - that is, the moment the automation rule runs. If more than one automation rule is defined to run in response to the creation of this incident, then changes made to the incident by an earlier-run automation rule are considered the current state for later-run rules.
+
+#### Incident update trigger
+
+The conditions evaluated in rules defined using the trigger **When an incident is updated** include all of those listed for the incident creation trigger. But the update trigger includes more properties that can be evaluated.
+
+One of these properties is **Updated by**. This property lets you track the type of source that made the change in the incident. You can create a condition evaluating whether the incident was updated by one of the following:
+
+- an application
+- a user
+- an alert grouping (that added alerts to the incident)
+- a playbook
+- an automation rule
+- Microsoft 365 Defender
+
+Using this condition, for example, you can instruct this automation rule to run on any change made to an incident, except if it was made by another automation rule.
+
+In addition, the update trigger uses other operators that check **state changes** in the values of incident properties, not just their current state. A **state change** condition would be satisfied if:
+
+An incident property's value was
+- **changed** (regardless of the actual value before or after).
+- **changed from** the value defined in the condition.
+- **changed to** the value defined in the condition.
+- **added** to (this applies to properties with a list of values).
+
+> [!NOTE]
+> - An automation rule, based on the update trigger, can run on an incident that was updated by another automation rule, based on the incident creation trigger, that ran on the incident.
+>
+> - Also, if an incident is updated by an automation rule that ran on the incident's creation, the incident can be evaluated by *both* a subsequent *incident-creation* automation rule *and* an *incident-update* automation rule, both of which will run if the incident satisfies the rules' conditions.
+>
+> - If an incident triggers both create-trigger and update-trigger automation rules, the create-trigger rules will run first, according to their **[Order](#order)** numbers, and then the update-trigger rules will run, according to *their* **Order** numbers.
+
### Actions
For example, if "First Automation Rule" changed an incident's severity from Medi
### Incident-triggered automation
-Until now, only alerts could trigger an automated response, through the use of playbooks. With automation rules, incidents can now trigger automated response chains, which can include new incident-triggered playbooks ([special permissions are required](#permissions-for-automation-rules-to-run-playbooks)), when an incident is created.
+Before automation rules existed, only alerts could trigger an automated response, through the use of playbooks. With automation rules, incidents can now trigger automated response chains, which can include new incident-triggered playbooks ([special permissions are required](#permissions-for-automation-rules-to-run-playbooks)), when an incident is created.
### Trigger playbooks for Microsoft providers
Automation rules provide a way to automate the handling of Microsoft security al
Microsoft security alerts include the following:
-- Microsoft Defender for Cloud Apps
+- Microsoft Defender for Cloud Apps (formerly Microsoft Cloud App Security)
- Azure AD Identity Protection
-- Microsoft Defender for Cloud
+- Microsoft Defender for Cloud (formerly Azure Defender or Azure Security Center)
- Defender for IoT (formerly Azure Security Center for IoT)
-- Microsoft Defender for Office 365 (formerly Office 365 ATP)
-- Microsoft Defender for Endpoint (formerly MDATP)
-- Microsoft Defender for Identity (formerly Azure ATP)
+- Microsoft Defender for Office 365
+- Microsoft Defender for Endpoint
+- Microsoft Defender for Identity
### Multiple sequenced playbooks/actions in a single rule
You can add expiration dates for your automation rules. There may be cases other
You can automatically add free-text tags to incidents to group or classify them according to any criteria of your choosing.
+## Use cases added by update trigger
+
+Now that changes made to incidents can trigger automation rules, more scenarios are open to automation.
+
+### Extend automation when incident evolves
+
+You can use the update trigger to apply many of the above use cases to incidents as their investigation progresses and analysts add alerts, comments, and tags. You can also use it to control how alerts are grouped in incidents.
+
+### Update orchestration and notification
+
+Notify your various teams and other personnel when changes are made to incidents, so they won't miss any critical updates. Escalate incidents by assigning them to new owners and informing the new owners of their assignments. Control when and how incidents are reopened.
+
+### Maintain synchronization with external systems
+
+If you've used playbooks to create tickets in external systems when incidents are created, you can use an update-trigger automation rule to call a playbook that will update those tickets.
+
## Automation rules execution
Automation rules are run sequentially, according to the order you determine. Each automation rule is executed after the previous one has finished its run. Within an automation rule, all actions are run sequentially in the order in which they are defined.
Playbook actions within an automation rule may be treated differently under some
| Less than a second | Immediately after playbook is completed |
| Less than two minutes | Up to two minutes after playbook began running,<br>but no more than 10 seconds after the playbook is completed |
| More than two minutes | Two minutes after playbook began running,<br>regardless of whether or not it was completed |
-|
### Permissions for automation rules to run playbooks
In the specific case of a Managed Security Service Provider (MSSP), where a serv
## Creating and managing automation rules
-You can create and manage automation rules from different points in the Microsoft Sentinel experience, depending on your particular need and use case.
+You can [create and manage automation rules](create-manage-use-automation-rules.md) from different points in the Microsoft Sentinel experience, depending on your particular need and use case.
- **Automation blade**
- Automation rules can be centrally managed in the new **Automation** blade (which replaces the **Playbooks** blade), under the **Automation rules** tab. (You can also now manage playbooks in this blade, under the **Playbooks** tab.) From there, you can create new automation rules and edit the existing ones. You can also drag automation rules to change the order of execution, and enable or disable them.
+ Automation rules can be centrally managed in the **Automation** blade, under the **Automation rules** tab. From there, you can create new automation rules and edit the existing ones. You can also drag automation rules to change the order of execution, and enable or disable them.
In the **Automation** blade, you see all the rules that are defined on the workspace, along with their status (Enabled/Disabled) and which analytics rules they are applied to.
- When you need an automation rule that will apply to many analytics rules, create it directly in the **Automation** blade. From the top menu, click **Create** and **Add new rule**, which opens the **Create new automation rule** panel. From here you have complete flexibility in configuring the rule: you can apply it to any analytics rules (including future ones) and define the widest range of conditions and actions.
+ When you need an automation rule that will apply to many analytics rules, create it directly in the **Automation** blade.
- **Analytics rule wizard**
- In the **Automated response** tab of the analytics rule wizard, you can see, manage, and create automation rules that apply to the particular analytics rule being created or edited in the wizard.
-
- When you click **Create** and one of the rule types (**Scheduled query rule** or **Microsoft incident creation rule**) from the top menu in the **Analytics** blade, or if you select an existing analytics rule and click **Edit**, you'll open the rule wizard. When you select the **Automated response** tab, you will see a section called **Incident automation**, under which the automation rules that currently apply to this rule will be displayed. You can select an existing automation rule to edit, or click **Add new** to create a new one.
+ In the **Automated response** tab of the analytics rule wizard, under **Incident automation**, you can view, edit, and create automation rules that apply to the particular analytics rule being created or edited in the wizard.
- You'll notice that when you create the automation rule from here, the **Create new automation rule** panel shows the **analytics rule** condition as unavailable, because this rule is already set to apply only to the analytics rule you're editing in the wizard. All the other configuration options are still available to you.
+ You'll notice that when you create an automation rule from here, the **Create new automation rule** panel shows the **analytics rule** condition as unavailable, because this rule is already set to apply only to the analytics rule you're editing in the wizard. All the other configuration options are still available to you.
- **Incidents blade**
- You can also create an automation rule from the **Incidents** blade, in order to respond to a single, recurring incident. This is useful when creating a [suppression rule](#incident-suppression) for automatically closing "noisy" incidents. Select an incident from the queue and click **Create automation rule** from the top menu.
-
- You'll notice that the **Create new automation rule** panel has populated all the fields with values from the incident. It names the rule the same name as the incident, applies it to the analytics rule that generated the incident, and uses all the available entities in the incident as conditions of the rule. It also suggests a suppression (closing) action by default, and suggests an expiration date for the rule. You can add or remove conditions and actions, and change the expiration date, as you wish.
-
-## Auditing automation rule activity
+ You can also create an automation rule from the **Incidents** blade, in order to respond to a single, recurring incident. This is useful when creating a [suppression rule](#incident-suppression) for [automatically closing "noisy" incidents](false-positives.md).
-You may be interested in knowing what happened to a given incident, and what a certain automation rule may or may not have done to it. You have a full record of incident chronicles available to you in the *SecurityIncident* table in the **Logs** blade. Use the following query to see all your automation rule activity:
+ You'll notice that when you create an automation rule from here, the **Create new automation rule** panel has populated all the fields with values from the incident. It names the rule the same name as the incident, applies it to the analytics rule that generated the incident, and uses all the available entities in the incident as conditions of the rule. It also suggests a suppression (closing) action by default, and suggests an expiration date for the rule. You can add or remove conditions and actions, and change the expiration date, as you wish.
-```kusto
-SecurityIncident
-| where ModifiedBy contains "Automation"
-```
## Next steps
-In this document, you learned how to use automation rules to manage your Microsoft Sentinel incidents queue and implement some basic incident-handling automation.
+In this document, you learned about how automation rules can help you to manage your Microsoft Sentinel incidents queue and implement some basic incident-handling automation.
+- [Create and use Microsoft Sentinel automation rules to manage incidents](create-manage-use-automation-rules.md).
- To learn more about advanced automation options, see [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md).-- For help in implementing automation rules and playbooks, see [Tutorial: Use playbooks to automate threat responses in Microsoft Sentinel](tutorial-respond-threats-playbook.md).
+- For help in implementing playbooks, see [Tutorial: Use playbooks to automate threat responses in Microsoft Sentinel](tutorial-respond-threats-playbook.md).
sentinel Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automation.md
Microsoft Sentinel, in addition to being a Security Information and Event Manage
## Automation rules
-Automation rules (now generally available!) allow users to centrally manage the automation of incident handling. Besides letting you assign playbooks to incidents (not just to alerts as before), automation rules also allow you to automate responses for multiple analytics rules at once, automatically tag, assign, or close incidents without the need for playbooks, and control the order of actions that are executed. Automation rules will streamline automation use in Microsoft Sentinel and will enable you to simplify complex workflows for your incident orchestration processes.
+Automation rules (now generally available!) allow users to centrally manage the automation of incident handling. Besides letting you assign playbooks to incidents (not just to alerts as before), automation rules also allow you to automate responses for multiple analytics rules at once, automatically tag, assign, or close incidents without the need for playbooks, and control the order of actions that are executed. Automation rules also allow you to apply automations when an incident is **updated** (now in **Preview**), as well as when it's created. This new capability will further streamline automation use in Microsoft Sentinel and will enable you to simplify complex workflows for your incident orchestration processes.
Learn more with this [complete explanation of automation rules](automate-incident-handling-with-automation-rules.md).
In this document, you learned how Microsoft Sentinel uses automation to help you
- To learn about automation of incident handling, see [Automate incident handling in Microsoft Sentinel](automate-incident-handling-with-automation-rules.md). - To learn more about advanced automation options, see [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md).-- For help in implementing automation rules and playbooks, see [Tutorial: Use playbooks to automate threat responses in Microsoft Sentinel](tutorial-respond-threats-playbook.md).
+- To get started creating automation rules, see [Create and use Microsoft Sentinel automation rules to manage incidents](create-manage-use-automation-rules.md)
+- For help in implementing advanced automation with playbooks, see [Tutorial: Use playbooks to automate threat responses in Microsoft Sentinel](tutorial-respond-threats-playbook.md).
sentinel Create Manage Use Automation Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-manage-use-automation-rules.md
+
+ Title: Create and use Microsoft Sentinel automation rules to manage incidents | Microsoft Docs
+description: This article explains how to create and use automation rules in Microsoft Sentinel to manage and handle incidents, in order to maximize your SOC's efficiency and effectiveness in response to security threats.
++ Last updated : 05/23/2022+++
+# Create and use Microsoft Sentinel automation rules to manage incidents
++
+This article explains how to create and use automation rules in Microsoft Sentinel to manage and handle incidents, in order to maximize your SOC's efficiency and effectiveness in response to security threats.
+
+In this article you'll learn how to define the triggers and conditions that will determine when your automation rule will run, the various actions that you can have the rule perform, and the remaining features and functionalities.
+
+## Design your automation rule
+
+### Determine the scope
+
+The first step in designing and defining your automation rule is figuring out which incidents you want it to apply to. This determination will directly impact how you create the rule.
+
+You also want to determine your use case. What are you trying to accomplish with this automation? Consider the following options:
+
+- Suppress noisy incidents (see [this article on handling false positives](false-positives.md#add-exceptions-by-using-automation-rules) instead)
+- Triage new incidents by changing their status from New to Active and assigning an owner.
+- Tag incidents to classify them.
+- Escalate an incident by assigning a new owner.
+- Close resolved incidents, specifying a reason and adding comments.
+- Analyze the incident's contents (alerts, entities, and other properties) and take further action by calling a playbook.
+
+### Determine the trigger
+
+Do you want this automation to be activated when new incidents are created? Or any time an incident gets updated?
+
+Automation rules are triggered **when an incident is created or updated** (the update trigger is now in **Preview**). Recall that incidents are created from alerts by analytics rules, of which there are several types, as explained in [Detect threats with built-in analytics rules in Microsoft Sentinel](detect-threats-built-in.md).
+
+The following table shows the different possible ways that incidents can be created or updated that will cause an automation rule to run.
+
+| Trigger type | Events that cause the rule to run |
+| | |
+| **When incident is created** | - A new incident is created by an analytics rule.<br>- An incident is ingested from Microsoft 365 Defender.<br>- A new incident is created manually. |
+| **When incident is updated**<br>(Preview) | - An incident's status is changed (closed/reopened/triaged).<br>- An incident's owner is assigned or changed.<br>- An incident's severity is raised or lowered.<br>- Alerts are added to an incident.<br>- Comments, tags, or tactics are added to an incident. |
+
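+To see how both kinds of events appear in your incident records, you can query the *SecurityIncident* table. Here's a minimal sketch, assuming the standard *SecurityIncident* schema, where a row whose *LastModifiedTime* equals its *CreatedTime* is the creation record and every later row reflects an update:
+
+```kusto
+SecurityIncident
+| where TimeGenerated > ago(7d)
+// The creation record is the row whose modification time equals its creation time
+| extend ChangeType = iff(CreatedTime == LastModifiedTime, "Created", "Updated")
+| project TimeGenerated, IncidentNumber, Title, Status, ChangeType
+```
+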
+## Create your automation rule
+
+Most of the following instructions apply to any and all use cases for which you'll create automation rules.
+
+- For the use case of suppressing noisy incidents, see [this article on handling false positives](false-positives.md#add-exceptions-by-using-automation-rules).
+- For creating an automation rule that will apply to a single specific analytics rule, see [this article on configuring automated response in analytics rules](detect-threats-custom.md#set-automated-responses-and-create-the-rule).
+
+1. From the **Automation** blade in the Microsoft Sentinel navigation menu, select **Create** from the top menu and choose **Automation rule**.
+
+ :::image type="content" source="./media/create-manage-use-automation-rules/add-rule-automation.png" alt-text="Screenshot of creating a new automation rule in the Automation blade." lightbox="./media/create-manage-use-automation-rules/add-rule-automation.png":::
+
+1. The **Create new automation rule** panel opens. Enter a name for your rule.
+
+ :::image type="content" source="media/create-manage-use-automation-rules/create-automation-rule.png" alt-text="Screenshot of Create new automation rule wizard.":::
+
+1. If you want the automation rule to take effect only on certain analytics rules, specify which ones by modifying the **If Analytics rule name** condition.
+
+### Choose your trigger
+
+From the **Trigger** drop-down, select **When incident is created** or **When incident is updated (Preview)** according to what you decided when designing your rule.
++
+### Add conditions
+
+Add any other conditions you want this automation rule's activation to depend on. Select **+ Add condition** and choose conditions from the drop-down list. The list of conditions is populated by incident property and [entity property](entities-reference.md) fields.
+
+1. Select a property from the first drop-down box on the left. You can begin typing any part of a property name in the search box to dynamically filter the list, so you can find what you're looking for quickly.
+ :::image type="content" source="media/create-manage-use-automation-rules/filter-list.png" alt-text="Screenshot of typing in a search box to filter the list of choices.":::
+
+1. Select an operator from the next drop-down box to the right.
+ :::image type="content" source="media/create-manage-use-automation-rules/select-operator.png" alt-text="Screenshot of selecting a condition operator for automation rules.":::
+
+ The list of operators you can choose from varies according to the selected trigger and property. Here's a summary of what's available:
+
+ #### Conditions available with the create trigger
+
+ | Property | Operator set |
+ | -- | -- |
+ | - Title<br>- Description<br>- Tag<br>- All listed entity properties | - Equals/Does not equal<br>- Contains/Does not contain<br>- Starts with/Does not start with<br>- Ends with/Does not end with |
+ | - Severity<br>- Status<br>- Incident provider | - Equals/Does not equal |
+ | - Tactics<br>- Alert product names | - Contains/Does not contain |
+
+ #### Conditions available with the update trigger
+
+ | Property | Operator set |
+ | -- | -- |
+ | - Title<br>- Description<br>- Tag<br>- All listed entity properties | - Equals/Does not equal<br>- Contains/Does not contain<br>- Starts with/Does not start with<br>- Ends with/Does not end with |
+ | - Tag (in addition to above)<br>- Alerts<br>- Comments | - Added |
+ | - Severity<br>- Status | - Equals/Does not equal<br>- Changed<br>- Changed from<br>- Changed to |
+ | - Owner | - Changed |
+ | - Incident provider<br>- Updated by | - Equals/Does not equal |
+ | - Tactics | - Contains/Does not contain<br>- Added |
+ | - Alert product names | - Contains/Does not contain |
+
+1. Enter a value in the text box on the right. Depending on the property you chose, this might instead be a drop-down list from which to select one or more values. You might also be able to add several values by selecting the icon to the right of the text box (highlighted by the red arrow below).
+
+ :::image type="content" source="media/create-manage-use-automation-rules/add-values-to-condition.png" alt-text="Screenshot of adding values to your condition in automation rules.":::
+
+### Add actions
+
+Choose the actions you want this automation rule to take. Available actions include **Assign owner**, **Change status**, **Change severity**, **Add tags**, and **Run playbook**. You can add as many actions as you like.
++
+If you add a **Run playbook** action, you will be prompted to choose from the drop-down list of available playbooks.
+
+- Only playbooks that start with the **incident trigger** can be run from automation rules, so only they will appear in the list.
+
+- <a name="explicit-permissions"></a>Microsoft Sentinel must be granted explicit permissions in order to run playbooks based on the incident trigger. If a playbook appears "grayed out" in the drop-down list, it means Microsoft Sentinel does not have permissions to that playbook's resource group. Click the **Manage playbook permissions** link to assign permissions.
+
+ In the **Manage permissions** panel that opens up, mark the check boxes of the resource groups containing the playbooks you want to run, and click **Apply**.
+ :::image type="content" source="./media/tutorial-respond-threats-playbook/manage-permissions.png" alt-text="Manage permissions":::
+
+ You yourself must have **owner** permissions on any resource group to which you want to grant Microsoft Sentinel permissions, and you must have the **Logic App Contributor** role on any resource group containing playbooks you want to run.
+
+- If you don't yet have a playbook that will take the action you have in mind, [create a new playbook](tutorial-respond-threats-playbook.md). You will have to exit the automation rule creation process and restart it after you have created your playbook.
+
+### Finish creating your rule
+
+1. Set an **expiration date** for your automation rule if you want it to have one.
+
+1. Enter a number under **Order** to determine where in the sequence of automation rules this rule will run.
+
+1. Click **Apply**. You're done!
+
+## Audit automation rule activity
+
+Find out what automation rules may have done to a given incident. A full record of the incident's history is available to you in the *SecurityIncident* table in the **Logs** blade. Use the following query to see all your automation rule activity:
+
+```kusto
+SecurityIncident
+| where ModifiedBy contains "Automation"
+```
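+
+If you need more detail, you can extend that query. Here's a sketch, assuming the standard *SecurityIncident* columns (*IncidentNumber*, *Title*, *Status*, *Owner*); adjust the projected fields and time window as needed:
+
+```kusto
+SecurityIncident
+| where TimeGenerated > ago(14d)
+| where ModifiedBy contains "Automation"
+// Show the state of the incident after each automation-driven modification
+| project TimeGenerated, IncidentNumber, Title, Status, Owner, ModifiedBy
+| order by TimeGenerated desc
+```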
+
+## Automation rules execution
+
+Automation rules are run sequentially, according to the order you determine. Each automation rule is executed after the previous one has finished its run. Within an automation rule, all actions are run sequentially in the order in which they are defined.
+
+Playbook actions within an automation rule may be treated differently under some circumstances, according to the following criteria:
+
+| Playbook run time | Automation rule advances to the next action... |
+| -- | |
+| Less than a second | Immediately after playbook is completed |
+| Less than two minutes | Up to two minutes after playbook began running,<br>but no more than 10 seconds after the playbook is completed |
+| More than two minutes | Two minutes after playbook began running,<br>regardless of whether or not it was completed |
+
+## Next steps
+
+In this document, you learned how to use automation rules to manage your Microsoft Sentinel incidents queue and implement some basic incident-handling automation.
+
+- To learn more about advanced automation options, see [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md).
+- For help in implementing automation rules and playbooks, see [Tutorial: Use playbooks to automate threat responses in Microsoft Sentinel](tutorial-respond-threats-playbook.md).
sentinel Deploy Side By Side https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/deploy-side-by-side.md
+
+ Title: Deploy Microsoft Sentinel side-by-side to an existing SIEM.
+description: Learn how to deploy Microsoft Sentinel side-by-side to an existing SIEM.
++ Last updated : 05/30/2022+++
+# Deploy Microsoft Sentinel side-by-side to an existing SIEM
+
+Your security operations center (SOC) team uses centralized security information and event management (SIEM) and security orchestration, automation, and response (SOAR) solutions to protect your increasingly decentralized digital estate.
+
+This article describes how to deploy Microsoft Sentinel in a side-by-side configuration together with your existing SIEM.
+
+## Select a side-by-side approach and method
+
+Use a side-by-side architecture either as a short-term, transitional phase that leads to a completely cloud-hosted SIEM, or as a medium- to long-term operational model, depending on the SIEM needs of your organization.
+
+For example, while the recommended architecture is to use a side-by-side architecture just long enough to complete a migration to Microsoft Sentinel, your organization may want to stay with a side-by-side configuration for longer, such as if you aren't ready to move away from your legacy SIEM. Typically, organizations that use a long-term, side-by-side configuration use Microsoft Sentinel to analyze only their cloud data.
+
+Consider the pros and cons for each approach when deciding which one to use.
+
+> [!NOTE]
+> Many organizations avoid running multiple on-premises analytics solutions because of cost and complexity.
+>
+> Microsoft Sentinel provides [pay-as-you-go pricing](billing.md) and flexible infrastructure, giving SOC teams time to adapt to the change. Deploy and test your content at a pace that works best for your organization, and learn about how to [fully migrate to Microsoft Sentinel](migration.md).
+>
+### Short-term approach
+
+|**Pros** |**Cons** |
+|||
+|• Gives SOC staff time to adapt to new processes as you deploy workloads and analytics.<br><br>• Gains deep correlation across all data sources for hunting scenarios.<br><br>• Eliminates having to do analytics between SIEMs, create forwarding rules, and close investigations in two places.<br><br>• Enables your SOC team to quickly downgrade legacy SIEM solutions, eliminating infrastructure and licensing costs. |• Can require a steep learning curve for SOC staff. |
+
+### Medium- to long-term approach
+
+|**Pros** |**Cons** |
+|||
+|• Lets you use key Microsoft Sentinel benefits, like AI, ML, and investigation capabilities, without moving completely away from your legacy SIEM.<br><br>• Saves money compared to your legacy SIEM, by analyzing cloud or Microsoft data in Microsoft Sentinel. |• Increases complexity by separating analytics across different databases.<br><br>• Splits case management and investigations for multi-environment incidents.<br><br>• Incurs greater staff and infrastructure costs.<br><br>• Requires SOC staff to be knowledgeable about two different SIEM solutions. |
+
+### Send alerts from a legacy SIEM to Microsoft Sentinel (Recommended)
+
+Send alerts, or indicators of anomalous activity, from your legacy SIEM to Microsoft Sentinel.
+
+- Ingest and analyze cloud data in Microsoft Sentinel.
+- Use your legacy SIEM to analyze on-premises data and generate alerts.
+- Forward the alerts from your on-premises SIEM into Microsoft Sentinel to establish a single interface.
+
+For example, forward alerts using [Logstash](connect-logstash.md), [APIs](/rest/api/securityinsights/), or [Syslog](connect-syslog.md), and store them in [JSON](https://techcommunity.microsoft.com/t5/azure-sentinel/tip-easily-use-json-fields-in-sentinel/ba-p/768747) format in your Microsoft Sentinel [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md).
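+
+Once forwarded alerts land in the workspace, you can parse the JSON payload with KQL. The following sketch is illustrative only: the `LegacySiemAlerts_CL` custom table, the `RawAlert_s` column, and the JSON field names are hypothetical placeholders for whatever your ingestion pipeline actually creates:
+
+```kusto
+LegacySiemAlerts_CL
+// Parse the raw JSON alert payload into a dynamic object
+| extend Alert = parse_json(RawAlert_s)
+| project TimeGenerated,
+          AlertName = tostring(Alert.name),
+          Severity = tostring(Alert.severity),
+          SourceIp = tostring(Alert.src_ip)
+```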
+
+By sending alerts from your legacy SIEM to Microsoft Sentinel, your team can cross-correlate and investigate those alerts in Microsoft Sentinel. The team can still access the legacy SIEM for deeper investigation if needed. Meanwhile, you can continue deploying data sources over an extended transition period.
+
+This recommended, side-by-side deployment method provides you with full value from Microsoft Sentinel and the ability to deploy data sources at the pace that's right for your organization. This approach avoids duplicating costs for data storage and ingestion while you move your data sources over.
+
+For more information, see:
+
+- [Migrate QRadar offenses to Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/migrating-qradar-offenses-to-azure-sentinel/ba-p/2102043)
+- [Export data from Splunk to Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/how-to-export-data-from-splunk-to-azure-sentinel/ba-p/1891237).
+
+If you want to fully migrate to Microsoft Sentinel, review the full [migration guide](migration.md).
+
+### Send alerts and enriched incidents from Microsoft Sentinel to a legacy SIEM
+
+Analyze some data in Microsoft Sentinel, such as cloud data, and then send the generated alerts to a legacy SIEM. Use the *legacy* SIEM as your single interface to do cross-correlation with the alerts that Microsoft Sentinel generated. You can still use Microsoft Sentinel for deeper investigation of the Microsoft Sentinel-generated alerts.
+
+This configuration is cost effective, as you can move your cloud data analysis to Microsoft Sentinel without duplicating costs or paying for data twice. You still have the freedom to migrate at your own pace. As you continue to shift data sources and detections over to Microsoft Sentinel, it becomes easier to migrate to Microsoft Sentinel as your primary interface. However, simply forwarding enriched incidents to a legacy SIEM limits the value you get from Microsoft Sentinel's investigation, hunting, and automation capabilities.
+
+For more information, see:
+
+- [Send enriched Microsoft Sentinel alerts to your legacy SIEM](https://techcommunity.microsoft.com/t5/azure-sentinel/sending-enriched-azure-sentinel-alerts-to-3rd-party-siem-and/ba-p/1456976)
+- [Send enriched Microsoft Sentinel alerts to IBM QRadar](https://techcommunity.microsoft.com/t5/azure-sentinel/azure-sentinel-side-by-side-with-qradar/ba-p/1488333)
+- [Ingest Microsoft Sentinel alerts into Splunk](https://techcommunity.microsoft.com/t5/azure-sentinel/azure-sentinel-side-by-side-with-splunk/ba-p/1211266)
+
+### Other methods
+
+The following table describes side-by-side configurations that are *not* recommended, with details as to why:
+
+|Method |Description |
+|||
+|**Send Microsoft Sentinel logs to your legacy SIEM** | With this method, you'll continue to experience the cost and scale challenges of your on-premises SIEM. <br><br>You'll pay for data ingestion in Microsoft Sentinel, along with storage costs in your legacy SIEM, and you can't take advantage of Microsoft Sentinel's SIEM and SOAR detections, analytics, User Entity Behavior Analytics (UEBA), AI, or investigation and automation tools. |
+|**Send logs from a legacy SIEM to Microsoft Sentinel** | While this method provides you with the full functionality of Microsoft Sentinel, your organization still pays for two different data ingestion sources. Besides adding architectural complexity, this model can result in higher costs. |
+|**Use Microsoft Sentinel and your legacy SIEM as two fully separate solutions** | You could use Microsoft Sentinel to analyze some data sources, like your cloud data, and continue to use your on-premises SIEM for other sources. This setup allows for clear boundaries for when to use each solution, and avoids duplication of costs. <br><br>However, cross-correlation becomes difficult, and you can't fully diagnose attacks that cross both sets of data sources. In today's landscape, where threats often move laterally across an organization, such visibility gaps can pose significant security risks. |
+
+## Use automation to streamline processes
+
+Use automated workflows to group and prioritize alerts into a common incident, and modify its priority.
+
+For more information, see:
+
+- [Security Orchestration, Automation, and Response (SOAR) in Microsoft Sentinel](automation.md).
+- [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md)
+- [Automate incident handling in Microsoft Sentinel with automation rules](automate-incident-handling-with-automation-rules.md)
+
+## Next steps
+
+Explore Microsoft Sentinel resources to expand your skills and get the most out of Microsoft Sentinel.
+
+Also consider increasing your threat protection by using Microsoft Sentinel alongside [Microsoft 365 Defender](./microsoft-365-defender-sentinel-integration.md) and [Microsoft Defender for Cloud](../security-center/azure-defender.md) for [integrated threat protection](https://www.microsoft.com/security/business/threat-protection). Benefit from the breadth of visibility that Microsoft Sentinel delivers, while diving deeper into detailed threat analysis.
+
+For more information, see:
+
+- [Rule migration best practices](https://techcommunity.microsoft.com/t5/azure-sentinel/best-practices-for-migrating-detection-rules-from-arcsight/ba-p/2216417)
+- [Webinar: Best Practices for Converting Detection Rules](https://www.youtube.com/watch?v=njXK1h9lfR4)
+- [Security Orchestration, Automation, and Response (SOAR) in Microsoft Sentinel](automation.md)
+- [Manage your SOC better with incident metrics](manage-soc-with-incident-metrics.md)
+- [Microsoft Sentinel learning path](/learn/paths/security-ops-sentinel/)
+- [SC-200 Microsoft Security Operations Analyst certification](/learn/certifications/exams/sc-200)
+- [Microsoft Sentinel Ninja training](https://techcommunity.microsoft.com/t5/azure-sentinel/become-an-azure-sentinel-ninja-the-complete-level-400-training/ba-p/1246310)
+- [Investigate an attack on a hybrid environment with Microsoft Sentinel](https://mslearn.cloudguides.com/guides/Investigate%20an%20attack%20on%20a%20hybrid%20environment%20with%20Azure%20Sentinel)
sentinel Detect Threats Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/detect-threats-custom.md
In the **Set rule logic** tab, you can either write a query directly in the **Ru
### Alert enrichment
-> [!IMPORTANT]
-> The alert enrichment features are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- - Use the **Entity mapping** configuration section to map parameters from your query results to Microsoft Sentinel-recognized entities. Entities enrich the rules' output (alerts and incidents) with essential information that serves as the building blocks of any investigative processes and remedial actions that follow. They are also the criteria by which you can group alerts together into incidents in the **Incident settings** tab. Learn more about [entities in Microsoft Sentinel](entities.md).
If you see that your query would trigger too many or too frequent alerts, you ca
### Event grouping and rule suppression
-> [!IMPORTANT]
-> Event grouping is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- - Under **Event grouping**, choose one of two ways to handle the grouping of **events** into **alerts**: - **Group all events into a single alert** (the default setting). The rule generates a single alert every time it runs, as long as the query returns more results than the specified **alert threshold** above. The alert includes a summary of all the events returned in the results.
If you see that your query would trigger too many or too frequent alerts, you ca
In the **Incident Settings** tab, you can choose whether and how Microsoft Sentinel turns alerts into actionable incidents. If this tab is left alone, Microsoft Sentinel will create a single, separate incident from each and every alert. You can choose to have no incidents created, or to group several alerts into a single incident, by changing the settings in this tab.
-> [!IMPORTANT]
-> The incident settings tab is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- For example: :::image type="content" source="media/tutorial-detect-threats-custom/incident-settings-tab.png" alt-text="Define the incident creation and alert grouping settings":::
In the **Alert grouping** section, if you want a single incident to be generated
## Set automated responses and create the rule 1. In the **Automated responses** tab, you can set automation based on the alert or alerts generated by this analytics rule, or based on the incident created by the alerts.+ - For alert-based automation, select from the drop-down list under **Alert automation** any playbooks you want to run automatically when an alert is generated.
- - For incident-based automation, select or create an automation rule under **Incident automation (preview)**. You can call playbooks (those based on the **incident trigger**) from these automation rules, as well as automate triage, assignment, and closing.
+
+ - For incident-based automation, the grid displayed under **Incident automation** shows the automation rules that already apply to this analytics rule (by virtue of it meeting the conditions defined in those rules). You can edit any of these by selecting the ellipsis at the end of each row. Or, you can [create a new automation rule](create-manage-use-automation-rules.md).
+
+ You can call playbooks (those based on the **incident trigger**) from these automation rules, as well as automate triage, assignment, and closing.
+ - For more information and instructions on creating playbooks and automation rules, see [Automate threat responses](tutorial-respond-threats-playbook.md#automate-threat-responses).+ - For more information about when to use the **alert trigger** or the **incident trigger**, see [Use triggers and actions in Microsoft Sentinel playbooks](playbook-triggers-actions.md#microsoft-sentinel-triggers-summary). :::image type="content" source="media/tutorial-detect-threats-custom/automated-response-tab.png" alt-text="Define the automated response settings":::
sentinel False Positives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/false-positives.md
To add an automation rule to handle a false positive:
1. In Microsoft Sentinel, under **Incidents**, select the incident you want to create an exception for. 1. Select **Create automation rule**. 1. In the **Create new automation rule** sidebar, optionally modify the new rule name to identify the exception, rather than just the alert rule name.
-1. Under **Conditions**, optionally add more **Analytic rule name**s to apply the exception to.
+1. Under **Conditions**, optionally add more **Analytics rule name**s to apply the exception to.
+ Select the drop-down box containing the analytics rule name and select more analytics rules from the list.
1. The sidebar presents the specific entities in the current incident that might have caused the false positive. Keep the automatic suggestions, or modify them to fine-tune the exception. For example, you could change a condition on an IP address to apply to an entire subnet. :::image type="content" source="media/false-positives/create-rule.png" alt-text="Screenshot showing how to create an automation rule for an incident in Microsoft Sentinel.":::
-1. After you define the trigger, you can continue to define what the rule does:
+1. After you're satisfied with the conditions, you can continue to define what the rule does:
:::image type="content" source="media/false-positives/apply-rule.png" alt-text="Screenshot showing how to finish creating and applying an automation rule in Microsoft Sentinel."::: - The rule is already configured to close an incident that meets the exception criteria.
+ - You can keep the specified closing reason as is, or you can change it if another reason is more appropriate.
- You can add a comment to the automatically closed incident that explains the exception. For example, you could specify that the incident originated from known administrative activity. - By default, the rule is set to expire automatically after 24 hours. This expiration might be what you want, and reduces the chance of false negative errors. If you want a longer exception, set **Rule expiration** to a later time.
+1. You can add more actions if you want. For example, you can add a tag to the incident, or you can run a playbook to send an email or a notification or to synchronize with an external system.
+ 1. Select **Apply** to activate the exception. > [!TIP]
-> You can also create an automation rule from scratch, without starting from an incident. Select **Automation** from the Microsoft Sentinel left navigation menu, and then select **Create** > **Add new rule**.
+> You can also create an automation rule from scratch, without starting from an incident. Select **Automation** from the Microsoft Sentinel left navigation menu, and then select **Create** > **Add new rule**. [Learn more about automation rules](automate-incident-handling-with-automation-rules.md).
## Add exceptions by modifying analytics rules
sentinel Migration Arcsight Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-arcsight-automation.md
+
+ Title: Migrate ArcSight SOAR automation to Microsoft Sentinel | Microsoft Docs
+description: Learn how to identify SOAR use cases, and how to migrate your ArcSight SOAR automation to Microsoft Sentinel.
+++ Last updated : 05/03/2022++
+# Migrate ArcSight SOAR automation to Microsoft Sentinel
+
+Microsoft Sentinel provides Security Orchestration, Automation, and Response (SOAR) capabilities with [automation rules](automate-incident-handling-with-automation-rules.md) and [playbooks](tutorial-respond-threats-playbook.md). Automation rules automate incident handling and response, and playbooks run predetermined sequences of actions to respond to and remediate threats. This article discusses how to identify SOAR use cases, and how to migrate your ArcSight SOAR automation to Microsoft Sentinel.
+
+Automation rules simplify complex workflows for your incident orchestration processes, and allow you to centrally manage your incident handling automation.
+
+With automation rules, you can:
+- Perform simple automation tasks without necessarily using playbooks. For example, you can assign incidents to owners, tag incidents, change their status, and close them.
+- Automate responses for multiple analytics rules at once.
+- Control the order of actions that are executed.
+- Run playbooks for those cases where more complex automation tasks are necessary.
+
+## Identify SOAR use cases
+
+Here's what you need to think about when migrating SOAR use cases from ArcSight.
+- **Use case quality**. Choose good use cases for automation. Use cases should be based on procedures that are clearly defined, with minimal variation, and a low false-positive rate, so that automation produces a clear efficiency gain.
+- **Manual intervention**. Automated responses can have wide-ranging effects, so automations should require human input to confirm high-impact actions before they're taken.
+- **Binary criteria**. To increase response success, decision points within an automated workflow should be as limited as possible, with binary criteria. Binary criteria reduce the need for human intervention and enhance outcome predictability.
+- **Accurate alerts or data**. Response actions are dependent on the accuracy of signals such as alerts. Alerts and enrichment sources should be reliable. Microsoft Sentinel resources such as watchlists and reliable threat intelligence can enhance reliability (see the watchlist sketch after this list).
+- **Analyst role**. While automating wherever possible is the goal, reserve more complex tasks for analysts, and give them the opportunity to provide input into workflows that require validation. In short, response automation should augment and extend analyst capabilities.
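+
+As an example of the enrichment point above, here's a sketch of querying a Microsoft Sentinel watchlist from KQL. The `HighValueAssets` watchlist alias, and the assumption that its *SearchKey* column holds computer names, are hypothetical; substitute your own watchlist and key field:
+
+```kusto
+// _GetWatchlist returns the watchlist rows; SearchKey is its indexed lookup column
+let highValueHosts = _GetWatchlist('HighValueAssets') | project SearchKey;
+SecurityEvent
+| where Computer in (highValueHosts)
+| summarize EventCount = count() by Computer, EventID
+```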
+
+## Migrate SOAR workflow
+
+This section shows how key SOAR concepts in ArcSight translate to Microsoft Sentinel components, and provides general guidelines for how to migrate each step or component in the SOAR workflow.
++
+|Step (in diagram) |ArcSight |Microsoft Sentinel |
+||||
+|1 |Ingest events into Enterprise Security Manager (ESM) and trigger correlation events. |Ingest events into the Log Analytics workspace. |
+|2 |Automatically filter alerts for case creation. |Use [analytics rules](detect-threats-built-in.md#use-built-in-analytics-rules) to trigger alerts. Enrich alerts using the [custom details feature](surface-custom-details-in-alerts.md) to create dynamic incident names. |
+|3 |Classify cases. |Use [automation rules](automate-incident-handling-with-automation-rules.md). With automation rules, Microsoft Sentinel treats incidents according to the analytics rule that triggered the incident, and the incident properties that match defined criteria. |
+|4 |Consolidate cases. |You can consolidate several alerts to a single incident according to properties such as matching entities, alert details, or creation timeframe, using the alert grouping feature. |
+|5 |Dispatch cases. |Assign incidents to specific analysts using [an integration](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/automate-incident-assignment-with-shifts-for-teams/ba-p/2297549) between Microsoft Teams, Azure Logic Apps, and Microsoft Sentinel automation rules. |
+
+## Map SOAR components
+
+Review which Microsoft Sentinel or Azure Logic Apps features map to the main ArcSight SOAR components.
+
+|ArcSight |Microsoft Sentinel/Azure Logic Apps |
+|||
+|Trigger |[Trigger](../logic-apps/logic-apps-overview.md) |
+|Automation bit |[Azure Function connector](../logic-apps/logic-apps-azure-functions.md) |
+|Action |[Action](../logic-apps/logic-apps-overview.md) |
+|Scheduled playbooks |Playbooks initiated by the [recurrence trigger](../connectors/connectors-native-recurrence.md) |
+|Workflow playbooks |Playbooks automatically initiated by Microsoft Sentinel [alert or incident triggers](playbook-triggers-actions.md) |
+|Marketplace |• [Automation > Templates tab](use-playbook-templates.md)<br>• [Content hub catalog](sentinel-solutions-catalog.md)<br>• [GitHub](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Block-OnPremADUser) |
+
+## Operationalize playbooks and automation rules in Microsoft Sentinel
+
+Most of the playbooks that you use with Microsoft Sentinel are available in the [Automation > Templates tab](use-playbook-templates.md), the [Content hub catalog](sentinel-solutions-catalog.md), or [GitHub](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Block-OnPremADUser). In some cases, however, you might need to create playbooks from scratch or from existing templates.
+
+You typically build your custom logic app using the Azure Logic Apps Designer. Logic app code is based on [Azure Resource Manager (ARM) templates](../azure-resource-manager/templates/overview.md), which facilitate development, deployment, and portability of Azure Logic Apps across multiple environments. To convert your custom playbook into a portable ARM template, you can use the [ARM template generator](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/export-microsoft-sentinel-playbooks-or-azure-logic-apps-with/ba-p/3275898).
+
+Use these resources for cases where you need to build your own playbooks either from scratch or from existing templates.
+- [Automate incident handling in Microsoft Sentinel](automate-incident-handling-with-automation-rules.md)
+- [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md)
+- [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](tutorial-respond-threats-playbook.md)
+- [How to use Microsoft Sentinel for Incident Response, Orchestration and Automation](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/how-to-use-azure-sentinel-for-incident-response-orchestration/ba-p/2242397)
+- [Adaptive Cards to enhance incident response in Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/using-microsoft-teams-adaptive-cards-to-enhance-incident/ba-p/3330941)
+
+## SOAR post migration best practices
+
+Here are best practices you should take into account after your SOAR migration:
+
+- After you migrate your playbooks, test the playbooks extensively to ensure that the migrated actions work as expected.
+- Periodically review your automations to explore ways to further simplify or enhance your SOAR. Microsoft Sentinel constantly adds new connectors and actions that can help you to further simplify or increase the effectiveness of your current response implementations.
+- Monitor the performance of your playbooks using the [Playbooks health monitoring workbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/what-s-new-monitoring-your-logic-apps-playbooks-in-azure/ba-p/1873211).
+- Use managed identities and service principals: Authenticate against various Azure services within your Logic Apps, store the secrets in Azure Key Vault, and obscure the output of the flow execution. We also recommend that you [monitor the activities of these service principals](https://techcommunity.microsoft.com/t5/azure-sentinel/non-interactive-logins-minimizing-the-blind-spot/ba-p/2287932).
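+
+To support that last point, here's a sketch of monitoring service principal sign-in activity from your workspace. It assumes you've configured Azure AD diagnostic settings to export the *AADServicePrincipalSignInLogs* table; the seven-day window is an arbitrary choice:
+
+```kusto
+AADServicePrincipalSignInLogs
+| where TimeGenerated > ago(7d)
+// ResultType "0" indicates a successful sign-in
+| summarize SignInCount = count() by ServicePrincipalName, ResourceDisplayName, ResultType
+| order by SignInCount desc
+```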
+
+## Next steps
+
+In this article, you learned how to map your SOAR automation from ArcSight to Microsoft Sentinel.
+
+> [!div class="nextstepaction"]
+> [Export your historical data](migration-arcsight-historical-data.md)
sentinel Migration Arcsight Detection Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-arcsight-detection-rules.md
+
+ Title: Migrate ArcSight detection rules to Microsoft Sentinel | Microsoft Docs
+description: Identify, compare, and migrate your ArcSight detection rules to Microsoft Sentinel built-in rules.
+++ Last updated : 05/03/2022+++
+# Migrate ArcSight detection rules to Microsoft Sentinel
+
+This article describes how to identify, compare, and migrate your ArcSight detection rules to Microsoft Sentinel analytics rules.
+
+## Identify and migrate rules
+
+Microsoft Sentinel uses machine learning analytics to create high-fidelity and actionable incidents, and some of your existing detections may be redundant in Microsoft Sentinel. Therefore, don't migrate all of your detection and analytics rules blindly. Review these considerations as you identify your existing detection rules.
+
+- Make sure to select use cases that justify rule migration, considering business priority and efficiency.
+- Check that you [understand Microsoft Sentinel rule types](detect-threats-built-in.md#view-built-in-detections).
+- Check that you understand the [rule terminology](#compare-rule-terminology).
+- Review any rules that haven't triggered any alerts in the past 6 to 12 months, and determine whether they're still relevant (a query sketch for gauging rule activity follows this list).
+- Eliminate low-level threats or alerts that you routinely ignore.
+- Use existing functionality, and check whether Microsoft Sentinel's [built-in analytics rules](https://github.com/Azure/Azure-Sentinel/tree/master/Detections) might address your current use cases. Because Microsoft Sentinel uses machine learning analytics to produce high-fidelity and actionable incidents, it's likely that some of your existing detections won't be required anymore.
+- Confirm connected data sources and review your data connection methods. Revisit data collection conversations to ensure data depth and breadth across the use cases you plan to detect.
+- Explore community resources such as the [SOC Prime Threat Detection Marketplace](https://my.socprime.com/tdm/) to check whether your rules are available.
+- Consider whether an online query converter such as Uncoder.io might work for your rules.
+- If rules aren't available or can't be converted, they need to be created manually, using a KQL query. Review the [rules mapping](#map-and-compare-rule-samples) to create new queries.
+
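+As a rough aid for the rule-activity review mentioned in this list, if you already forward ArcSight events to Microsoft Sentinel through the CEF connector, a query like the following sketch can show which correlation events actually fire. It assumes CEF data lands in the *CommonSecurityEvent* table; the 180-day window is adjustable:
+
+```kusto
+CommonSecurityEvent
+| where TimeGenerated > ago(180d)
+// Detections absent from these results didn't fire at all in the window
+| summarize EventCount = count(), LastSeen = max(TimeGenerated) by Activity, DeviceEventClassID
+| order by EventCount asc
+```
+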
+Learn more about [best practices for migrating detection rules](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/best-practices-for-migrating-detection-rules-from-arcsight/ba-p/2216417).
+
+**To migrate your analytics rules to Microsoft Sentinel**:
+
+1. Verify that you have a testing system in place for each rule you want to migrate.
+
+ 1. **Prepare a validation process** for your migrated rules, including full test scenarios and scripts.
+
+ 1. **Ensure that your team has useful resources** to test your migrated rules.
+
+ 1. **Confirm that you have any required data sources connected,** and review your data connection methods.
+
+1. Verify whether your detections are available as built-in templates in Microsoft Sentinel:
+
+ - **If the built-in rules are sufficient**, use built-in rule templates to create rules for your own workspace.
+
+ In Microsoft Sentinel, go to the **Configuration > Analytics > Rule templates** tab, and create and update each relevant analytics rule.
+
+ For more information, see [Detect threats out-of-the-box](detect-threats-built-in.md).
+
+ - **If you have detections that aren't covered by Microsoft Sentinel's built-in rules**, try an online query converter, such as [Uncoder.io](https://uncoder.io/) to convert your queries to KQL.
+
+ Identify the trigger condition and rule action, and then construct and review your KQL query.
+
+ - **If neither the built-in rules nor an online rule converter is sufficient**, you'll need to create the rule manually. In such cases, use the following steps to start creating your rule:
+
+ 1. **Identify the data sources you want to use in your rule**. You'll want to create a mapping table between data sources and data tables in Microsoft Sentinel to identify the tables you want to query.
+
+ 1. **Identify any attributes, fields, or entities** in your data that you want to use in your rules.
+
+ 1. **Identify your rule criteria and logic**. At this stage, you may want to use rule templates as samples for how to construct your KQL queries.
+
+ Consider filters, correlation rules, active lists, reference sets, watchlists, detection anomalies, aggregations, and so on. You might use references provided by your legacy SIEM to understand [how to best map your query syntax](#map-and-compare-rule-samples).
+
+ 1. **Identify the trigger condition and rule action, and then construct and review your KQL query**. When reviewing your query, consider KQL optimization guidance resources.
+
+1. Test the rule with each of your relevant use cases. If it doesn't provide expected results, you may want to review the KQL and test it again.
+
+1. When you're satisfied, you can consider the rule migrated. Create a playbook for your rule action as needed. For more information, see [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md).
+
+Learn more about analytics rules.
+
+- [**Create custom analytics rules to detect threats**](detect-threats-custom.md). Use [alert grouping](detect-threats-custom.md#alert-grouping) to reduce alert fatigue by grouping alerts that occur within a given timeframe.
+- [**Map data fields to entities in Microsoft Sentinel**](map-data-fields-to-entities.md) to enable SOC engineers to define entities as part of the evidence to track during an investigation. Entity mapping also makes it possible for SOC analysts to take advantage of an intuitive [investigation graph](investigate-cases.md#use-the-investigation-graph-to-deep-dive) that can help reduce time and effort.
+- [**Investigate incidents with UEBA data**](investigate-with-ueba.md), as an example of how to use evidence to surface events, alerts, and any bookmarks associated with a particular incident in the incident preview pane.
+- [**Kusto Query Language (KQL)**](/azure/data-explorer/kusto/query/), which you can use to send read-only requests to your [Log Analytics](../azure-monitor/logs/log-analytics-tutorial.md) database to process data and return results. KQL is also used across other Microsoft services, such as [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender) and [Application Insights](../azure-monitor/app/app-insights-overview.md).
+
+## Compare rule terminology
+
+This table helps you to clarify the concept of a rule in Microsoft Sentinel compared to ArcSight.
+
+| |ArcSight |Microsoft Sentinel |
+||||
+|**Rule type** |• Filter rule<br>• Join rule<br>• Active list rule<br>• And more |• Scheduled query<br>• Fusion<br>• Microsoft Security<br>• Machine Learning (ML) Behavior Analytics |
+|**Criteria** |Define in rule conditions |Define in KQL |
+|**Trigger condition** |• Define in action<br>• Define in aggregation (for event aggregation) |Threshold: Number of query results |
+|**Action** |• Set event field<br>• Send notification<br>• Create new case<br>• Add to active list<br>• And more |• Create alert or incident<br>• Integrates with Logic Apps |
+
+## Map and compare rule samples
+
+Use these samples to compare and map rules from ArcSight to Microsoft Sentinel in various scenarios.
+
+|Rule |Description |Sample detection rule (ArcSight) |Sample KQL query |Resources |
+||||||
+|Filter (`AND`) |A sample rule with `AND` conditions. The event must match all conditions. |[Filter (AND) example](#filter-and-example-arcsight) |[Filter (AND) example](#filter-and-example-kql) |String filter:<br>• [String operators](/azure/data-explorer/kusto/query/datatypes-string-operators#operators-on-strings)<br><br>Numerical filter:<br>• [Numerical operators](/azure/data-explorer/kusto/query/numoperators)<br><br>Datetime filter:<br>• [ago](/azure/data-explorer/kusto/query/agofunction)<br>• [Datetime](/azure/data-explorer/kusto/query/datetime-timespan-arithmetic)<br>• [between](/azure/data-explorer/kusto/query/betweenoperator)<br>• [now](/azure/data-explorer/kusto/query/nowfunction)<br><br>Parsing:<br>• [parse](/azure/data-explorer/kusto/query/parseoperator)<br>• [extract](/azure/data-explorer/kusto/query/extractfunction)<br>• [parse_json](/azure/data-explorer/kusto/query/parsejsonfunction)<br>• [parse_csv](/azure/data-explorer/kusto/query/parseoperator)<br>• [parse_path](/azure/data-explorer/kusto/query/parsepathfunction)<br>• [parse_url](/azure/data-explorer/kusto/query/parseurlfunction) |
+|Filter (`OR`) |A sample rule with `OR` conditions. The event can match any of the conditions. |[Filter (OR) example](#filter-or-example-arcsight) |[Filter (OR) example](#filter-or-example-kql) |• [String operators](/azure/data-explorer/kusto/query/datatypes-string-operators#operators-on-strings)<br>• [in](/azure/data-explorer/kusto/query/inoperator) |
+|Nested filter |A sample rule with nested filtering conditions. The rule includes the `MatchesFilter` statement, which also includes filtering conditions. |[Nested filter example](#nested-filter-example-arcsight) |[Nested filter example](#nested-filter-example-kql) |• [Sample KQL function](https://techcommunity.microsoft.com/t5/azure-sentinel/using-kql-functions-to-speed-up-analysis-in-azure-sentinel/ba-p/712381)<br>• [Sample parameter function](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/enriching-windows-security-events-with-parameterized-function/ba-p/1712564)<br>• [join](/azure/data-explorer/kusto/query/joinoperator?pivots=azuredataexplorer)<br>• [where](/azure/data-explorer/kusto/query/whereoperator) |
+|Active list (lookup) |A sample lookup rule that uses the `InActiveList` statement. |[Active list (lookup) example](#active-list-lookup-example-arcsight) |[Active list (lookup) example](#active-list-lookup-example-kql) |• A watchlist is the equivalent of the active list feature. Learn more about [watchlists](watchlists.md).<br>• [Other ways to implement lookups](https://techcommunity.microsoft.com/t5/azure-sentinel/implementing-lookups-in-azure-sentinel/ba-p/1091306) |
+|Correlation (matching) |A sample rule that defines a condition against a set of base events, using the `Matching Event` statement. |[Correlation (matching) example](#correlation-matching-example-arcsight) |[Correlation (matching) example](#correlation-matching-example-kql) |join operator:<br>• [join](/azure/data-explorer/kusto/query/joinoperator?pivots=azuredataexplorer)<br>• [join with time window](/azure/data-explorer/kusto/query/join-timewindow)<br>• [shuffle](/azure/data-explorer/kusto/query/shufflequery)<br>• [Broadcast](/azure/data-explorer/kusto/query/broadcastjoin)<br>• [Union](/azure/data-explorer/kusto/query/unionoperator?pivots=azuredataexplorer)<br><br>define statement:<br>• [let](/azure/data-explorer/kusto/query/letstatement)<br><br>Aggregation:<br>• [make_set](/azure/data-explorer/kusto/query/makeset-aggfunction)<br>• [make_list](/azure/data-explorer/kusto/query/makelist-aggfunction)<br>• [make_bag](/azure/data-explorer/kusto/query/make-bag-aggfunction)<br>• [pack](/azure/data-explorer/kusto/query/packfunction) |
+|Correlation (time window) |A sample rule that defines a condition against a set of base events, using the `Matching Event` statement, and uses the `Wait time` filter condition. |[Correlation (time window) example](#correlation-time-window-example-arcsight) |[Correlation (time window) example](#correlation-time-window-example-kql) |• [join](/azure/data-explorer/kusto/query/joinoperator?pivots=azuredataexplorer)<br>• [Microsoft Sentinel rules and join statement](https://techcommunity.microsoft.com/t5/azure-sentinel/azure-sentinel-correlation-rules-the-join-kql-operator/ba-p/1041500) |
+
+### Filter (AND) example: ArcSight
+
+Here's a sample filter rule with `AND` conditions in ArcSight.
++
+### Filter (AND) example: KQL
+
+Here's the filter rule with `AND` conditions in KQL.
+
+```kusto
+SecurityEvent
+| where EventID == 4728
+| where SubjectUserName =~ "AutoMatedService"
+| where isnotempty(SubjectDomainName)
+```
+This rule assumes that the Microsoft Monitoring Agent (MMA) or the Azure Monitor Agent (AMA) collects the Windows security events. Therefore, the rule uses the Microsoft Sentinel SecurityEvent table.
+
+Consider these best practices:
+- To optimize your queries, avoid case-insensitive operators such as `=~` when possible.
+- Use `==` if the value isn't case-sensitive.
+- Order the filters by starting with the `where` statement that filters out the most data.
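+
+For example, if the account name is stored with consistent casing, a case-sensitive variant of the preceding query avoids the cost of `=~`. This is a sketch; verify the exact casing in your own data before relying on `==`:
+
+```kusto
+SecurityEvent
+// The most selective filter comes first to reduce the rows scanned downstream.
+| where EventID == 4728
+| where isnotempty(SubjectDomainName)
+// == is case-sensitive and cheaper than the case-insensitive =~.
+| where SubjectUserName == "AutoMatedService"
+```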
+
+### Filter (OR) example: ArcSight
+
+Here's a sample filter rule with `OR` conditions in ArcSight.
++
+### Filter (OR) example: KQL
+
+Here are a few ways to write the filter rule with `OR` conditions in KQL.
+
+As a first option, use the `in` statement:
+
+```kusto
+SecurityEvent
+| where SubjectUserName in
+ ("Adm1","ServiceAccount1","AutomationServices")
+```
+As a second option, use the `or` statement:
+
+```kusto
+SecurityEvent
+| where SubjectUserName == "Adm1" or
+SubjectUserName == "ServiceAccount1" or
+SubjectUserName == "AutomationServices"
+```
+While both options are identical in performance, we recommend the first option, which is easier to read.
+
+### Nested filter example: ArcSight
+
+Here's a sample nested filter rule in ArcSight.
++
+Here's a rule for the `/All Filters/Soc Filters/Exclude Valid Users` filter.
++
+### Nested filter example: KQL
+
+Here are a few ways to write the nested filter rule in KQL.
+
+As a first option, use a direct filter with a `where` statement:
+
+```kusto
+SecurityEvent
+| where EventID == 4728
+| where isnotempty(SubjectDomainName) or
+isnotempty(TargetDomainName)
+| where SubjectUserName !~ "AutoMatedService"
+```
+As a second option, use a KQL function:
+
+1. Save the following query as a KQL function with the `ExcludeValidUsers` alias.
+
+ ```kusto
+ SecurityEvent
+ | where EventID == 4728
+ | where isnotempty(SubjectDomainName)
+ | where SubjectUserName =~ "AutoMatedService"
+ | project SubjectUserName
+ ```
+
+1. Use the following query to filter the `ExcludeValidUsers` alias.
+
+ ```kusto
+ SecurityEvent
+ | where EventID == 4728
+ | where isnotempty(SubjectDomainName) or
+ isnotempty(TargetDomainName)
+ | where SubjectUserName !in (ExcludeValidUsers)
+ ```
+
+As a third option, use a parameter function:
+
+1. Create a parameter function with `ExcludeValidUsers` as the name and alias.
+2. Define the parameters of the function. For example:
+
+ ```kusto
+    Tbl: (TimeGenerated:datetime, Computer:string,
+ EventID:string, SubjectDomainName:string,
+ TargetDomainName:string, SubjectUserName:string)
+ ```
+
+1. Define the following query as the parameter function body:
+
+ ```kusto
+ Tbl
+ | where SubjectUserName !~ "AutoMatedService"
+ ```
+
+1. Run the following query to invoke the parameter function:
+
+ ```kusto
+ let Events = (
+ SecurityEvent
+ | where EventID == 4728
+ );
+ ExcludeValidUsers(Events)
+ ```
+
+As a fourth option, use the `join` function:
+
+```kusto
+let events = (
+SecurityEvent
+| where EventID == 4728
+| where isnotempty(SubjectDomainName)
+or isnotempty(TargetDomainName)
+);
+let ExcludeValidUsers = (
+SecurityEvent
+| where EventID == 4728
+| where isnotempty(SubjectDomainName)
+| where SubjectUserName =~ "AutoMatedService"
+);
+events
+| join kind=leftanti ExcludeValidUsers on
+$left.SubjectUserName == $right.SubjectUserName
+```
+Considerations:
+- We recommend that you use a direct filter with a `where` statement (first option) due to its simplicity. For optimized performance, avoid using `join` (fourth option).
+- To optimize your queries, avoid the `=~` and `!~` case-insensitive operators when possible. Use the `==` and `!=` operators if the value isn't case-sensitive.
+
+### Active list (lookup) example: ArcSight
+
+Here's an active list (lookup) rule in ArcSight.
++
+### Active list (lookup) example: KQL
+
+This rule assumes that the Cyber-Ark Exception Accounts watchlist exists in Microsoft Sentinel with an Account field.
+
+```kusto
+let Activelist=(
+_GetWatchlist('Cyber-Ark Exception Accounts')
+| project Account );
+CommonSecurityLog
+| where DestinationUserName in (Activelist)
+| where DeviceVendor == "Cyber-Ark"
+| where DeviceAction == "Get File Request"
+| where DeviceCustomNumber1 != ""
+| project DeviceAction, DestinationUserName,
+TimeGenerated,SourceHostName,
+SourceUserName, DeviceEventClassID
+```
+Order the filters by starting with the `where` statement that filters out the most data.
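+
+For example, if most `CommonSecurityLog` records in your workspace come from vendors other than Cyber-Ark, moving the vendor filter to the top reduces the rows that the remaining filters scan. This reordering is a sketch; the best order depends on your own data profile:
+
+```kusto
+let Activelist=(
+_GetWatchlist('Cyber-Ark Exception Accounts')
+| project Account );
+CommonSecurityLog
+// Assumed to be the most selective filter in this workspace.
+| where DeviceVendor == "Cyber-Ark"
+| where DeviceAction == "Get File Request"
+| where DeviceCustomNumber1 != ""
+| where DestinationUserName in (Activelist)
+| project DeviceAction, DestinationUserName,
+TimeGenerated, SourceHostName,
+SourceUserName, DeviceEventClassID
+```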
+
+### Correlation (matching) example: ArcSight
+
+Here's a sample ArcSight rule that defines a condition against a set of base events, using the `Matching Event` statement.
++
+### Correlation (matching) example: KQL
+
+```kusto
+let event1 =(
+SecurityEvent
+| where EventID == 4728
+);
+let event2 =(
+SecurityEvent
+| where EventID == 4729
+);
+event1
+| join kind=inner event2
+on $left.TargetUserName==$right.TargetUserName
+```
+Best practices:
+- To optimize your query, ensure that the smaller table is on the left side of the `join` function.
+- If the left table is relatively small (up to 100,000 records), add `hint.strategy=broadcast` for better performance, as shown in the sketch below.
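+
+Here's the earlier matching query with the broadcast hint applied. This is a sketch that assumes `event1` is the smaller side of the join:
+
+```kusto
+let event1 =(
+SecurityEvent
+| where EventID == 4728
+);
+let event2 =(
+SecurityEvent
+| where EventID == 4729
+);
+// hint.strategy=broadcast suits a small left table (up to about 100,000 records).
+event1
+| join kind=inner hint.strategy=broadcast event2
+on $left.TargetUserName == $right.TargetUserName
+```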
+
+### Correlation (time window) example: ArcSight
+
+Here's a sample ArcSight rule that defines a condition against a set of base events, using the `Matching Event` statement, and uses the `Wait time` filter condition.
++
+### Correlation (time window) example: KQL
+
+```kusto
+let waittime = 10m;
+let lookback = 1d;
+let event1 = (
+SecurityEvent
+| where TimeGenerated > ago(waittime+lookback)
+| where EventID == 4728
+| project event1_time = TimeGenerated,
+event1_ID = EventID, event1_Activity= Activity,
+event1_Host = Computer, TargetUserName,
+event1_UPN=UserPrincipalName,
+AccountUsedToAdd = SubjectUserName
+);
+let event2 = (
+SecurityEvent
+| where TimeGenerated > ago(waittime)
+| where EventID == 4729
+| project event2_time = TimeGenerated,
+event2_ID = EventID, event2_Activity= Activity,
+event2_Host= Computer, TargetUserName,
+event2_UPN=UserPrincipalName,
+ AccountUsedToRemove = SubjectUserName
+);
+ event1
+| join kind=inner event2 on TargetUserName
+| where event2_time - event1_time < lookback
+| where tolong(event2_time - event1_time ) >=0
+| project delta_time = event2_time - event1_time,
+ event1_time, event2_time,
+ event1_ID,event2_ID,event1_Activity,
+ event2_Activity, TargetUserName, AccountUsedToAdd,
+ AccountUsedToRemove,event1_Host,event2_Host,
+ event1_UPN,event2_UPN
+```
+### Aggregation example: ArcSight
+
+Here's a sample ArcSight rule with aggregation settings: three matches within 10 minutes.
++
+### Aggregation example: KQL
+
+```kusto
+SecurityEvent
+| summarize Count = count() by SubjectUserName,
+SubjectDomainName, bin(TimeGenerated, 10m)
+// At least three matches within a 10-minute window, as in the ArcSight rule.
+| where Count >= 3
+```
+
+## Next steps
+
+In this article, you learned how to map your migration rules from ArcSight to Microsoft Sentinel.
+
+> [!div class="nextstepaction"]
+> [Migrate your SOAR automation](migration-arcsight-automation.md)
sentinel Migration Arcsight Historical Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-arcsight-historical-data.md
+
+ Title: "Microsoft Sentinel migration: Export ArcSight data to target platform | Microsoft Docs"
+description: Learn how to export your historical data from ArcSight.
+++ Last updated : 05/03/2022++
+# Export historical data from ArcSight
+
+This article describes how to export your historical data from ArcSight. After you complete the steps in this article, you can [select a target platform](migration-ingestion-target-platform.md) to host the exported data, and then [select an ingestion tool](migration-ingestion-tool.md) to migrate the data.
++
+You can export data from ArcSight in several ways. Your selection of an export method depends on the data volumes and the deployed ArcSight environment. You can export the logs to a local folder on the ArcSight server or to another server accessible by ArcSight.
+
+To export the data, use one of the following methods:
+- [ArcSight Event Data Transfer tool](#arcsight-event-data-transfer-tool): Use this option for large volumes of data, that is, terabytes (TB) of data.
+- [lacat utility](#lacat-utility): Use this option for data volumes smaller than 1 TB.
+
+## ArcSight Event Data Transfer tool
+
+Use the Event Data Transfer tool to export data from ArcSight Enterprise Security Manager (ESM) version 7.x. To export data from ArcSight Logger, use the [lacat utility](#lacat-utility).
+
+The Event Data Transfer tool retrieves event data from ESM, which allows you to combine analysis with unstructured data, in addition to the CEF data. The Event Data Transfer tool exports ESM events in three formats: CEF, CSV, and key-value pairs.
+
+To export data using the Event Data Transfer tool:
+
+1. [Install and configure the Event Data Transfer tool](https://www.microfocus.com/documentation/arcsight/arcsight-esm-7.6/ESM_AdminGuide/#ESM_AdminGuide/EventDataTransfer/EventDataTransfer.htm).
+1. Configure the log export to use the CSV format. For example, this command exports data recorded between 15:45 and 16:45 on May 4, 2016 to a CSV file:
+
+ ```
+ arcsight event_transfer -dtype File -dpath <***path***> -format csv -start "05/04/2016 15:45:00" -end "05/04/2016 16:45:00"
+ ```
+## lacat utility
+
+Use the lacat utility to export data from ArcSight Logger. lacat exports CEF records from a Logger archive file and prints the records to `stdout`. You can redirect the output to a file, or pipe it to tools such as `grep` or `awk` for further manipulation.
+
+To export data with the lacat utility:
+
+1. [Download the lacat utility](https://github.com/hpsec/lacat).
+1. Follow the examples in the lacat repository to learn how to run the script.
+
+## Next steps
+
+- [Select a target Azure platform to host the exported historical data](migration-ingestion-target-platform.md)
+- [Select a data ingestion tool](migration-ingestion-tool.md)
+- [Ingest historical data into your target platform](migration-export-ingest.md)
sentinel Migration Convert Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-convert-dashboards.md
+
+ Title: "Convert dashboards to Azure Monitor Workbooks | Microsoft Docs"
+description: Learn how to review, plan, and migrate your current dashboards to Azure workbooks.
+++ Last updated : 05/03/2022+++
+# Convert dashboards to Azure Workbooks
+
+Dashboards in your existing SIEM convert to [Microsoft Sentinel workbooks](monitor-your-data.md#use-built-in-workbooks), the Microsoft Sentinel adoption of Azure Monitor workbooks, which provide versatility in creating custom dashboards.
+
+This article describes how to review, plan, and convert your current workbooks to Azure Monitor Workbooks.
+
+## Review dashboards in your current SIEM
+
+ Review these considerations when designing your migration.
+
+- **Discover dashboards**. Gather information about your dashboards, including design, parameters, data sources, and other details. Identify the purpose or usage of each dashboard.
+- **Select**. Don't migrate all dashboards without consideration. Focus on dashboards that are critical and used regularly.
+- **Consider permissions**. Consider who the target users for the workbooks are. Microsoft Sentinel uses Azure workbooks, and [access is controlled](../azure-monitor/visualize/workbooks-access-control.md) using Azure role-based access control (RBAC). To create dashboards outside Azure, for example for business executives without Azure access, use a reporting tool such as Power BI.
+
+## Prepare for the dashboard conversion
+
+After reviewing your dashboards, do the following to prepare for your dashboard migration:
+
+- Review all of the visualizations in each dashboard. The dashboards in your current SIEM might contain several charts or panels. It's crucial to review the content of your short-listed dashboards to eliminate any unwanted visualizations or data.
+- Capture the dashboard design and interactivity.
+- Identify any design elements that are important to your users. For example, the layout of the dashboard, the arrangement of the charts or even the font size or color of the graphs.
+- Capture any interactivity such as drilldown, filtering, and others that you need to carry over to Azure Monitor Workbooks. We'll also discuss parameters and user inputs in the next step.
+- Identify required parameters or user inputs. In most cases, you need to define parameters for users to perform search, filtering, or scoping the results (for example, date range, account name and others). Hence, it's crucial to capture the details around parameters. Here are some of the key points to help you with collecting the parameter requirements:
+ - The type of parameter for users to perform selection or input. For example, date range, text, or others.
+ - How the parameters are represented, such as drop-down, text box, or others.
+ - The expected value format, for example, time, string, integer, or others.
+ - Other properties, such as the default value, allow multi-select, conditional visibility, or others.
+
+## Convert dashboards
+
+Perform the following tasks in Azure workbooks and Microsoft Sentinel to convert your dashboards.
+
+#### 1. Identify data sources
+
+Azure Monitor workbooks are [compatible with a large number of data sources](../azure-monitor/visualize/workbooks-data-sources.md). In most cases, use the Azure Monitor Logs data source with Kusto Query Language (KQL) queries to visualize the underlying logs in your Microsoft Sentinel workspace.
+
+#### 2. Construct or review KQL queries
+
+In this step, you mainly work with KQL to visualize your data. You can construct and test your queries on the Microsoft Sentinel **Logs** page before converting them to Azure Monitor workbooks. Before finalizing your KQL queries, always review and tune the queries to improve query performance. A sample workbook query is sketched after the resource links below. Optimized queries:
+- Run faster and reduce the overall duration of the query execution.
+- Have a smaller chance of being throttled or rejected.
+
+Learn how to optimize KQL queries:
+- [KQL query best practices](/azure/data-explorer/kusto/query/best-practices)
+- [Optimize queries in Azure Monitor Logs](../azure-monitor/logs/query-optimization.md)
+- [Optimizing KQL performance (webinar)](https://youtu.be/jN1Cz0JcLYU)
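+
+As an illustration, here's a minimal workbook-ready query that charts failed sign-ins over time. This sketch assumes the `SigninLogs` table is collected in your Microsoft Sentinel workspace:
+
+```kusto
+SigninLogs
+// Filter by time first to reduce the amount of data scanned.
+| where TimeGenerated > ago(7d)
+// ResultType "0" indicates a successful sign-in; keep failures only.
+| where ResultType != "0"
+| summarize FailedSignIns = count() by bin(TimeGenerated, 1h)
+| render timechart
+```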
+
+#### 3. Create or update the workbook
+
+[Create](tutorial-monitor-your-data.md#create-new-workbook) a new workbook, update an existing workbook, or clone an existing workbook so that you don't have to start from scratch. Also, specify how the data or visualizations will be represented, arranged, and [grouped](../azure-monitor/visualize/workbooks-groups.md). There are two common designs:
+
+- Vertical workbook
+- Tabbed workbook
+
+#### 4. Create or update workbook parameters or user inputs
+
+By the time you arrive at this stage, you should have [identified the required parameters](#prepare-for-the-dashboard-conversion). With parameters, you can collect input from consumers and reference the input in other parts of the workbook. This input is typically used to scope the result set, set the correct visualization, and build interactive reports and experiences.
+
+Workbooks allow you to control how your parameter controls are presented to consumers. For example, you can select whether the controls appear as a text box or a drop-down list, and whether they allow single or multiple selections. You can also select which values to use, from text, JSON, KQL, Azure Resource Graph, and more.
+
+Review the [supported workbook parameters](../azure-monitor/visualize/workbooks-parameters.md). You can reference these parameter values in other parts of workbooks either via bindings or value expansions.
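+
+For example, a drop-down parameter is often backed by a KQL query that returns the selectable values. Here's a sketch for a hypothetical **Computer** parameter, assuming the `SecurityEvent` table is present in your workspace:
+
+```kusto
+// Returns the list of computers that populates the drop-down control.
+SecurityEvent
+| distinct Computer
+| order by Computer asc
+```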
+
+#### 5. Create or update visualizations
+
+Workbooks provide a rich set of capabilities for visualizing your data. Review these detailed examples of each visualization type.
+
+- [Text](../azure-monitor/visualize/workbooks-text-visualizations.md)
+- [Charts](../azure-monitor/visualize/workbooks-chart-visualizations.md)
+- [Grids](../azure-monitor/visualize/workbooks-grid-visualizations.md)
+- [Tiles](../azure-monitor/visualize/workbooks-tile-visualizations.md)
+- [Trees](../azure-monitor/visualize/workbooks-tree-visualizations.md)
+- [Graphs](../azure-monitor/visualize/workbooks-graph-visualizations.md)
+- [Map](../azure-monitor/visualize/workbooks-map-visualizations.md)
+- [Honeycomb](../azure-monitor/visualize/workbooks-honey-comb.md)
+- [Composite bar](../azure-monitor/visualize/workbooks-composite-bar.md)
+
+#### 6. Preview and save the workbook
+
+Once you've saved your workbook, specify the parameters, if any exist, and validate the results. You can also try the [auto refresh](tutorial-monitor-your-data.md#refresh-your-workbook-data) or the print feature to [save as a PDF](monitor-your-data.md#print-a-workbook-or-save-as-pdf).
+
+## Next steps
+
+In this article, you learned how to convert your dashboards to Azure workbooks.
+
+> [!div class="nextstepaction"]
+> [Update SOC processes](migration-security-operations-center-processes.md)
sentinel Migration Export Ingest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-export-ingest.md
+
+ Title: "Microsoft Sentinel migration: Ingest data into target platform | Microsoft Docs"
+description: Learn how to ingest historical data into your selected target platform.
+++ Last updated : 05/03/2022++
+# Ingest historical data into your target platform
+
+In previous articles, you [selected a target platform](migration-ingestion-target-platform.md) for your historical data. You also selected [a tool to transfer your data](migration-ingestion-tool.md) and stored the historical data in a staging location. You can now start to ingest the data into the target platform.
+
+This article describes how to ingest your historical data into your selected target platform.
+
+## Export data from the legacy SIEM
+
+In general, SIEMs can export or dump data to a file in your local file system, so you can use this method to extract the historical data. It's also important to set up a staging location for your exported files. The data ingestion tool you use can then copy the files from the staging location to the target platform.
+
+This diagram shows the high-level export and ingestion process.
++
+To export data from your current SIEM, see one of the following sections:
+- [Export data from ArcSight](migration-arcsight-historical-data.md)
+- [Export data from Splunk](migration-splunk-historical-data.md)
+- [Export data from QRadar](migration-qradar-historical-data.md)
+
+## Ingest to Azure Data Explorer
+
+To ingest your historical data into Azure Data Explorer (ADX) (option 1 in the [diagram above](#export-data-from-the-legacy-siem)):
+
+1. [Install and configure LightIngest](/azure/data-explorer/lightingest) on the system where logs are exported, or install LightIngest on another system that has access to the exported logs. LightIngest supports Windows only.
+1. If you don't have an existing ADX cluster, create a new cluster and copy the connection string. Learn how to [set up ADX](/azure/data-explorer/create-cluster-database-portal).
+1. In ADX, create tables and define a schema for the CSV or JSON format (for QRadar). Learn how to create a table and define a schema [with sample data](/azure/data-explorer/ingest-sample-data) or [without sample data](/azure/data-explorer/one-click-table).
+1. [Run LightIngest](/azure/data-explorer/lightingest#run-lightingest) with the folder that contains the exported logs as the source, and the ADX connection string as the target. When you run LightIngest, provide the target ADX table name, set the `pattern` argument to `*.csv`, and set the format to `csv` (or `json` for QRadar).
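+
+    For example, the following invocation is a minimal sketch. The cluster, database, table, and folder names are placeholders; verify the argument names against the [LightIngest reference](/azure/data-explorer/lightingest).
+
+    ```
+    LightIngest "https://ingest-mycluster.westus.kusto.windows.net;Fed=True"
+      -database:HistoricalLogs
+      -table:SecurityEvents
+      -source:"C:\ExportedLogs"
+      -pattern:"*.csv"
+      -format:csv
+    ```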
+
+## Ingest data to Microsoft Sentinel Basic Logs
+
+To ingest your historical data into Microsoft Sentinel Basic Logs (option 2 in the [diagram above](#export-data-from-the-legacy-siem)):
+
+1. If you don't have an existing Log Analytics workspace, create a new workspace and [install Microsoft Sentinel](quickstart-onboard.md#enable-microsoft-sentinel-).
+1. [Create an App registration to authenticate against the API](../azure-monitor/logs/tutorial-custom-logs.md#configure-application).
+1. [Create a data collection endpoint](../azure-monitor/logs/tutorial-custom-logs.md#create-data-collection-endpoint). This endpoint acts as the API endpoint that accepts the data.
+1. [Create a custom log table](../azure-monitor/logs/tutorial-custom-logs.md#add-custom-log-table) to store the data, and provide a data sample. In this step, you can also define a transformation before the data is ingested.
+1. [Collect information from the data collection rule](../azure-monitor/logs/tutorial-custom-logs.md#collect-information-from-dcr) and assign permissions to the rule.
+1. [Change the table from Analytics to Basic Logs](../azure-monitor/logs/basic-logs-configure.md).
+1. Run the [Custom Log Ingestion script](https://github.com/Azure/Azure-Sentinel/tree/master/Tools/CustomLogsIngestion-DCE-DCR). The script asks for the following details:
+ - Path to the log files to ingest
+ - Azure AD tenant ID
+ - Application ID
+ - Application secret
+ - DCE endpoint
+ - DCR immutable ID
+ - Data stream name from the DCR
+
+ The script returns the number of events that have been sent to the workspace.
+
+## Ingest to Azure Blob Storage
+
+To ingest your historical data into Azure Blob Storage (option 3 in the [diagram above](#export-data-from-the-legacy-siem)):
+
+1. [Install and configure AzCopy](../storage/common/storage-use-azcopy-v10.md) on the system to which you exported the logs. Alternatively, install AzCopy on another system that has access to the exported logs.
+1. [Create an Azure Blob Storage account](../storage/common/storage-account-create.md) and copy the authorized [Azure Active Directory](../storage/common/storage-use-azcopy-v10.md#option-1-use-azure-active-directory) credentials or [Shared Access Signature](../storage/common/storage-use-azcopy-v10.md#option-2-use-a-sas-token) token.
+1. [Run AzCopy](../storage/common/storage-use-azcopy-v10.md#run-azcopy) with the folder path that includes the exported logs as the source, and the Azure Blob Storage connection string as the output.
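+
+    For example, the following command is a minimal sketch. The local folder, storage account, and container names are placeholders, and `<SAS>` stands for a Shared Access Signature token:
+
+    ```
+    azcopy copy "C:\ExportedLogs" "https://mystorageaccount.blob.core.windows.net/historicallogs?<SAS>" --recursive
+    ```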
+
+## Next steps
+
+In this article, you learned how to ingest your data into the target platform.
+
+> [!div class="nextstepaction"]
+> [Convert your dashboards to workbooks](migration-convert-dashboards.md)
sentinel Migration Ingestion Target Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-ingestion-target-platform.md
+
+ Title: "Microsoft Sentinel migration: Select a target Azure platform to host exported data | Microsoft Docs"
+description: Select a target Azure platform to host the exported historical data
+++ Last updated : 05/03/2022++
+# Select a target Azure platform to host the exported historical data
+
+One of the important decisions you make during your migration process is where to store your historical data. To make this decision, you need to understand and be able to compare the various target platforms.
+
+This article compares target platforms in terms of performance, cost, usability, and management overhead.
+
+> [!NOTE]
+> The considerations in this table only apply to historical log migration, and don't apply in other scenarios, such as long-term retention.
+
+| |[Basic Logs/Archive](../azure-monitor/logs/basic-logs-configure.md) |[Azure Data Explorer (ADX)](/azure/data-explorer/data-explorer-overview) |[Azure Blob Storage](../storage/blobs/storage-blobs-overview.md) |[ADX + Azure Blob Storage](../azure-monitor/logs/azure-data-explorer-query-storage.md) |
+||||||
+|**Capabilities**: |• Apply most of the existing Azure Monitor Logs experiences at a lower cost.<br>• Basic Logs are retained for eight days, and are then automatically transferred to the archive (according to the original retention period).<br>• Use [search jobs](../azure-monitor/logs/search-jobs.md) to search across petabytes of data and find specific events.<br>• For deep investigations on a specific time range, [restore data from the archive](../azure-monitor/logs/restore.md). The data is then available in the hot cache for further analytics. |• Both ADX and Microsoft Sentinel use the Kusto Query Language (KQL), allowing you to query, aggregate, or correlate data in both platforms. For example, you can run a KQL query from Microsoft Sentinel to [join data stored in ADX with data stored in Log Analytics](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md).<br>• With ADX, you have substantial control over the cluster size and configuration. For example, you can create a larger cluster to achieve higher ingestion throughput, or create a smaller cluster to control your costs. |• Blob storage is optimized for storing massive amounts of unstructured data.<br>• Offers competitive costs.<br>• Suitable for a scenario where your organization doesn't prioritize accessibility or performance, such as when the organization must align with compliance or audit requirements. |• Data is stored in a blob storage, which is low in costs.<br>• You use ADX to query the data in KQL, allowing you to easily access the data. [Learn how to query Azure Monitor data with ADX](../azure-monitor/logs/azure-data-explorer-query-storage.md). |
+|**Usability**: |**Great**<br><br>The archive and search options are simple to use and accessible from the Microsoft Sentinel portal. However, the data isn't immediately available for queries. You need to perform a search to retrieve the data, which might take some time, depending on the amount of data being scanned and returned. |**Good**<br><br>Fairly easy to use in the context of Microsoft Sentinel. For example, you can use an Azure workbook to visualize data spread across both Microsoft Sentinel and ADX. You can also query ADX data from the Microsoft Sentinel portal using the [ADX proxy](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md). |**Poor**<br><br>With historical data migrations, you might have to deal with millions of files, and exploring the data becomes a challenge. |**Fair**<br><br>While using the `externaldata` operator is very challenging with large numbers of blobs to reference, using external ADX tables eliminates this issue. The external table definition understands the blob storage folder structure, and allows you to transparently query the data contained in many different blobs and folders. |
+|**Management overhead**: |**Fully managed**<br><br>The search and archive options are fully managed and don't add management overhead. |**High**<br><br>ADX is external to Microsoft Sentinel, which requires monitoring and maintenance. |**Low**<br><br>While this platform requires little maintenance, selecting this platform adds monitoring and configuration tasks, such as setting up lifecycle management. |**Medium**<br><br>With this option, you maintain and monitor ADX and Azure Blob Storage, both of which are external components to Microsoft Sentinel. While ADX can be shut down at times, consider the extra management overhead with this option. |
+|**Performance**: |**Medium**<br><br>You typically interact with Basic Logs in the archive using [search jobs](../azure-monitor/logs/search-jobs.md), which are suitable when you want to maintain access to the data, but don't need immediate access to the data. |**High to low**<br><br>• The query performance of an ADX cluster depends on the number of nodes in the cluster, the cluster virtual machine SKU, data partitioning, and more.<br>• As you add nodes to the cluster, the performance improves, with added cost.<br>• If you use ADX, we recommend that you configure your cluster size to balance performance and cost. This configuration depends on your organization's needs, including how fast your migration needs to complete, how often the data is accessed, and the expected response time. |**Low**<br><br>Offers two performance tiers: Premium or Standard. Although both tiers are an option for long-term storage, Standard is more cost-efficient. Learn about [performance and scalability limits](../storage/common/scalability-targets-standard-account.md). |**Low**<br><br>Because the data resides in Blob Storage, the performance is limited by that platform. |
+|**Cost**: |**High**<br><br>The cost is composed of two components:<br>• **Ingestion cost**. Every GB of data ingested into Basic Logs is subject to Microsoft Sentinel and Azure Monitor Logs ingestion costs, which sum up to approximately $1/GB. See the [pricing details](https://azure.microsoft.com/pricing/details/microsoft-sentinel/).<br>• **Archival cost**. The cost for data in the archive tier sums up to approximately $0.02/GB per month. See the [pricing details](https://azure.microsoft.com/pricing/details/monitor/).<br>In addition to these two cost components, if you need frequent access to the data, extra costs apply when you access data via search jobs. |**High to low**<br><br>• Because ADX is a cluster of virtual machines, you're charged based on compute, storage, and networking usage, plus an ADX markup (see the [pricing details](https://azure.microsoft.com/pricing/details/data-explorer/)). Therefore, the more nodes you add to your cluster and the more data you store, the higher the cost.<br>• ADX also offers autoscaling capabilities to adapt to the workload on demand. ADX can also benefit from reserved instance pricing. You can run your own cost calculations in the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/). |**Low**<br><br>With an optimal setup, Azure Blob Storage has the lowest costs. In addition, the data works in an automatic lifecycle, so older blobs move into lower-cost access tiers. |**Low**<br><br>The cluster size doesn't affect the cost, because ADX only acts as a proxy. In addition, you need to run the cluster only when you need quick and simple access to the data. |
+|**How to access data**: |[Search jobs](search-jobs.md) |Direct KQL queries |[externaldata](/azure/data-explorer/kusto/query/externaldata-operator) |Modified KQL queries (external tables) |
+|**Scenario**: |**Occasional access**<br><br>Relevant in scenarios where you don't need to run heavy analytics or trigger analytics rules. |**Frequent access**<br><br>Relevant in scenarios where you need to access the data frequently, and need to control how the cluster is sized and configured. |**Compliance/audit**<br><br>• Optimal for storing massive amounts of unstructured data.<br>• Relevant in scenarios where you don't need quick access to the data or high performance, such as for compliance or audits. |**Occasional access**<br><br>Relevant in scenarios where you want to benefit from the low cost of Azure Blob Storage, and maintain relatively quick access to the data. |
+|**Complexity**: |Very low |Medium |Low |High |
+|**Readiness**: |Public Preview |GA |GA |GA |
+
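+To illustrate the ADX + Azure Blob Storage option, here's a sketch of a KQL query over an external table. `ArchivedSecurityEvents` is a hypothetical external table defined over the blob storage folder structure:
+
+```kusto
+external_table("ArchivedSecurityEvents")
+// Scope the scan to the time range under investigation.
+| where TimeGenerated between (datetime(2021-01-01) .. datetime(2021-02-01))
+| where EventID == 4728
+| count
+```
+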
+## General considerations
+
+Now that you know more about the available target platforms, review these main factors to finalize your decision.
+
+- [How will your organization use the ingested logs?](#use-of-ingested-logs)
+- [How fast does the migration need to run?](#migration-speed)
+- [What is the amount of data to ingest?](#amount-of-data)
+- What are the estimated migration costs, during and after migration? See the [platform comparison](#select-a-target-azure-platform-to-host-the-exported-historical-data) to compare the costs.
+
+### Use of ingested logs
+
+Define how your organization will use the ingested logs to guide your selection of the ingestion platform.
+
+Consider these three general scenarios:
+
+- Your organization needs to keep the logs only for compliance or audit purposes. In this case, your organization will rarely access the data. Even if your organization accesses the data, high performance or ease of use aren't a priority.
+- Your organization needs to retain the logs so that your teams can access the logs easily and fairly quickly.
+- Your organization needs to retain the logs so that your teams can access the logs occasionally. Performance and ease of use are secondary.
+
+See the [platform comparison](#select-a-target-azure-platform-to-host-the-exported-historical-data) to understand which platform suits each of these scenarios.
+
+### Migration speed
+
+In some scenarios, you might need to meet a tight deadline, for example, your organization might need to urgently move from the previous SIEM due to a license expiration event.
+
+Review the components and factors that determine the speed of your migration.
+- [Data source](#data-source)
+- [Compute power](#compute-power)
+- [Target platform](#target-platform)
+
+#### Data source
+
+The data source is typically a local file system or cloud storage, for example, S3. A server's storage performance depends on multiple factors, such as disk technology (SSD vs HDD), the nature of the IO requests, and the size of each request.
+
+For example, Azure virtual machine performance ranges from 30 MB per second on smaller VM SKUs, to 20 GB per second for some of the storage-optimized SKUs using NVM Express (NVMe) disks. Learn how to [design your Azure VM for high storage performance](/azure/virtual-machines/premium-storage-performance). You can also apply most concepts to on-premises servers.
+
+#### Compute power
+
+In some cases, even if your disk is capable of copying your data quickly, compute power is the bottleneck in the copy process. In these cases, you can choose one of these scaling options:
+
+- **Scale vertically**. You increase the power of a single server by adding more CPUs, or increase the CPU speed.
+- **Scale horizontally**. You add more servers, which increases the parallelism of the copy process.
+
+#### Target platform
+
+Each of the target platforms discussed in this section has a different performance profile.
+
+- **Azure Monitor Basic logs**. By default, Basic logs can be pushed to Azure Monitor at a rate of approximately 1 GB per minute. This rate allows you to ingest approximately 1.5 TB per day or 43 TB per month.
+- **Azure Data Explorer**. Ingestion performance varies, depending on the size of the cluster you provision, and the batching settings you apply. [Learn about ingestion best practices](/azure/data-explorer/kusto/management/ingestion-faq), including performance and monitoring.
+- **Azure Blob Storage**. The performance of an Azure Blob Storage account can vary greatly depending on the number and size of the files, job size, concurrency, and so on. [Learn how to optimize AzCopy performance with Azure Storage](../storage/common/storage-use-azcopy-optimize.md).
+
+### Amount of data
+
+The amount of data is the main factor that affects the duration of the migration process. You should therefore consider how to set up your environment depending on your data set.
+
+To determine the minimum duration of the migration and where the bottleneck could be, consider the amount of data and the ingestion speed of the target platform. For example, say you select a target platform that can ingest 1 GB per second, and you have to migrate 100 TB. In this case, your migration takes a minimum of 100,000 GB divided by the 1 GB per second speed, which is 100,000 seconds. Divide the result by 3,600 seconds per hour, which calculates to approximately 28 hours. This calculation is correct if the rest of the components in the pipeline, such as the local disk, the network, and the virtual machines, can perform at a speed of 1 GB per second.
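+
+As a quick sketch of the arithmetic:
+
+```
+100 TB = 100,000 GB
+100,000 GB / 1 GB per second = 100,000 seconds
+100,000 seconds / 3,600 seconds per hour ≈ 28 hours
+```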
+
+## Next steps
+
+In this article, you learned how to select a target Azure platform to host your exported historical data.
+
+> [!div class="nextstepaction"]
+> [Select a data ingestion tool](migration-ingestion-tool.md)
sentinel Migration Ingestion Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-ingestion-tool.md
+
+ Title: "Microsoft Sentinel migration: Select a data ingestion tool | Microsoft Docs"
+description: Select a tool to transfer your historical data to the selected target platform.
+++ Last updated : 05/03/2022++
+# Select a data ingestion tool
+
+After you [select a target platform](migration-ingestion-target-platform.md) for your historical data, the next step is to select a tool to transfer your data.
+
+This article describes a set of different tools used to transfer your historical data to the selected target platform. This table lists the tools available for each target platform, and general tools to help you with the ingestion process.
+
+|Azure Monitor Basic Logs/Archive |Azure Data Explorer |Azure Blob Storage |General tools |
+|||||
+|• [Azure Monitor custom log ingestion tool](#azure-monitor-custom-log-ingestion-tool)<br>• [Direct API](#direct-api) |• [LightIngest](#lightingest)<br>• [Logstash](#logstash) |• [Azure Data Factory or Azure Synapse](#azure-data-factory-or-azure-synapse)<br>• [AzCopy](#azcopy) |• [Azure Data Box](#azure-data-box)<br>• [SIEM data migration accelerator](#siem-data-migration-accelerator) |
+
+## Azure Monitor Basic Logs/Archive
+
+Before you ingest data to Azure Monitor Basic Logs or Archive, ensure that the table you're writing to is [configured as Basic Logs](../azure-monitor/logs/basic-logs-configure.md#check-table-configuration) so that you benefit from lower ingestion prices. Review the [Azure Monitor custom log ingestion tool](#azure-monitor-custom-log-ingestion-tool) and the [direct API](#direct-api) method for Azure Monitor Basic Logs.
+
+### Azure Monitor custom log ingestion tool
+
+The [custom log ingestion tool](https://github.com/Azure/Azure-Sentinel/tree/master/Tools/CustomLogsIngestion-DCE-DCR) is a PowerShell script that sends custom data to an Azure Monitor Logs workspace. You point the script to the folder where your log files reside, and the script pushes the files from that folder to your workspace. The script accepts a CSV or JSON format for log files.
+
+### Direct API
+
+With this option, you [ingest your custom logs into Azure Monitor Logs](../azure-monitor/logs/tutorial-custom-logs.md). You ingest the logs with a PowerShell script that uses a REST API. Alternatively, you can use any other programming language to perform the ingestion, and you can use other Azure services to abstract the compute layer, such as Azure Functions or Azure Logic Apps.
+
+## Azure Data Explorer
+
+You can [ingest data to Azure Data Explorer](/azure/data-explorer/ingest-data-overview) (ADX) in several ways.
+
+The ingestion methods that ADX accepts are based on different components:
+- SDKs for different languages, such as .NET, Go, Python, Java, NodeJS, and APIs.
+- Managed pipelines, such as Event Grid (Blob Storage events), Event Hubs, and Azure Data Factory.
+- Connectors or plugins, such as Logstash, Kafka, Power Automate, and Apache Spark.
+
+Review [LightIngest](#lightingest) and [Logstash](#logstash), two methods that are better tailored to the data migration use case.
+
+### LightIngest
+
+ADX has developed the [LightIngest utility](/azure/data-explorer/lightingest) specifically for the historical data migration use case. You can use LightIngest to copy data from a local file system or Azure Blob Storage to ADX.
+
+Here are a few main benefits and capabilities of LightIngest:
+
+- Because there's no time constraint on ingestion duration, LightIngest is most useful when you want to ingest large amounts of data.
+- LightIngest is useful when you want to query records according to the time they were created, and not the time they were ingested.
+- You don't need to deal with complex sizing for LightIngest, because the utility doesn't perform the actual copy. LightIngest informs ADX about the blobs that need to be copied, and ADX copies the data.
+
+If you choose LightIngest, review these tips and best practices.
+
+- To speed up your migration and reduce costs, increase the size of your ADX cluster to create more available nodes for ingestion. Decrease the size once the migration is over.
+- For more efficient queries after you ingest the data to ADX, ensure that the copied data uses the timestamp of the original events, not the timestamp from when the data is copied to ADX. You provide the timestamp to LightIngest as part of the file path or file name, using the [CreationTime property](/azure/data-explorer/lightingest#how-to-ingest-data-using-creationtime).
+- If your path or file names don't include a timestamp, you can still instruct ADX to organize the data using a [partitioning policy](/azure/data-explorer/kusto/management/partitioningpolicy).
+
+### Logstash
+
+[Logstash](https://www.elastic.co/products/logstash) is an open source, server-side data processing pipeline that ingests data from many sources simultaneously, transforms the data, and then sends the data to your favorite "stash". Learn how to [ingest data from Logstash to Azure Data Explorer](/azure/data-explorer/ingest-data-logstash). Logstash runs only on Windows machines.
+
+To optimize performance, [configure the Logstash tier size](https://www.elastic.co/guide/en/logstash/current/deploying-and-scaling.html) according to the events per second. We recommend that you use [LightIngest](#lightingest) wherever possible, because LightIngest relies on the ADX cluster computing to perform the copy.
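+
+For illustration, a Logstash pipeline that sends data to ADX uses the `kusto` output plugin. The following output section is a sketch with placeholder values; verify the setting names against the logstash-output-kusto plugin documentation:
+
+```
+output {
+  kusto {
+    # Local path where the plugin stages files before ingestion.
+    path => "/tmp/kusto/%{+YYYY-MM-dd-HH-mm}.txt"
+    ingest_url => "https://ingest-mycluster.westus.kusto.windows.net"
+    app_id => "<application id>"
+    app_key => "<application key>"
+    app_tenant => "<tenant id>"
+    database => "HistoricalLogs"
+    table => "SecurityEvents"
+    json_mapping => "SecurityEventsMapping"
+  }
+}
+```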
+
+## Azure Blob Storage
+
+You can ingest data to Azure Blob Storage in several ways.
+- [Azure Data Factory or Azure Synapse](../data-factory/connector-azure-blob-storage.md)
+- [AzCopy](../storage/common/storage-use-azcopy-v10.md)
+- [Azure Storage Explorer](/architecture/data-science-process/move-data-to-azure-blob-using-azure-storage-explorer)
+- [Python](../storage/blobs/storage-quickstart-blobs-python.md)
+- [SSIS](/azure/architecture/data-science-process/move-data-to-azure-blob-using-ssis)
+
+Review the Azure Data Factory (ADF) and Azure Synapse methods, which are better tailored to the data migration use case.
+
+### Azure Data Factory or Azure Synapse
+
+To use the Copy activity in Azure Data Factory (ADF) or Synapse pipelines:
+1. Create and configure a self-hosted integration runtime. This component is responsible for copying the data from your on-premises host.
+1. Create linked services for the source data store ([file system](../data-factory/connector-file-system.md?tabs=data-factory#create-a-file-system-linked-service-using-ui)) and the sink data store ([Blob Storage](../data-factory/connector-azure-blob-storage.md?tabs=data-factory#create-an-azure-blob-storage-linked-service-using-ui)).
+1. To copy the data, use the [Copy data tool](../data-factory/quickstart-create-data-factory-copy-data-tool.md). Alternatively, you can use other methods such as PowerShell, the Azure portal, or a .NET SDK.
+
+### AzCopy
+
+[AzCopy](../storage/common/storage-use-azcopy-v10.md) is a simple command-line utility that copies files to or from storage accounts. AzCopy is available for Windows, Linux, and macOS. Learn how to [copy on-premises data to Azure Blob storage with AzCopy](../storage/common/storage-use-azcopy-v10.md).
+
+You can also use these options to copy the data:
+- Learn how to [optimize the performance](../storage/common/storage-use-azcopy-optimize.md) of AzCopy.
+- Learn how to [configure AzCopy](../storage/common/storage-ref-azcopy-configuration-settings.md).
+- Learn how to use the [copy command](../storage/common/storage-ref-azcopy-copy.md).
+
+## Azure Data Box
+
+In a scenario where the source SIEM doesn't have good connectivity to Azure, ingesting the data using the tools reviewed in this section might be slow or even impossible. To address this scenario, you can use [Azure Data Box](../databox/data-box-overview.md) to copy the data locally from the customer's data center into an appliance, and then ship that appliance to an Azure data center. While Azure Data Box isn't a replacement for AzCopy or LightIngest, you can use this tool to accelerate the data transfer between the customer data center and Azure.
+
+Azure Data Box offers three different SKUs, depending on the amount of data to migrate:
+
+- [Data Box Disk](../databox/data-box-disk-overview.md)
+- [Data Box](../databox/data-box-overview.md)
+- [Data Box Heavy](../databox/data-box-heavy-overview.md)
+
+After you complete the migration, the data is available in a storage account under one of your Azure subscriptions. You can then use [AzCopy](#azcopy), [LightIngest](#lightingest), or [ADF](#azure-data-factory-or-azure-synapse) to ingest data from the storage account.
+
+## SIEM data migration accelerator
+
+In addition to selecting an ingestion tool, your team needs to invest time in setting up the foundation environment. To ease this process, you can use the [SIEM data migration accelerator](https://aka.ms/siemdatamigration), which automates the following tasks:
+
+- Deploys a Windows virtual machine that will be used to move the logs from the source to the target platform
+- Downloads and extracts the following tools into the virtual machine desktop:
+ - [LightIngest](#lightingest): Used to migrate data to ADX
+ - [Azure Monitor Custom log ingestion tool](#azure-monitor-custom-log-ingestion-tool): Used to migrate data to Log Analytics
+ - [AzCopy](#azcopy): Used to migrate data to Azure Blob Storage
+- Deploys the target platform that will host your historical logs:
+ - Azure Storage account (Azure Blob Storage)
+ - Azure Data Explorer cluster and database
+ - Azure Monitor Logs workspace (Basic Logs; enabled with Microsoft Sentinel)
+
+To use the SIEM data migration accelerator:
+
+1. From the [SIEM data migration accelerator page](https://aka.ms/siemdatamigration), click **Deploy to Azure** at the bottom of the page, and authenticate.
+1. Select **Basics**, select your resource group and location, and then select **Next**.
+1. Select **Migration VM**, and do the following:
+ - Type the virtual machine name, username and password.
+ - Select an existing vNet or create a new vNet for the virtual machine connection.
+ - Select the virtual machine size.
+1. Select **Target platform** and do one of the following:
+ - Skip this step.
+ - Provide the ADX cluster and database name, SKU, and number of nodes.
+ - For Azure Blob Storage accounts, select an existing account. If you don't have an account, provide a new account name, type, and redundancy.
+ - For Azure Monitor Logs, type the name of the new workspace.
+
+## Next steps
+
+In this article, you learned how to select a tool to ingest your data into the target platform.
+
+> [!div class="nextstepaction"]
+> [Ingest your data](migration-export-ingest.md)
sentinel Migration Qradar Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-qradar-automation.md
+
+ Title: Migrate IBM Security QRadar SOAR automation to Microsoft Sentinel | Microsoft Docs
+description: Learn how to identify SOAR use cases, and how to migrate your QRadar SOAR automation to Microsoft Sentinel.
+++ Last updated : 05/03/2022++
+# Migrate IBM Security QRadar SOAR automation to Microsoft Sentinel
+
+Microsoft Sentinel provides Security Orchestration, Automation, and Response (SOAR) capabilities with [automation rules](automate-incident-handling-with-automation-rules.md) and [playbooks](tutorial-respond-threats-playbook.md). Automation rules automate incident handling and response, and playbooks run predetermined sequences of actions to respond to and remediate threats. This article discusses how to identify SOAR use cases, and how to migrate your IBM Security QRadar SOAR automation to Microsoft Sentinel.
+
+Automation rules simplify complex workflows for your incident orchestration processes, and allow you to centrally manage your incident handling automation.
+
+With automation rules, you can:
+- Perform simple automation tasks without necessarily using playbooks. For example, you can assign incidents, tag them, change their status, and close them.
+- Automate responses for multiple analytics rules at once.
+- Control the order of actions that are executed.
+- Run playbooks for those cases where more complex automation tasks are necessary.
+
+## Identify SOAR use cases
+
+Here's what you need to think about when migrating SOAR use cases from IBM Security QRadar SOAR.
+- **Use case quality**. Choose good use cases for automation. Use cases should be based on procedures that are clearly defined, with minimal variation and a low false-positive rate. Automation should work with efficient use cases.
+- **Manual intervention**. Automated responses can have wide-ranging effects. High-impact automations should require human input to confirm high-impact actions before they're taken.
+- **Binary criteria**. To increase response success, decision points within an automated workflow should be as limited as possible, with binary criteria. Binary criteria reduce the need for human intervention and enhance outcome predictability.
+- **Accurate alerts or data**. Response actions are dependent on the accuracy of signals such as alerts. Alerts and enrichment sources should be reliable. Microsoft Sentinel resources such as watchlists and reliable threat intelligence can enhance reliability.
+- **Analyst role**. While automation where possible is great, reserve more complex tasks for analysts, and provide them with the opportunity for input into workflows that require validation. In short, response automation should augment and extend analyst capabilities.
+
+## Migrate SOAR workflow
+
+This section shows how key SOAR concepts in IBM Security QRadar SOAR translate to Microsoft Sentinel components. The section also provides general guidelines for how to migrate each step or component in the SOAR workflow.
++
+|Step (in diagram) |IBM Security QRadar SOAR |Microsoft Sentinel |
+||||
+|1 |Define rules and conditions. |Define automation rules. |
+|2 |Execute ordered activities. |Execute automation rules containing multiple playbooks. |
+|3 |Execute selected workflows. |Execute other playbooks according to tags applied by playbooks that were executed previously. |
+|4 |Post data to message destinations. |Execute code snippets using inline actions in Logic Apps. |
+
+## Map SOAR components
+
+Review which Microsoft Sentinel or Azure Logic Apps features map to the main QRadar SOAR components.
+
+|QRadar |Microsoft Sentinel/Azure Logic Apps |
+|||
+|Rules |[Analytics rules](detect-threats-built-in.md#use-built-in-analytics-rules) attached to playbooks or automation rules |
+|Gateway |[Condition control](../logic-apps/logic-apps-control-flow-conditional-statement.md) |
+|Scripts |[Inline code](../logic-apps/logic-apps-add-run-inline-code.md) |
+|Custom action processors |[Custom API calls](../logic-apps/logic-apps-create-api-app.md) in Azure Logic Apps or third party connectors |
+|Functions |[Azure Function connector](../logic-apps/logic-apps-azure-functions.md) |
+|Message destinations |[Azure Logic Apps with Azure Service Bus](../connectors/connectors-create-api-servicebus.md) |
+|IBM X-Force Exchange |• [Automation > Templates tab](use-playbook-templates.md)<br>• [Content hub catalog](sentinel-solutions-catalog.md)<br>• [GitHub](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Block-OnPremADUser) |
+
+## Operationalize playbooks and automation rules in Microsoft Sentinel
+
+Most of the playbooks that you use with Microsoft Sentinel are available in either the [Automation > Templates tab](use-playbook-templates.md), the [Content hub catalog](sentinel-solutions-catalog.md), or [GitHub](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Block-OnPremADUser). In some cases, however, you might need to create playbooks from scratch or from existing templates.
+
+You typically build your custom logic apps using the Azure Logic Apps Designer. The Logic Apps code is based on [Azure Resource Manager (ARM) templates](../azure-resource-manager/templates/overview.md), which facilitate development, deployment, and portability of Azure Logic Apps across multiple environments. To convert your custom playbook into a portable ARM template, you can use the [ARM template generator](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/export-microsoft-sentinel-playbooks-or-azure-logic-apps-with/ba-p/3275898).
+
+Use these resources for cases where you need to build your own playbooks either from scratch or from existing templates.
+- [Automate incident handling in Microsoft Sentinel](automate-incident-handling-with-automation-rules.md)
+- [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md)
+- [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](tutorial-respond-threats-playbook.md)
+- [How to use Microsoft Sentinel for Incident Response, Orchestration and Automation](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/how-to-use-azure-sentinel-for-incident-response-orchestration/ba-p/2242397)
+- [Adaptive Cards to enhance incident response in Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/using-microsoft-teams-adaptive-cards-to-enhance-incident/ba-p/3330941)
+
+## SOAR post migration best practices
+
+Here are best practices you should take into account after your SOAR migration:
+
+- After you migrate your playbooks, test the playbooks extensively to ensure that the migrated actions work as expected.
+- Periodically review your automations to explore ways to further simplify or enhance your SOAR. Microsoft Sentinel constantly adds new connectors and actions that can help you to further simplify or increase the effectiveness of your current response implementations.
+- Monitor the performance of your playbooks using the [Playbooks health monitoring workbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/what-s-new-monitoring-your-logic-apps-playbooks-in-azure/ba-p/1873211).
+- Use managed identities and service principals: Authenticate against various Azure services within your Logic Apps, store the secrets in Azure Key Vault, and obscure the output of the flow execution. We also recommend that you [monitor the activities of these service principals](https://techcommunity.microsoft.com/t5/azure-sentinel/non-interactive-logins-minimizing-the-blind-spot/ba-p/2287932).
+
+## Next steps
+
+In this article, you learned how to map your SOAR automation from IBM Security QRadar SOAR to Microsoft Sentinel.
+
+> [!div class="nextstepaction"]
+> [Export your historical data](migration-qradar-historical-data.md)
sentinel Migration Qradar Detection Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-qradar-detection-rules.md
+
+ Title: Migrate QRadar detection rules to Microsoft Sentinel | Microsoft Docs
+description: Identify, compare, and migrate your QRadar detection rules to Microsoft Sentinel built-in rules.
+++ Last updated : 05/03/2022++
+# Migrate QRadar detection rules to Microsoft Sentinel
+
+This article describes how to identify, compare, and migrate your QRadar detection rules to Microsoft Sentinel built-in rules.
+
+## Identify and migrate rules
+
+Microsoft Sentinel uses machine learning analytics to create high-fidelity and actionable incidents, and some of your existing detections may be redundant in Microsoft Sentinel. Therefore, don't migrate all of your detection and analytics rules blindly. Review these considerations as you identify your existing detection rules.
+
+- Make sure to select use cases that justify rule migration, considering business priority and efficiency.
+- Check that you [understand Microsoft Sentinel rule types](detect-threats-built-in.md#view-built-in-detections).
+- Check that you understand the [rule terminology](#compare-rule-terminology).
+- Review any rules that haven't triggered any alerts in the past 6-12 months, and determine whether they're still relevant.
+- Eliminate low-level threats or alerts that you routinely ignore.
+- Use existing functionality, and check whether Microsoft Sentinel's [built-in analytics rules](https://github.com/Azure/Azure-Sentinel/tree/master/Detections) might address your current use cases. Because Microsoft Sentinel uses machine learning analytics to produce high-fidelity and actionable incidents, it's likely that some of your existing detections won't be required anymore.
+- Confirm connected data sources and review your data connection methods. Revisit data collection conversations to ensure data depth and breadth across the use cases you plan to detect.
+- Explore community resources such as the [SOC Prime Threat Detection Marketplace](https://my.socprime.com/tdm/) to check whether your rules are available.
+- Consider whether an online query converter such as Uncoder.io might work for your rules.
+- If rules aren't available or can't be converted, create them manually using a KQL query. Review the [rules mapping](#map-and-compare-rule-samples) to create new queries.
+
+Learn more about [best practices for migrating detection rules](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/best-practices-for-migrating-detection-rules-from-arcsight/ba-p/2216417).
+
+**To migrate your analytics rules to Microsoft Sentinel**:
+
+1. Verify that you have a testing system in place for each rule you want to migrate.
+
+ 1. **Prepare a validation process** for your migrated rules, including full test scenarios and scripts.
+
+ 1. **Ensure that your team has useful resources** to test your migrated rules.
+
+ 1. **Confirm that you have any required data sources connected,** and review your data connection methods.
+
+1. Verify whether your detections are available as built-in templates in Microsoft Sentinel:
+
+ - **If the built-in rules are sufficient**, use built-in rule templates to create rules for your own workspace.
+
+ In Microsoft Sentinel, go to the **Configuration > Analytics > Rule templates** tab, and create and update each relevant analytics rule.
+
+ For more information, see [Detect threats out-of-the-box](detect-threats-built-in.md).
+
+ - **If you have detections that aren't covered by Microsoft Sentinel's built-in rules**, try an online query converter, such as [Uncoder.io](https://uncoder.io/) to convert your queries to KQL.
+
+ Identify the trigger condition and rule action, and then construct and review your KQL query.
+
+ - **If neither the built-in rules nor an online rule converter is sufficient**, you'll need to create the rule manually. In such cases, use the following steps to start creating your rule:
+
+ 1. **Identify the data sources you want to use in your rule**. You'll want to create a mapping table between data sources and data tables in Microsoft Sentinel to identify the tables you want to query.
+
+ 1. **Identify any attributes, fields, or entities** in your data that you want to use in your rules.
+
+ 1. **Identify your rule criteria and logic**. At this stage, you may want to use rule templates as samples for how to construct your KQL queries.
+
+ Consider filters, correlation rules, active lists, reference sets, watchlists, detection anomalies, aggregations, and so on. You might use references provided by your legacy SIEM to understand [how to best map your query syntax](#map-and-compare-rule-samples).
+
+ 1. **Identify the trigger condition and rule action, and then construct and review your KQL query**. When reviewing your query, consider KQL optimization guidance resources.
+
+1. Test the rule with each of your relevant use cases. If it doesn't provide expected results, you may want to review the KQL and test it again.
+
+1. When you're satisfied, you can consider the rule migrated. Create a playbook for your rule action as needed. For more information, see [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md).
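+
+The mapping table called for in step 1 of the manual path starts with an inventory of what's actually in your workspace. Here's a minimal sketch, assuming data is already flowing into Log Analytics; the one-day window is illustrative:
+
+```kusto
+// List every table that received data in the last day, with event counts
+union withsource=TableName *
+| where TimeGenerated > ago(1d)
+| summarize Events = count() by TableName
+| sort by Events desc
+```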
+
+Learn more about analytics rules.
+
+- [**Create custom analytics rules to detect threats**](detect-threats-custom.md). Use [alert grouping](detect-threats-custom.md#alert-grouping) to reduce alert fatigue by grouping alerts that occur within a given timeframe.
+- [**Map data fields to entities in Microsoft Sentinel**](map-data-fields-to-entities.md) to enable SOC engineers to define entities as part of the evidence to track during an investigation. Entity mapping also makes it possible for SOC analysts to take advantage of an intuitive [investigation graph](investigate-cases.md#use-the-investigation-graph-to-deep-dive) that can help reduce time and effort.
+- [**Investigate incidents with UEBA data**](investigate-with-ueba.md), as an example of how to use evidence to surface events, alerts, and any bookmarks associated with a particular incident in the incident preview pane.
+- [**Kusto Query Language (KQL)**](/azure/data-explorer/kusto/query/), which you can use to send read-only requests to your [Log Analytics](../azure-monitor/logs/log-analytics-tutorial.md) database to process data and return results. KQL is also used across other Microsoft services, such as [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender) and [Application Insights](../azure-monitor/app/app-insights-overview.md).
+
+## Compare rule terminology
+
+This table helps you to clarify the concept of a rule in Microsoft Sentinel compared to QRadar.
+
+| |QRadar |Microsoft Sentinel |
+||||
+|**Rule type** |• Events<br>• Flow<br>• Common<br>• Offense<br>• Anomaly detection rules |• Scheduled query<br>• Fusion<br>• Microsoft Security<br>• Machine Learning (ML) Behavior Analytics |
+|**Criteria** |Define in test condition |Define in KQL |
+|**Trigger condition** |Define in rule |Threshold: Number of query results |
+|**Action** |• Create offense<br>• Dispatch new event<br>• Add to reference set or data<br>• And more |• Create alert or incident<br>• Integrates with Logic Apps |
+
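+To make the comparison concrete, here's a minimal sketch of how criteria and trigger condition combine in a scheduled query; the table, field value, and threshold are illustrative:
+
+```kusto
+CommonSecurityLog
+| where TimeGenerated > ago(1h)              // window evaluated by the scheduled rule
+| where DeviceAction == "deny"               // criteria, defined in KQL
+| summarize DenyCount = count() by SourceIP
+| where DenyCount > 10                       // trigger condition: threshold on query results
+```
+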
+## Map and compare rule samples
+
+Use these samples to compare and map rules from QRadar to Microsoft Sentinel in various scenarios.
+
+|Rule |Syntax |Sample detection rule (QRadar) |Sample KQL query |Resources |
+||||||
+|Common property tests |[QRadar syntax](#common-property-tests-syntax) |• [Regular expression example](#common-property-tests-regular-expression-example-qradar)<br>• [AQL filter query example](#common-property-tests-aql-filter-query-example-qradar)<br>• [equals/not equals example](#common-property-tests-equalsnot-equals-example-qradar) |• [Regular expression example](#common-property-tests-regular-expression-example-kql)<br>• [AQL filter query example](#common-property-tests-aql-filter-query-example-kql)<br>• [equals/not equals example](#common-property-tests-equalsnot-equals-example-kql) |• Regular expression: [matches regex](/azure/data-explorer/kusto/query/re2)<br>• AQL filter query: [string operators](/azure/data-explorer/kusto/query/datatypes-string-operators#operators-on-strings)<br>• equals/not equals: [String operators](/azure/data-explorer/kusto/query/datatypes-string-operators#operators-on-strings) |
+|Date/time tests |[QRadar syntax](#datetime-tests-syntax) |• [Selected day of the month example](#datetime-tests-selected-day-of-the-month-example-qradar)<br>• [Selected day of the week example](#datetime-tests-selected-day-of-the-week-example-qradar)<br>• [after/before/at example](#datetime-tests-afterbeforeat-example-qradar) |• [Selected day of the month example](#datetime-tests-selected-day-of-the-month-example-kql)<br>• [Selected day of the week example](#datetime-tests-selected-day-of-the-week-example-kql)<br>• [after/before/at example](#datetime-tests-afterbeforeat-example-kql) |• [Date and time operators](/azure/data-explorer/kusto/query/samples?pivots=azuremonitor#date-and-time-operations)<br>• Selected day of the month: [dayofmonth()](/azure/data-explorer/kusto/query/dayofmonthfunction)<br>• Selected day of the week: [dayofweek()](/azure/data-explorer/kusto/query/dayofweekfunction)<br>• after/before/at: [format_datetime()](/azure/data-explorer/kusto/query/format-datetimefunction) |
+|Event property tests |[QRadar syntax](#event-property-tests-syntax) |• [IP protocol example](#event-property-tests-ip-protocol-example-qradar)<br>• [Event Payload string example](#event-property-tests-event-payload-string-example-qradar)<br> |• [IP protocol example](#event-property-tests-ip-protocol-example-kql)<br>• [Event Payload string example](#event-property-tests-event-payload-string-example-kql)<br> |• IP protocol: [String operators](/azure/data-explorer/kusto/query/datatypes-string-operators#operators-on-strings)<br>• Event Payload string: [has](/azure/data-explorer/kusto/query/datatypes-string-operators) |
+|Functions: counters |[QRadar syntax](#functions-counters-syntax) |[Event property and time example](#counters-event-property-and-time-example-qradar) |[Event property and time example](#counters-event-property-and-time-example-kql) |[summarize](/azure/data-explorer/kusto/query/summarizeoperator) |
+|Functions: negative conditions |[QRadar syntax](#functions-negative-conditions-syntax) |[Negative conditions example](#negative-conditions-example-qradar) |[Negative conditions example](#negative-conditions-example-kql) |• [join()](/azure/data-explorer/kusto/query/joinoperator?pivots=azuredataexplorer)<br>• [String operators](/azure/data-explorer/kusto/query/datatypes-string-operators#operators-on-strings)<br>• [Numerical operators](/azure/data-explorer/kusto/query/numoperators) |
+|Functions: simple |[QRadar syntax](#functions-simple-conditions-syntax) |[Simple conditions example](#simple-conditions-example-qradar) |[Simple conditions example](#simple-conditions-example-kql) |[or](/azure/data-explorer/kusto/query/logicaloperators) |
+|IP/port tests |[QRadar syntax](#ipport-tests-syntax) |• [Source port example](#ipport-tests-source-port-example-qradar)<br>• [Source IP example](#ipport-tests-source-ip-example-qradar) |• [Source port example](#ipport-tests-source-port-example-kql)<br>• [Source IP example](#ipport-tests-source-ip-example-kql) | |
+|Log source tests |[QRadar syntax](#log-source-tests-syntax) |[Log source example](#log-source-example-qradar) |[Log source example](#log-source-example-kql) | |
+
+### Common property tests syntax
+
+Here's the QRadar syntax for a common property tests rule.
+
+### Common property tests: Regular expression example (QRadar)
+
+Here's the syntax for a sample QRadar common property tests rule that uses a regular expression:
+
+```
+when any of <these properties> match <this regular expression>
+```
+Here's the sample rule in QRadar.
+
+### Common property tests: Regular expression example (KQL)
+
+Here's the common property tests rule with a regular expression in KQL.
+
+```kusto
+CommonSecurityLog
+| where tostring(SourcePort) matches regex @"\d{1,5}" or tostring(DestinationPort) matches regex @"\d{1,5}"
+```
+### Common property tests: AQL filter query example (QRadar)
+
+Here's the syntax for a sample QRadar common property tests rule that uses an AQL filter query.
+
+```
+when the event matches <this> AQL filter query
+```
+Here's the sample rule in QRadar.
+
+### Common property tests: AQL filter query example (KQL)
+
+Here's the common property tests rule with an AQL filter query in KQL.
+
+```kusto
+CommonSecurityLog
+| where SourceIP == '10.1.1.10'
+```
+### Common property tests: equals/not equals example (QRadar)
+
+Here's the syntax for a sample QRadar common property tests rule that uses the `equals` or `not equals` operator.
+
+```
+and when <this property> <equals/not equals> <this property>
+```
+Here's the sample rule in QRadar.
+
+### Common property tests: equals/not equals example (KQL)
+
+Here's the common property tests rule with the `equals` or `not equals` operator in KQL.
+
+```kusto
+CommonSecurityLog
+| where SourceIP == DestinationIP
+```
+### Date/time tests syntax
+
+Here's the QRadar syntax for a date/time tests rule.
+
+### Date/time tests: Selected day of the month example (QRadar)
+
+Here's the syntax for a sample QRadar date/time tests rule that uses a selected day of the month.
+
+```
+and when the event(s) occur <on/after/before> the <selected> day of the month
+```
+Here's the sample rule in QRadar.
+
+### Date/time tests: Selected day of the month example (KQL)
+
+Here's the date/time tests rule with a selected day of the month in KQL.
+
+```kusto
+SecurityEvent
+ | where dayofmonth(TimeGenerated) < 4
+```
+### Date/time tests: Selected day of the week example (QRadar)
+
+Here's the syntax for a sample QRadar date/time tests rule that uses a selected day of the week:
+
+```
+and when the event(s) occur on any of <these days of the week{Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday}>
+```
+Here's the sample rule in QRadar.
+
+### Date/time tests: Selected day of the week example (KQL)
+
+Here's the date/time tests rule with a selected day of the week in KQL.
+
+```kusto
+SecurityEvent
+ | where dayofweek(TimeGenerated) between (3d .. 5d)
+```
+### Date/time tests: after/before/at example (QRadar)
+
+Here's the syntax for a sample QRadar date/time tests rule that uses the `after`, `before`, or `at` operator.
+
+```
+and when the event(s) occur <after/before/at> <this time{12.00AM, 12.05AM, ...11.50PM, 11.55PM}>
+```
+Here's the sample rule in QRadar.
+
+### Date/time tests: after/before/at example (KQL)
+
+Here's the date/time tests rule that uses the `after`, `before`, or `at` operator in KQL.
+
+```kusto
+SecurityEvent
+| where format_datetime(TimeGenerated,'HH:mm')=="23:55"
+```
+`TimeGenerated` is in UTC/GMT.
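+
+If the original QRadar rule was written against local time, you might convert before comparing. Here's a minimal sketch using `datetime_utc_to_local()`; the time zone is illustrative:
+
+```kusto
+SecurityEvent
+| extend LocalTime = datetime_utc_to_local(TimeGenerated, 'America/New_York')
+| where format_datetime(LocalTime, 'HH:mm') == "23:55"
+```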
+
+### Event property tests syntax
+
+Here's the QRadar syntax for an event property tests rule.
+
+### Event property tests: IP protocol example (QRadar)
+
+Here's the syntax for a sample QRadar event property tests rule that uses an IP protocol.
+
+```
+and when the IP protocol is one of the following <protocols>
+```
+Here's the sample rule in QRadar.
+
+### Event property tests: IP protocol example (KQL)
+
+```kusto
+CommonSecurityLog
+| where Protocol in ("UDP","ICMP")
+```
+### Event property tests: Event Payload string example (QRadar)
+
+Here's the syntax for a sample QRadar event property tests rule that uses an `Event Payload` string value.
+
+```
+and when the Event Payload contains <this string>
+```
+Here's the sample rule in QRadar.
+
+### Event property tests: Event Payload string example (KQL)
+
+```kusto
+CommonSecurityLog
+// Query a known table directly; faster than searching all tables
+| where DeviceVendor has "Palo Alto"
+
+// Alternative when the table isn't known: search across all tables
+search "Palo Alto"
+```
+To optimize performance, avoid using the `search` command if you already know the table name.
+
+### Functions: counters syntax
+
+Here's the QRadar syntax for a functions rule that uses counters.
+
+### Counters: Event property and time example (QRadar)
+
+Here's the syntax for a sample QRadar functions rule that uses a defined number of event properties in a defined number of minutes.
+
+```
+and when at least <this many> events are seen with the same <event properties> in <this many> <minutes>
+```
+Here's the sample rule in QRadar.
+
+### Counters: Event property and time example (KQL)
+
+```kusto
+CommonSecurityLog
+| summarize Count = count() by SourceIP, DestinationIP
+| where Count >= 5
+```
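+
+Note that the QRadar rule also bounds the count to a time window ("in this many minutes"), which the query above omits. A hedged variant that approximates the window by binning; the five-minute span is illustrative:
+
+```kusto
+CommonSecurityLog
+| summarize Count = count() by SourceIP, DestinationIP, bin(TimeGenerated, 5m)
+| where Count >= 5
+```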
+### Functions: negative conditions syntax
+
+Here's the QRadar syntax for a functions rule that uses negative conditions.
+
+### Negative conditions example (QRadar)
+
+Here's the syntax for a sample QRadar functions rule that uses negative conditions.
+
+```
+and when none of <these rules> match in <this many> <minutes> after <these rules> match with the same <event properties>
+```
+Here are two defined rules in QRadar. The negative conditions will be based on these rules.
+
+Here's a sample of the negative conditions rule based on the rules above.
+
+### Negative conditions example (KQL)
+
+```kusto
+let spanoftime = 10m;
+let Test2 = (
+CommonSecurityLog
+| where Protocol !in ("UDP","ICMP")
+| where TimeGenerated > ago(spanoftime)
+);
+let Test6 = (
+CommonSecurityLog
+| where SourceIP == DestinationIP
+);
+Test2
+// rightanti join: keep rows from Test6 that have no match in Test2 within the time span
+| join kind=rightanti Test6 on $left.SourceIP == $right.SourceIP, $left.Protocol == $right.Protocol
+```
+### Functions: simple conditions syntax
+
+Here's the QRadar syntax for a functions rule that uses simple conditions.
+
+### Simple conditions example (QRadar)
+
+Here's the syntax for a sample QRadar functions rule that uses simple conditions.
+
+```
+and when an event matches <any|all> of the following <rules>
+```
+Here's the sample rule in QRadar.
+
+### Simple conditions example (KQL)
+
+```kusto
+CommonSecurityLog
+| where Protocol !in ("UDP","ICMP") or SourceIP == DestinationIP
+```
+### IP/port tests syntax
+
+Here's the QRadar syntax for an IP/port tests rule.
+
+### IP/port tests: Source port example (QRadar)
+
+Here's the syntax for a sample QRadar rule specifying a source port.
+
+```
+and when the source port is one of the following <ports>
+```
+Here's the sample rule in QRadar.
+
+### IP/port tests: Source port example (KQL)
+
+```kusto
+CommonSecurityLog
+| where SourcePort == 20
+```
+### IP/port tests: Source IP example (QRadar)
+
+Here's the syntax for a sample QRadar rule specifying a source IP.
+
+```
+and when the source IP is one of the following <IP addresses>
+```
+Here's the sample rule in QRadar.
+
+### IP/port tests: Source IP example (KQL)
+
+```kusto
+CommonSecurityLog
+| where SourceIP in ("10.1.1.1","10.2.2.2")
+```
+### Log source tests syntax
+
+Here's the QRadar syntax for a log source tests rule.
+
+#### Log source example (QRadar)
+
+Here's the syntax for a sample QRadar rule specifying log sources.
+
+```
+and when the event(s) were detected by one or more of these <log source types>
+```
+Here's the sample rule in QRadar.
+
+#### Log source example (KQL)
+
+```kusto
+OfficeActivity
+| where OfficeWorkload == "Exchange"
+```
+## Next steps
+
+In this article, you learned how to map your migration rules from QRadar to Microsoft Sentinel.
+
+> [!div class="nextstepaction"]
+> [Migrate your SOAR automation](migration-qradar-automation.md)
sentinel Migration Qradar Historical Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-qradar-historical-data.md
+
+ Title: "Microsoft Sentinel migration: Export QRadar data to target platform | Microsoft Docs"
+description: Learn how to export your historical data from QRadar.
+++ Last updated : 05/03/2022+++
+# Export historical data from QRadar
+
+This article describes how to export your historical data from QRadar. After you complete the steps in this article, you can [select a target platform](migration-ingestion-target-platform.md) to host the exported data, and then [select an ingestion tool](migration-ingestion-tool.md) to migrate the data.
+
+Follow the steps in these sections to export your historical data using [QRadar forwarding destination](https://www.ibm.com/docs/en/qsip/7.5?topic=administration-forward-data-other-systems).
+
+## Configure QRadar forwarding destination
+
+Configure the QRadar forwarding destination, including your profile, rules, and destination address:
+
+1. [Configure a forwarding profile](https://www.ibm.com/docs/en/qsip/7.5?topic=systems-configuring-forwarding-profiles).
+1. [Add a forwarding destination](https://www.ibm.com/docs/en/qsip/7.5?topic=systems-adding-forwarding-destinations):
+ 1. Set the **Event Format** to **JSON**.
+ 2. Set the **Destination Address** to a server that has syslog running on TCP port 5141 and stores the ingested logs to a local folder path.
+ 3. Select the forwarding profile created in step 1.
+ 4. Enable the forwarding destination configuration.
+
+## Configure routing rules
+
+Configure routing rules:
+
+1. [Configure routing rules to forward data](https://www.ibm.com/docs/en/qsip/7.5?topic=systems-configuring-routing-rules-forward-data).
+1. Set the **Mode** to **Offline**.
+1. Select the relevant **Forwarding Event Processor**.
+1. Set the **Data Source** to **Events**.
+1. Select **Add Filter** to add filter criteria for data that needs to be exported. For example, use the **Log Source Time** field to set a timestamp range.
+1. Select **Forward** and select the forwarding destination created when you [configured the QRadar forwarding destination](#configure-qradar-forwarding-destination) in step 2.
+1. [Enable the routing rule configuration](https://www.ibm.com/docs/en/qsip/7.5?topic=systems-viewing-managing-routing-rules).
+1. Repeat steps 1-7 for each event processor from which you need to export data.
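+
+After the exported data lands in your target platform, a quick KQL spot-check can confirm that the forwarded time range arrived intact. Here's a minimal sketch, assuming a KQL-based target such as Azure Data Explorer or Log Analytics, and a hypothetical ingested table named `QRadarHistorical`:
+
+```kusto
+// Daily event counts across the ingested range; gaps may indicate missed forwarding
+QRadarHistorical
+| summarize Events = count() by bin(TimeGenerated, 1d)
+| sort by TimeGenerated asc
+```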
+
+## Next steps
+
+- [Select a target Azure platform to host the exported historical data](migration-ingestion-target-platform.md)
+- [Select a data ingestion tool](migration-ingestion-tool.md)
+- [Ingest historical data into your target platform](migration-export-ingest.md)
sentinel Migration Security Operations Center Processes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-security-operations-center-processes.md
+
+ Title: "Microsoft Sentinel migration: Update SOC and analyst processes | Microsoft Docs"
+description: Learn how to update your SOC and analyst processes as part of your migration to Microsoft Sentinel.
+++ Last updated : 05/03/2022++
+# Update SOC processes
+
+A security operations center (SOC) is a centralized function within an organization that integrates people, processes, and technology. A SOC implements the organization's overall cybersecurity framework. The SOC coordinates the organization's efforts to monitor, alert, prevent, detect, analyze, and respond to cybersecurity incidents. SOC teams, led by a SOC manager, may include incident responders, SOC analysts at levels 1, 2, and 3, threat hunters, and incident response managers.
+
+SOC teams use telemetry from across the organization's IT infrastructure, including networks, devices, applications, behaviors, appliances, and information stores. The teams then correlate and analyze the data to determine how to manage it and which actions to take.
+
+To successfully migrate to Microsoft Sentinel, you need to update not only the technology that the SOC uses, but also the SOC tasks and processes. This article describes how to update your SOC and analyst processes as part of your migration to Microsoft Sentinel.
+
+## Update analyst workflow
+
+Microsoft Sentinel offers a range of tools that map to a typical analyst workflow, from incident assignment to closure. Analysts can flexibly use some or all of the available tools to triage and investigate incidents. As your organization migrates to Microsoft Sentinel, your analysts need to adapt to these new toolsets, features, and workflows.
+
+### Incidents in Microsoft Sentinel
+
+In Microsoft Sentinel, an incident is a collection of alerts that Microsoft Sentinel determines have sufficient fidelity to trigger the incident. Hence, with Microsoft Sentinel, the analyst triages incidents in the **Incidents** page first, and then proceeds to analyze alerts, if a deeper dive is needed. [Compare your SIEM's incident terminology and management areas](#compare-siem-concepts) with Microsoft Sentinel.
+
+### Analyst workflow stages
+
+This table describes the key stages in the analyst workflow, and highlights the specific tools relevant to each activity in the workflow.
+
+|Assign |Triage |Investigate |Respond |
+|||||
+|**[Assign incidents](#assign)**:<br>• Manually, in the **Incidents** page <br>• Automatically, using playbooks or automation rules |**[Triage incidents](#triage)** using:<br>• The incident details in the **Incident** page<br>• Entity information in the **Incident** page, under the **Entities** tab<br>• Jupyter Notebooks |**[Investigate incidents](#investigate)** using:<br>• The investigation graph<br>• Microsoft Sentinel Workbooks<br>• The Log Analytics query window |**[Respond to incidents](#respond)** using:<br>• Playbooks and automation rules<br>• Microsoft Teams War Room |
+
+The next sections map both the terminology and analyst workflow to specific Microsoft Sentinel features.
+
+#### Assign
+
+Use the Microsoft Sentinel **Incidents** page to assign incidents. The **Incidents** page includes an incident preview, and a detailed view for single incidents.
+
+To assign an incident:
+- **Manually**. Set the **Owner** field to the relevant user name.
+- **Automatically**. [Use a custom solution based on Microsoft Teams and Logic Apps](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/automate-incident-assignment-with-shifts-for-teams/ba-p/2297549), [or an automation rule](automate-incident-handling-with-automation-rules.md).
+
+#### Triage
+
+To conduct a triage exercise in Microsoft Sentinel, you can start with various Microsoft Sentinel features, depending on your level of expertise and the nature of the incident under investigation. As a typical starting point, select **View full details** in the **Incident** page. You can now examine the alerts that comprise the incident, review bookmarks, select entities to drill down further into specific entities, or add comments.
+
+Here are suggested actions to continue your incident review:
+- Select **Investigation** for a visual representation of the relationships between the incidents and the relevant entities.
+- Use a [Jupyter notebook](notebooks.md) to perform an in-depth triage exercise for a particular entity. You can use the **Incident triage** notebook for this exercise.
+
+##### Expedite triage
+
+Use these features and capabilities to expedite triage:
+
+- For quick filtering, in the **Incidents** page, [search for incidents](investigate-cases.md#search-for-incidents) associated to a specific entity. Filtering by entity in the **Incidents** page is faster than filtering by the entity column in legacy SIEM incident queues.
+- For faster triage, use the **[Alert details](customize-alert-details.md)** screen to include key incident information in the incident name and description, such as the related user name, IP address, or host. For example, an incident could be dynamically renamed to `Ransomware activity detected in DC01`, where `DC01` is a critical asset, dynamically identified via the customizable alert properties.
+- For deeper analysis, in the **Incidents page**, select an incident and select **Events** under **Evidence** to view specific events that triggered the incident. The event data is visible as the output of the query associated with the analytics rule, rather than the raw event. The rule migration engineer can use this output to ensure that the analyst gets the correct data.
+- For detailed entity information, in the **Incidents page**, select an incident and select an entity name under **Entities** to view the entity's directory information, timeline, and insights. Learn how to [map entities](map-data-fields-to-entities.md).
+- To link to relevant workbooks, select **Incident preview**. You can customize the workbook to display additional information about the incident, or associated entities and custom fields.
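+
+Incident metadata itself can also be queried with KQL to support these triage shortcuts. Here's a minimal sketch, assuming incident data is available in the workspace's `SecurityIncident` table; the filter values are illustrative:
+
+```kusto
+SecurityIncident
+| summarize arg_max(TimeGenerated, *) by IncidentNumber  // keep only each incident's latest state
+| where Status == "New" and Severity == "High"
+| project IncidentNumber, Title, Owner, CreatedTime
+```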
+
+#### Investigate
+
+Use the investigation graph to deeply investigate incidents. From the **Incidents** page, select an incident and select **Investigate** to view the [investigation graph](investigate-cases.md#use-the-investigation-graph-to-deep-dive).
+
+With the investigation graph, you can:
+- Understand the scope and identify the root cause of potential security threats by correlating relevant data with any involved entity.
+- Dive deeper into entities, and choose between different expansion options.
+- Easily see connections across different data sources by viewing relationships extracted automatically from the raw data.
+- Expand your investigation scope using built-in exploration queries to surface the full scope of a threat.
+- Use predefined exploration options to help you ask the right questions while investigating a threat.
+
+From the investigation graph, you can also open workbooks to further support your investigation efforts. Microsoft Sentinel includes several workbook templates that you can customize to suit your specific use case.
+
+#### Respond
+
+Use Microsoft Sentinel automated response capabilities to respond to complex threats and reduce alert fatigue. Microsoft Sentinel provides automated response using [Logic Apps playbooks and automation rules](automate-responses-with-playbooks.md).
+
+Use one of the following options to access playbooks:
+- The [Automation > Playbook templates tab](use-playbook-templates.md)
+- The Microsoft Sentinel [Content hub](sentinel-solutions-deploy.md)
+- The Microsoft Sentinel [GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks)
+
+These sources include a wide range of security-oriented playbooks to cover a substantial portion of use cases of varying complexity. To streamline your work with playbooks, use the templates under **Automation > Playbook templates**. Templates allow you to easily deploy playbooks into the Microsoft Sentinel instance, and then modify the playbooks to suit your organization's needs.
+
+See the [SOC Process Framework](https://github.com/Azure/Azure-Sentinel/wiki/SOC-Process-Framework) to map your SOC process to Microsoft Sentinel capabilities.
+
+## Compare SIEM concepts
+
+Use this table to compare the main concepts of your legacy SIEM to Microsoft Sentinel concepts.
+
+| ArcSight | QRadar | Splunk | Microsoft Sentinel |
+|--|--|--|--|
+| Event | Event | Event | Event |
+| Correlation Event | Correlation Event | Notable Event | Alert |
+| Incident | Offense | Notable Event | Incident |
+| | List of offenses | Tags | Incidents page |
+| Labels | Custom field in SOAR | Tags | Tags |
+| | Jupyter Notebooks | Jupyter Notebooks | Microsoft Sentinel notebooks |
+| Dashboards | Dashboards | Dashboards | Workbooks |
+| Correlation rules | Building blocks | Correlation rules | Analytics rules |
+|Incident queue |Offenses tab |Incident review |**Incident** page |
+
+## Next steps
+
+After migration, explore Microsoft Sentinel resources to expand your skills and get the most out of Microsoft Sentinel.
+
+Also consider increasing your threat protection by using Microsoft Sentinel alongside [Microsoft 365 Defender](./microsoft-365-defender-sentinel-integration.md) and [Microsoft Defender for Cloud](../security-center/azure-defender.md) for [integrated threat protection](https://www.microsoft.com/security/business/threat-protection). Benefit from the breadth of visibility that Microsoft Sentinel delivers, while diving deeper into detailed threat analysis.
+
+For more information, see:
+
+- [Rule migration best practices](https://techcommunity.microsoft.com/t5/azure-sentinel/best-practices-for-migrating-detection-rules-from-arcsight/ba-p/2216417)
+- [Webinar: Best Practices for Converting Detection Rules](https://www.youtube.com/watch?v=njXK1h9lfR4)
+- [Security Orchestration, Automation, and Response (SOAR) in Microsoft Sentinel](automation.md)
+- [Manage your SOC better with incident metrics](manage-soc-with-incident-metrics.md)
+- [Microsoft Sentinel learning path](/learn/paths/security-ops-sentinel/)
+- [SC-200 Microsoft Security Operations Analyst certification](/learn/certifications/exams/sc-200)
+- [Microsoft Sentinel Ninja training](https://techcommunity.microsoft.com/t5/azure-sentinel/become-an-azure-sentinel-ninja-the-complete-level-400-training/ba-p/1246310)
+- [Investigate an attack on a hybrid environment with Microsoft Sentinel](https://mslearn.cloudguides.com/guides/Investigate%20an%20attack%20on%20a%20hybrid%20environment%20with%20Azure%20Sentinel)
sentinel Migration Splunk Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-splunk-automation.md
+
+ Title: Migrate Splunk SOAR automation to Microsoft Sentinel | Microsoft Docs
+description: Learn how to identify SOAR use cases, and how to migrate your Splunk SOAR automation to Microsoft Sentinel.
+++ Last updated : 05/03/2022++
+# Migrate Splunk SOAR automation to Microsoft Sentinel
+
+Microsoft Sentinel provides Security Orchestration, Automation, and Response (SOAR) capabilities with [automation rules](automate-incident-handling-with-automation-rules.md) and [playbooks](tutorial-respond-threats-playbook.md). Automation rules automate incident handling and response, and playbooks run predetermined sequences of actions to respond to and remediate threats. This article discusses how to identify SOAR use cases, and how to migrate your Splunk SOAR automation to Microsoft Sentinel.
+
+Automation rules simplify complex workflows for your incident orchestration processes, and allow you to centrally manage your incident handling automation.
+
+With automation rules, you can:
+- Perform simple automation tasks without necessarily using playbooks. For example, you can assign and tag incidents, change their status, and close them.
+- Automate responses for multiple analytics rules at once.
+- Control the order of actions that are executed.
+- Run playbooks for those cases where more complex automation tasks are necessary.
+
+## Identify SOAR use cases
+
+Here's what you need to think about when migrating SOAR use cases from Splunk.
+- **Use case quality**. Choose good use cases for automation. Use cases should be based on procedures that are clearly defined, with minimal variation and a low false-positive rate; automation works best with efficient, well-defined use cases.
+- **Manual intervention**. Automated responses can have wide-ranging effects, so high-impact automations should require human input to confirm actions before they're taken.
+- **Binary criteria**. To increase response success, decision points within an automated workflow should be as limited as possible, with binary criteria. Binary criteria reduce the need for human intervention and enhance outcome predictability.
+- **Accurate alerts or data**. Response actions depend on the accuracy of signals such as alerts. Alerts and enrichment sources should be reliable. Microsoft Sentinel resources such as watchlists and reliable threat intelligence can enhance reliability.
+- **Analyst role**. While automation where possible is great, reserve more complex tasks for analysts, and give them the opportunity to provide input into workflows that require validation. In short, response automation should augment and extend analyst capabilities.
+
+## Migrate SOAR workflow
+
+This section shows how key Splunk SOAR concepts translate to Microsoft Sentinel components, and provides general guidelines for how to migrate each step or component in the SOAR workflow.
+
+|Step (in diagram) |Splunk |Microsoft Sentinel |
+||||
+|1 |Ingest events into main index. |Ingest events into the Log Analytics workspace. |
+|2 |Create containers. |Tag incidents using the [custom details feature](surface-custom-details-in-alerts.md). |
+|3 |Create cases. |Microsoft Sentinel automatically groups alerts into incidents according to user-defined criteria, such as shared entities or severity. |
+|4 |Create playbooks. |Azure Logic Apps uses several connectors to orchestrate activities across Microsoft Sentinel, Azure, third-party, and hybrid cloud environments. |
+|5 |Create workbooks. |Microsoft Sentinel executes playbooks either in isolation or as part of an ordered automation rule. You can also execute playbooks manually against alerts or incidents, according to a predefined Security Operations Center (SOC) procedure. |
+
+## Map SOAR components
+
+Review which Microsoft Sentinel or Azure Logic Apps features map to the main Splunk SOAR components.
+
+|Splunk |Microsoft Sentinel/Azure Logic Apps |
+|||
+|Playbook editor |[Logic App designer](../logic-apps/logic-apps-overview.md) |
+|Trigger |[Trigger](../logic-apps/logic-apps-overview.md) |
+|• Connectors<br>• App<br>• Automation broker |• [Connector](tutorial-respond-threats-playbook.md)<br>• [Hybrid Runbook Worker](../automation/automation-hybrid-runbook-worker.md) |
+|Action blocks |[Action](../logic-apps/logic-apps-overview.md) |
+|Connectivity broker |[Hybrid Runbook Worker](../automation/automation-hybrid-runbook-worker.md) |
+|Community |• [Automation > Templates tab](use-playbook-templates.md)<br>• [Content hub catalog](sentinel-solutions-catalog.md)<br>• [GitHub](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Block-OnPremADUser) |
+|Decision |[Conditional control](../logic-apps/logic-apps-control-flow-conditional-statement.md) |
+|Code |[Azure Function connector](../logic-apps/logic-apps-azure-functions.md) |
+|Prompt |[Send approval email](../logic-apps/tutorial-process-mailing-list-subscriptions-workflow.md) |
+|Format |[Data operations](../logic-apps/logic-apps-perform-data-operations.md) |
+|Input playbooks |Obtain variable inputs from results of previously executed steps or explicitly declared [variables](../logic-apps/logic-apps-create-variables-store-values.md) |
+|Set parameters with the Utility block |Manage incidents with the [API](/rest/api/securityinsights/stable/incidents/get) |
+
+## Operationalize playbooks and automation rules in Microsoft Sentinel
+
+Most of the playbooks that you use with Microsoft Sentinel are available in either the [Automation > Templates tab](use-playbook-templates.md), the [Content hub catalog](sentinel-solutions-catalog.md), or [GitHub](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Block-OnPremADUser). In some cases, however, you might need to create playbooks from scratch or from existing templates.
+
+You typically build your custom logic app using the Azure Logic App Designer feature. The Logic Apps code is based on [Azure Resource Manager (ARM) templates](../azure-resource-manager/templates/overview.md), which facilitate development, deployment, and portability of Azure Logic Apps across multiple environments. To convert your custom playbook into a portable ARM template, you can use the [ARM template generator](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/export-microsoft-sentinel-playbooks-or-azure-logic-apps-with/ba-p/3275898).
+
+Use these resources for cases where you need to build your own playbooks either from scratch or from existing templates.
+- [Automate incident handling in Microsoft Sentinel](automate-incident-handling-with-automation-rules.md)
+- [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md)
+- [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](tutorial-respond-threats-playbook.md)
+- [How to use Microsoft Sentinel for Incident Response, Orchestration and Automation](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/how-to-use-azure-sentinel-for-incident-response-orchestration/ba-p/2242397)
+- [Adaptive Cards to enhance incident response in Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/using-microsoft-teams-adaptive-cards-to-enhance-incident/ba-p/3330941)
+
+## SOAR post migration best practices
+
+Here are best practices you should take into account after your SOAR migration:
+
+- After you migrate your playbooks, test the playbooks extensively to ensure that the migrated actions work as expected.
+- Periodically review your automations to explore ways to further simplify or enhance your SOAR. Microsoft Sentinel constantly adds new connectors and actions that can help you to further simplify or increase the effectiveness of your current response implementations.
+- Monitor the performance of your playbooks using the [Playbooks health monitoring workbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/what-s-new-monitoring-your-logic-apps-playbooks-in-azure/ba-p/1873211).
+- Use managed identities and service principals: Authenticate against various Azure services within your Logic Apps, store the secrets in Azure Key Vault, and obscure the flow execution output. We also recommend that you [monitor the activities of these service principals](https://techcommunity.microsoft.com/t5/azure-sentinel/non-interactive-logins-minimizing-the-blind-spot/ba-p/2287932).
+
+## Next steps
+
+In this article, you learned how to map your SOAR automation from Splunk to Microsoft Sentinel.
+
+> [!div class="nextstepaction"]
+> [Export your historical data](migration-splunk-historical-data.md)
sentinel Migration Splunk Detection Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-splunk-detection-rules.md
+
+ Title: Migrate Splunk detection rules to Microsoft Sentinel | Microsoft Docs
+description: Learn how to identify, compare, and migrate your Splunk detection rules to Microsoft Sentinel built-in rules.
+++ Last updated : 05/03/2022++
+# Migrate Splunk detection rules to Microsoft Sentinel
+
+This article describes how to identify, compare, and migrate your Splunk detection rules to Microsoft Sentinel built-in rules.
+
+## Identify and migrate rules
+
+Microsoft Sentinel uses machine learning analytics to create high-fidelity and actionable incidents, and some of your existing detections may be redundant in Microsoft Sentinel. Therefore, don't migrate all of your detection and analytics rules blindly. Review these considerations as you identify your existing detection rules.
+
+- Make sure to select use cases that justify rule migration, considering business priority and efficiency.
+- Check that you [understand Microsoft Sentinel rule types](detect-threats-built-in.md#view-built-in-detections).
+- Check that you understand the [rule terminology](#compare-rule-terminology).
+- Review any rules that haven't triggered any alerts in the past 6-12 months, and determine whether they're still relevant.
+- Eliminate low-level threats or alerts that you routinely ignore.
+- Use existing functionality, and check whether Microsoft Sentinel's [built-in analytics rules](https://github.com/Azure/Azure-Sentinel/tree/master/Detections) might address your current use cases. Because Microsoft Sentinel uses machine learning analytics to produce high-fidelity and actionable incidents, it's likely that some of your existing detections won't be required anymore.
+- Confirm connected data sources and review your data connection methods. Revisit data collection conversations to ensure data depth and breadth across the use cases you plan to detect.
+- Explore community resources such as the [SOC Prime Threat Detection Marketplace](https://my.socprime.com/tdm/) to check whether your rules are available.
+- Consider whether an online query converter such as Uncoder.io might work for your rules.
+- If rules aren't available or can't be converted, they need to be created manually, using a KQL query. Review the [rules mapping](#map-and-compare-rule-samples) to create new queries.
+
+Learn more about [best practices for migrating detection rules](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/best-practices-for-migrating-detection-rules-from-arcsight/ba-p/2216417).
+
+**To migrate your analytics rules to Microsoft Sentinel**:
+
+1. Verify that you have a testing system in place for each rule you want to migrate.
+
+ 1. **Prepare a validation process** for your migrated rules, including full test scenarios and scripts.
+
+ 1. **Ensure that your team has useful resources** to test your migrated rules.
+
+ 1. **Confirm that you have any required data sources connected,** and review your data connection methods.
+
+1. Verify whether your detections are available as built-in templates in Microsoft Sentinel:
+
+ - **If the built-in rules are sufficient**, use built-in rule templates to create rules for your own workspace.
+
+ In Microsoft Sentinel, go to the **Configuration > Analytics > Rule templates** tab, and create and update each relevant analytics rule.
+
+ For more information, see [Detect threats out-of-the-box](detect-threats-built-in.md).
+
+ - **If you have detections that aren't covered by Microsoft Sentinel's built-in rules**, try an online query converter, such as [Uncoder.io](https://uncoder.io/) to convert your queries to KQL.
+
+ Identify the trigger condition and rule action, and then construct and review your KQL query.
+
+ - **If neither the built-in rules nor an online rule converter is sufficient**, you'll need to create the rule manually. In such cases, use the following steps to start creating your rule:
+
+ 1. **Identify the data sources you want to use in your rule**. You'll want to create a mapping table between data sources and data tables in Microsoft Sentinel to identify the tables you want to query.
+
+ 1. **Identify any attributes, fields, or entities** in your data that you want to use in your rules.
+
+ 1. **Identify your rule criteria and logic**. At this stage, you may want to use rule templates as samples for how to construct your KQL queries.
+
+ Consider filters, correlation rules, active lists, reference sets, watchlists, detection anomalies, aggregations, and so on. You might use references provided by your legacy SIEM to understand [how to best map your query syntax](#map-and-compare-rule-samples).
+
+ 1. **Identify the trigger condition and rule action, and then construct and review your KQL query**. When reviewing your query, consider KQL optimization guidance resources.
+
+1. Test the rule with each of your relevant use cases. If it doesn't provide expected results, you may want to review the KQL and test it again.
+
+1. When you're satisfied, you can consider the rule migrated. Create a playbook for your rule action as needed. For more information, see [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md).
+
+Learn more about analytics rules.
+
+- [**Create custom analytics rules to detect threats**](detect-threats-custom.md). Use [alert grouping](detect-threats-custom.md#alert-grouping) to reduce alert fatigue by grouping alerts that occur within a given timeframe.
+- [**Map data fields to entities in Microsoft Sentinel**](map-data-fields-to-entities.md) to enable SOC engineers to define entities as part of the evidence to track during an investigation. Entity mapping also makes it possible for SOC analysts to take advantage of an intuitive [investigation graph](investigate-cases.md#use-the-investigation-graph-to-deep-dive) that can help reduce time and effort.
+- [**Investigate incidents with UEBA data**](investigate-with-ueba.md), as an example of how to use evidence to surface events, alerts, and any bookmarks associated with a particular incident in the incident preview pane.
+- [**Kusto Query Language (KQL)**](/azure/data-explorer/kusto/query/), which you can use to send read-only requests to your [Log Analytics](../azure-monitor/logs/log-analytics-tutorial.md) database to process data and return results. KQL is also used across other Microsoft services, such as [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender) and [Application Insights](../azure-monitor/app/app-insights-overview.md).
+
+## Compare rule terminology
+
+This table helps you to clarify the concept of a rule in Microsoft Sentinel compared to Splunk.
+
+| |Splunk |Microsoft Sentinel |
+||||
+|**Rule type** |• Scheduled<br>• Real-time |• Scheduled query<br>• Fusion<br>• Microsoft Security<br>• Machine Learning (ML) Behavior Analytics |
+|**Criteria** |Define in SPL |Define in KQL |
+|**Trigger condition** |• Number of results<br>• Number of hosts<br>• Number of sources<br>• Custom |Threshold: Number of query results |
+|**Action** |• Add to triggered alerts<br>• Log Event<br>• Output results to lookup<br>• And more |• Create alert or incident<br>• Integrates with Logic Apps |
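+
+As a concrete (and purely illustrative) example of the mapping: a Splunk scheduled search such as `index=security action=blocked | stats count by src_ip | where count > 100` expresses its criteria in SPL and triggers on the result count. In a Microsoft Sentinel scheduled rule, the criteria move into KQL:
+
+```kusto
+CommonSecurityLog
+| where DeviceAction == "blocked"        // the former SPL search filter
+| summarize Count = count() by SourceIP  // the former stats count by src_ip
+| where Count > 100                      // threshold, in the query or via the rule's result threshold
+```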
+
+## Map and compare rule samples
+
+Use these samples to compare and map rules from Splunk to Microsoft Sentinel in various scenarios.
+
+### Common search commands
+
+|SPL command |Description |KQL operator |KQL example |
+|||||
+|`chart/timechart` |Returns results in a tabular output for time-series charting. |[render operator](/azure/data-explorer/kusto/query/renderoperator?pivots=azuredataexplorer) |`… | render timechart` |
+|`dedup` |Removes subsequent results that match a specified criterion. |• [distinct](/azure/data-explorer/kusto/query/distinctoperator)<br>• [summarize](/azure/data-explorer/kusto/query/summarizeoperator) |`… | summarize by Computer, EventID` |
+|`eval` |Calculates an expression. Learn about [common eval commands](https://github.com/Azure/Azure-Sentinel/blob/master/Tools/RuleMigration/SPL%20to%20KQL.md#common-eval-commands). |[extend](/azure/data-explorer/kusto/query/extendoperator) |`T | extend duration = endTime - startTime` |
+|`fields` |Removes fields from search results. |• [project](/azure/data-explorer/kusto/query/projectoperator)<br>• [project-away](/azure/data-explorer/kusto/query/projectawayoperator) |`T | project cost=price*quantity, price` |
+|`head/tail` |Returns the first or last N results. |[top](/azure/data-explorer/kusto/query/topoperator) |`T | top 5 by Name desc nulls last` |
+|`lookup` |Adds field values from an external source. |• [externaldata](/azure/data-explorer/kusto/query/externaldata-operator?pivots=azuredataexplorer)<br>• [lookup](/azure/data-explorer/kusto/query/lookupoperator) |[KQL example](#lookup-command-kql-example) |
+|`rename` |Renames a field. Use wildcards to specify multiple fields. |[project-rename](/azure/data-explorer/kusto/query/projectrenameoperator) |`T | project-rename new_column_name = column_name` |
+|`rex` |Specifies group names using regular expressions to extract fields. |[matches regex](/azure/data-explorer/kusto/query/re2) |`… | where field matches regex "^addr.*"` |
+|`search` |Filters results to results that match the search expression. |[search](/azure/data-explorer/kusto/query/searchoperator?pivots=azuredataexplorer) |`search "X"` |
+|`sort` |Sorts the search results by the specified fields. |[sort](/azure/data-explorer/kusto/query/sortoperator) |`T | sort by strlen(country) asc, price desc` |
+|`stats` |Provides statistics, optionally grouped by fields. Learn more about [common stats commands](https://github.com/Azure/Azure-Sentinel/blob/master/Tools/RuleMigration/SPL%20to%20KQL.md#common-stats-commands). |[summarize](/azure/data-explorer/kusto/query/summarizeoperator) |[KQL example](#stats-command-kql-example) |
+|`mstats` |Similar to stats, used on metrics instead of events. |[summarize](/azure/data-explorer/kusto/query/summarizeoperator) |[KQL example](#mstats-command-kql-example) |
+|`table` |Specifies which fields to keep in the result set, and retains data in tabular format. |[project](/azure/data-explorer/kusto/query/projectoperator) |`T | project columnA, columnB` |
+|`top/rare` |Displays the most or least common values of a field. |[top](/azure/data-explorer/kusto/query/topoperator) |`T | top 5 by Name desc nulls last` |
+|`transaction` |Groups search results into transactions.<br><br>[SPL example](#transaction-command-spl-example) |Example: [row_window_session](/azure/data-explorer/kusto/query/row-window-session-function) |[KQL example](#transaction-command-kql-example) |
+|`eventstats` |Generates summary statistics from fields in your events and saves those statistics in a new field.<br><br>[SPL example](#eventstats-command-spl-example) |Examples:<br>• [join](/azure/data-explorer/kusto/query/joinoperator?pivots=azuredataexplorer)<br>• [make_list](/azure/data-explorer/kusto/query/makelist-aggfunction)<br>• [mv-expand](/azure/data-explorer/kusto/query/mvexpandoperator) |[KQL example](#eventstats-command-kql-example) |
+|`streamstats` |Find the cumulative sum of a field.<br><br>SPL example:<br>`... | streamstats sum(bytes) as bytes_total \| timechart` |[row_cumsum](/azure/data-explorer/kusto/query/rowcumsumfunction) |`...\| serialize cs=row_cumsum(bytes)` |
+|`anomalydetection` |Find anomalies in the specified field.<br><br>[SPL example](#anomalydetection-command-spl-example) |[series_decompose_anomalies()](/azure/data-explorer/kusto/query/series-decompose-anomaliesfunction) |[KQL example](#anomalydetection-command-kql-example) |
+|`where` |Filters search results using `eval` expressions. Used to compare two different fields. |[where](/azure/data-explorer/kusto/query/whereoperator) |`T | where fruit=="apple"` |
+
+#### lookup command: KQL example
+
+```kusto
+Users
+| where UserID in ((externaldata (UserID:string) [
+@"https://storageaccount.blob.core.windows.net/storagecontainer/users.txt"
+h@"?...SAS..." // Secret token to access the blob
+])) | ...
+```
+#### stats command: KQL example
+
+```kusto
+Sales
+| summarize NumTransactions=count(),
+Total=sum(UnitPrice * NumUnits) by Fruit,
+StartOfMonth=startofmonth(SellDateTime)
+```
+#### mstats command: KQL example
+
+```kusto
+T | summarize count() by price_range=bin(price, 10.0)
+```
+
+#### transaction command: SPL example
+
+```spl
+sourcetype=MyLogTable type=Event
+| transaction ActivityId startswith="Start" endswith="Stop"
+| Rename timestamp as StartTime
+| Table City, ActivityId, StartTime, Duration
+```
+#### transaction command: KQL example
+
+```kusto
+let Events = MyLogTable | where type=="Event";
+Events
+| where Name == "Start"
+| project Name, City, ActivityId, StartTime=timestamp
+| join (Events
+| where Name == "Stop"
+| project StopTime=timestamp, ActivityId)
+on ActivityId
+| project City, ActivityId, StartTime,
+Duration = StopTime - StartTime
+```
+
+Use `row_window_session()` to calculate the session start values for a column in a serialized row set.
+
+```kusto
+...| extend SessionStarted = row_window_session(
+Timestamp, 1h, 5m, ID != prev(ID))
+```
+#### eventstats command: SPL example
+
+```spl
+… | bin span=1m _time
+|stats count AS count_i by _time, category
+| eventstats sum(count_i) as count_total by _time
+```
+#### eventstats command: KQL example
+
+Here's an example with the `join` statement:
+
+```kusto
+let binSize = 1h;
+let detail = SecurityEvent
+| summarize detail_count = count() by EventID,
+tbin = bin(TimeGenerated, binSize);
+let summary = SecurityEvent
+| summarize sum_count = count() by
+tbin = bin(TimeGenerated, binSize);
+detail
+| join kind=leftouter (summary) on tbin
+| project-away tbin1
+```
+Here's an example with the `make_list` statement:
+
+```kusto
+let binSize = 1m;
+SecurityEvent
+| where TimeGenerated >= ago(24h)
+| summarize TotalEvents = count() by EventID,
+groupBin =bin(TimeGenerated, binSize)
+|summarize make_list(EventID), make_list(TotalEvents),
+sum(TotalEvents) by groupBin
+| mvexpand list_EventID, list_TotalEvents
+```
+#### anomalydetection command: SPL example
+
+```spl
+sourcetype=nasdaq earliest=-10y
+| anomalydetection Close_Price
+```
+#### anomalydetection command: KQL example
+
+```kusto
+let LookBackPeriod= 7d;
+let disableAccountLogon=SigninLogs
+| where ResultType == "50057"
+| where ResultDescription has "account is disabled";
+disableAccountLogon
+| make-series Trend=count() default=0 on TimeGenerated
+in range(startofday(ago(LookBackPeriod)), now(), 1d)
+| extend (RSquare,Slope,Variance,RVariance,Interception,
+LineFit)=series_fit_line(Trend)
+| extend (anomalies,score) =
+series_decompose_anomalies(Trend)
+```
+### Common eval commands
+
+|SPL command |Description |SPL example |KQL command |KQL example |
+||||||
+|`abs(X)` |Returns the absolute value of X. |`abs(number)` |[abs()](/azure/data-explorer/kusto/query/abs-function) |`abs(X)` |
+|`case(X,"Y",…)` |Takes pairs of `X` and `Y` arguments, where the `X` arguments are boolean expressions. When evaluated to `TRUE`, the arguments return the corresponding `Y` argument. |[SPL example](#casexy-spl-example) |[case](/azure/data-explorer/kusto/query/casefunction) |[KQL example](#casexy-kql-example) |
+|`ceil(X)` |Ceiling of a number X. |`ceil(1.9)` |[ceiling()](/azure/data-explorer/kusto/query/ceilingfunction) |`ceiling(1.9)` |
+|`cidrmatch("X",Y)` |Identifies IP addresses that belong to a particular subnet. |`cidrmatch`<br>`("123.132.32.0/25",ip)` |ΓÇó [ipv4_is_match()](/azure/data-explorer/kusto/query/ipv4-is-matchfunction)<br>ΓÇó [ipv6_is_match()](/azure/data-explorer/kusto/query/ipv6-is-matchfunction) |`ipv4_is_match('192.168.1.1', '192.168.1.255')`<br>`== false` |
+|`coalesce(X,…)` |Returns the first value that isn't null. |`coalesce(null(), "Returned val", null())` |[coalesce()](/azure/data-explorer/kusto/query/coalescefunction) |`coalesce(tolong("not a number"),`<br> `tolong("42"), 33) == 42` |
+|`cos(X)` |Calculates the cosine of X. |`n=cos(0)` |[cos()](/azure/data-explorer/kusto/query/cosfunction) |`cos(X)` |
+|`exact(X)` |Evaluates an expression X using double precision floating point arithmetic. |`exact(3.14*num)` |[todecimal()](/azure/data-explorer/kusto/query/todecimalfunction) |`todecimal(3.14*2)` |
+|`exp(X)` |Returns eX. |`exp(3)` |[exp()](/azure/data-explorer/kusto/query/exp-function) |`exp(3)` |
+|`if(X,Y,Z)` |If `X` evaluates to `TRUE`, the result is the second argument `Y`. If `X` evaluates to `FALSE`, the result evaluates to the third argument `Z`. |`if(error==200,`<br> `"OK", "Error")` |[iif()](/azure/data-explorer/kusto/query/iiffunction) |[KQL example](#ifxyz-kql-example) |
+|`isbool(X)` |Returns `TRUE` if `X` is boolean. |`isbool(field)` |• [iif()](/azure/data-explorer/kusto/query/iiffunction)<br>• [gettype](/azure/data-explorer/kusto/query/gettypefunction) |`iif(gettype(X) =="bool","TRUE","FALSE")` |
+|`isint(X)` |Returns `TRUE` if `X` is an integer. |`isint(field)` |• [iif()](/azure/data-explorer/kusto/query/iiffunction)<br>• [gettype](/azure/data-explorer/kusto/query/gettypefunction) |[KQL example](#isintx-kql-example) |
+|`isnull(X)` |Returns `TRUE` if `X` is null. |`isnull(field)` |[isnull()](/azure/data-explorer/kusto/query/isnullfunction) |`isnull(field)` |
+|`isstr(X)` |Returns `TRUE` if `X` is a string. |`isstr(field)` |• [iif()](/azure/data-explorer/kusto/query/iiffunction)<br>• [gettype](/azure/data-explorer/kusto/query/gettypefunction) |[KQL example](#isstrx-kql-example) |
+|`len(X)` |This function returns the character length of a string `X`. |`len(field)` |[strlen()](/azure/data-explorer/kusto/query/strlenfunction) |`strlen(field)` |
+|`like(X,"y")` |Returns `TRUE` if and only if `X` is like the SQLite pattern in `Y`. |`like(field, "addr%")` |ΓÇó [has](/azure/data-explorer/kusto/query/has-anyoperator)<br>ΓÇó [contains](/azure/data-explorer/kusto/query/datatypes-string-operators)<br>ΓÇó [startswith](/azure/data-explorer/kusto/query/datatypes-string-operators)<br>ΓÇó [matches regex](/azure/data-explorer/kusto/query/re2) |[KQL example](#likexy-example) |
+|`log(X,Y)` |Returns the log of the first argument `X` using the second argument `Y` as the base. The default value of `Y` is `10`. |`log(number,2)` |ΓÇó [log](/azure/data-explorer/kusto/query/log-function)<br>ΓÇó [log2](/azure/data-explorer/kusto/query/log2-function)<br>ΓÇó [log10](/azure/data-explorer/kusto/query/log10-function) |`log(X)`<br><br>`log2(X)`<br><br>`log10(X)` |
+|`lower(X)` |Returns the lowercase value of `X`. |`lower(username)` |[tolower](/azure/data-explorer/kusto/query/tolowerfunction) |`tolower(username)` |
+|`ltrim(X,Y)` |Returns `X` with the characters in parameter `Y` trimmed from the left side. The default value of `Y` is spaces and tabs. |`ltrim(" ZZZabcZZ ", " Z")` |[trim_start()](/azure/data-explorer/kusto/query/trimstartfunction) |`trim_start(@"[ Z]+", A)` |
+|`match(X,Y)` |Returns `TRUE` if `X` matches the regex pattern `Y`. |`match(field, "^\d{1,3}.\d$")` |[matches regex](/azure/data-explorer/kusto/query/re2) |`…\| where field matches regex @"^\d{1,3}.\d$"` |
+|`max(X,…)` |Returns the maximum value in a column. |`max(delay, mydelay)` |• [max()](/azure/data-explorer/kusto/query/max-aggfunction)<br>• [arg_max()](/azure/data-explorer/kusto/query/arg-max-aggfunction) |`…\| summarize max(field)` |
+|`md5(X)` |Returns the MD5 hash of a string value `X`. |`md5(field)` |[hash_md5](/azure/data-explorer/kusto/query/md5hashfunction) |`hash_md5("X")` |
+|`min(X,…)` |Returns the minimum value in a column. |`min(delay, mydelay)` |• [min_of()](/azure/data-explorer/kusto/query/min-offunction)<br>• [min()](/azure/data-explorer/kusto/query/min-aggfunction)<br>• [arg_min()](/azure/data-explorer/kusto/query/arg-min-aggfunction) |[KQL example](#minx-kql-example) |
+|`mvcount(X)` |Returns the number (total) of `X` values. |`mvcount(multifield)` |[dcount](/azure/data-explorer/kusto/query/dcount-aggfunction) |`…\| summarize dcount(X) by Y` |
+|`mvfilter(X)` |Filters a multi-valued field based on the boolean `X` expression. |`mvfilter(match(email, "net$"))` |[mv-apply](/azure/data-explorer/kusto/query/mv-applyoperator) |[KQL example](#mvfilterx-kql-example) |
+|`mvindex(X,Y,Z)` |Returns a subset of the multi-valued `X` argument from a start position (zero-based) `Y` to `Z` (optional). |`mvindex( multifield, 2)` |[array_slice](/azure/data-explorer/kusto/query/arrayslicefunction) |`array_slice(arr, 1, 2)` |
+|`mvjoin(X,Y)` |Given a multi-valued field `X` and string delimiter `Y`, and joins the individual values of `X` using `Y`. |`mvjoin(address, ";")` |[strcat_array](/azure/data-explorer/kusto/query/strcat-arrayfunction) |[KQL example](#mvjoinxy-kql-example) |
+|`now()` |Returns the current time, represented in Unix time. |`now()` |[now()](/azure/data-explorer/kusto/query/nowfunction) |`now()`<br><br>`now(-2d)` |
+|`null()` |Doesn't accept arguments and returns `NULL`. |`null()` |[null](/azure/data-explorer/kusto/query/scalar-data-types/null-values?pivots=azuredataexplorer) |`null` |
+|`nullif(X,Y)` |Includes two arguments, `X` and `Y`, and returns `X` if the arguments are different. Otherwise, returns `NULL`. |`nullif(fieldA, fieldB)` |[iif](/azure/data-explorer/kusto/query/iiffunction) |`iif(fieldA==fieldB, null, fieldA)` |
+|`random()` |Returns a pseudo-random number between `0` to `2147483647`. |`random()` |[rand()](/azure/data-explorer/kusto/query/randfunction) |`rand()` |
+|`relative_time(X,Y)` |Given an epoch time `X` and relative time specifier `Y`, returns the epoch time value of `Y` applied to `X`. |`relative_time(now(),"-1d@d")` |[unix time](/azure/data-explorer/kusto/query/datetime-timespan-arithmetic#example-unix-time) |[KQL example](#relative_timexy-kql-example) |
+|`replace(X,Y,Z)` |Returns a string formed by substituting string `Z` for every occurrence of regular expression string `Y` in string `X`. |Returns date with the month and day numbers switched.<br>For example, for the `4/30/2015` input, the output is `30/4/2015`:<br><br>`replace(date, "^(\d{1,2})/(\d{1,2})/", "\2/\1/")` |[replace()](/azure/data-explorer/kusto/query/replacefunction) |[KQL example](#replacexyz-kql-example) |
+|`round(X,Y)` |Returns `X` rounded to the number of decimal places specified by `Y`. The default is to round to an integer. |`round(3.5)` |[round](/azure/data-explorer/kusto/query/roundfunction) |`round(3.5)` |
+|`rtrim(X,Y)` |Returns `X` with the characters of `Y` trimmed from the right side. If `Y` isn't specified, spaces and tabs are trimmed. |`rtrim(" ZZZZabcZZ ", " Z")` |[trim_end()](/azure/data-explorer/kusto/query/trimendfunction) |`trim_end(@"[ Z]+",A)` |
+|`searchmatch(X)` |Returns `TRUE` if the event matches the search string `X`. |`searchmatch("foo AND bar")` |[iif()](/azure/data-explorer/kusto/query/iiffunction) |`iif(field has "X","Yes","No")` |
+| `split(X,"Y")` |Returns `X` as a multi-valued field, split by delimiter `Y`. |`split(address, ";")` |[split()](/azure/data-explorer/kusto/query/splitfunction) |`split(address, ";")` |
+|`sqrt(X)` |Returns the square root of `X`. |`sqrt(9)` |[sqrt()](/azure/data-explorer/kusto/query/sqrtfunction) |`sqrt(9)` |
+|`strftime(X,Y)` |Returns the epoch time value `X` rendered using the format specified by `Y`. |`strftime(_time, "%H:%M")` |[format_datetime()](/azure/data-explorer/kusto/query/format-datetimefunction) |`format_datetime(time,'HH:mm')` |
+| `strptime(X,Y)` |Given a time represented by a string `X`, returns value parsed from format `Y`. |`strptime(timeStr, "%H:%M")` |[format_datetime()](/azure/data-explorer/kusto/query/format-datetimefunction) |[KQL example](#strptimexy-kql-example) |
+|`substr(X,Y,Z)` |Returns a substring field `X` from start position (one-based) `Y` for `Z` (optional) characters. |`substr("string", 1, 3)` |[substring()](/azure/data-explorer/kusto/query/substringfunction) |`substring("string", 0, 3)` |
+|`time()` |Returns the wall-clock time with microsecond resolution. |`time()` |[format_datetime()](/azure/data-explorer/kusto/query/format-datetimefunction) |[KQL example](#time-kql-example) |
+|`tonumber(X,Y)` |Converts input string `X` to a number, where `Y` (optional, default value is `10`) defines the base of the number to convert to. |`tonumber("0A4",16)` |[toint()](/azure/data-explorer/kusto/query/tointfunction) |`toint("123")` |
+|`tostring(X,Y)` |[Description](#tostringxy) |[SPL example](#tostringxy-spl-example) |[tostring()](/azure/data-explorer/kusto/query/tostringfunction) |`tostring(123)` |
+|`typeof(X)` |Returns a string representation of the field type. |`typeof(12)` |[gettype()](/azure/data-explorer/kusto/query/gettypefunction) |`gettype(12)` |
+|`urldecode(X)` |Returns the URL `X` decoded. |[SPL example](#urldecodex-spl-example) |[url_decode](/azure/data-explorer/kusto/query/urldecodefunction) |`url_decode(X)` |
+
+#### case(X,"Y",…) SPL example
+
+```SPL
+case(error == 404, "Not found",
+error == 500,"Internal Server Error",
+error == 200, "OK")
+```
+#### case(X,"Y",…) KQL example
+
+```kusto
+T
+| extend Message = case(error == 404, "Not found",
+error == 500,"Internal Server Error", "OK")
+```
+#### if(X,Y,Z) KQL example
+
+```kusto
+iif(floor(Timestamp, 1d)==floor(now(), 1d),
+"today", "anotherday")
+```
+#### isint(X) KQL example
+
+```kusto
+iif(gettype(X) =="long","TRUE","FALSE")
+```
+#### isstr(X) KQL example
+
+```kusto
+iif(gettype(X) =="string","TRUE","FALSE")
+```
+#### like(X,"y") example
+
+```kusto
+… | where field has "addr"
+
+… | where field contains "addr"
+
+… | where field startswith "addr"
+
+… | where field matches regex "^addr.*"
+```
+#### min(X,…) KQL example
+
+```kusto
+min_of (expr_1, expr_2 ...)
+
+…|summarize min(expr)
+
+…| summarize arg_min(Price,*) by Product
+```
+#### mvfilter(X) KQL example
+
+```kusto
+T | mv-apply Metric to typeof(real) on
+(
+ top 2 by Metric desc
+)
+```
+#### mvjoin(X,Y) KQL example
+
+```kusto
+strcat_array(dynamic([1, 2, 3]), "->")
+```
+#### relative_time(X,Y) KQL example
+
+```kusto
+let toUnixTime = (dt:datetime)
+{
+(dt - datetime(1970-01-01))/1s
+};
+```
+#### replace(X,Y,Z) KQL example
+
+```kusto
+replace( @'^(\d{1,2})/(\d{1,2})/', @'\2/\1/',date)
+```
+#### strptime(X,Y) KQL example
+
+```kusto
+format_datetime(datetime('2017-08-16 11:25:10'),
+'HH:mm')
+```
+#### time() KQL example
+
+```kusto
+format_datetime(datetime(2015-12-14 02:03:04),
+'h:m:s')
+```
+#### tostring(X,Y)
+
+Returns a field value of `X` as a string.
+- If the value of `X` is a number, `X` is reformatted to a string value.
+- If `X` is a boolean value, `X` is reformatted to `TRUE` or `FALSE`.
+- If `X` is a number, the second argument `Y` is optional and can either be `hex` (converts `X` to a hexadecimal), `commas` (formats `X` with commas and two decimal places), or `duration` (converts `X` from a time format in seconds to a readable time format: `HH:MM:SS`).
+
+##### tostring(X,Y) SPL example
+
+This example returns `foo=615` and `foo2=00:10:15`:
+
+```SPL
+… | eval foo=615 | eval foo2 = tostring(
+foo, "duration")
+```
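+A rough KQL counterpart for the `duration` case is `format_timespan()`; this is a sketch rather than an exact mapping:
+
+```kusto
+// Render 615 seconds as HH:mm:ss, similar to SPL tostring(foo, "duration")
+print foo2 = format_timespan(615 * 1s, 'HH:mm:ss')
+// Output: 00:10:15
+```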
+#### urldecode(X) SPL example
+
+```SPL
+urldecode("http%3A%2F%2Fwww.splunk.com%2Fdownload%3Fr%3Dheader")
+```
+### Common stats commands
+
+|SPL command |Description |KQL command |KQL example |
+|||||
+|`avg(X)` |Returns the average of the values of field `X`. |[avg()](/azure/data-explorer/kusto/query/avg-aggfunction) |`avg(X)` |
+|`count(X)` |Returns the number of occurrences of the field `X`. To indicate a specific field value to match, format `X` as `eval(field="value")`. |[count()](/azure/data-explorer/kusto/query/count-aggfunction) |`summarize count()` |
+|`dc(X)` |Returns the count of distinct values of the field `X`. |[dcount()](/azure/data-explorer/kusto/query/dcount-aggfunction) |`…\| summarize countries=dcount(country) by continent` |
+|`earliest(X)` |Returns the chronologically earliest seen value of `X`. |[arg_min()](/azure/data-explorer/kusto/query/arg-min-aggfunction) |`… \| summarize arg_min(TimeGenerated, *) by X` |
+|`latest(X)` |Returns the chronologically latest seen value of `X`. |[arg_max()](/azure/data-explorer/kusto/query/arg-max-aggfunction) |`… \| summarize arg_max(TimeGenerated, *) by X` |
+|`max(X)` |Returns the maximum value of the field `X`. If the values of `X` are non-numeric, the maximum value is found via alphabetical ordering. |[max()](/azure/data-explorer/kusto/query/max-aggfunction) |`…\| summarize max(X)` |
+|`median(X)` |Returns the middle-most value of the field `X`. |[percentile()](/azure/data-explorer/kusto/query/percentiles-aggfunction) |`…\| summarize percentile(X, 50)` |
+|`min(X)` |Returns the minimum value of the field `X`. If the values of `X` are non-numeric, the minimum value is found via alphabetical ordering. |[min()](/azure/data-explorer/kusto/query/min-aggfunction) |`…\| summarize min(X)` |
+|`mode(X)` |Returns the most frequent value of the field `X`. |[top-hitters()](/azure/data-explorer/kusto/query/tophittersoperator) |`…\| top-hitters 1 of X` |
+|`perc(Y)` |Returns the percentile `X` value of the field `Y`. For example, `perc5(total)` returns the fifth percentile value of a field `total`. |[percentile()](/azure/data-explorer/kusto/query/percentiles-aggfunction) |`…\| summarize percentile(Y, 5)` |
+|`range(X)` |Returns the difference between the maximum and minimum values of the field `X`. |[range()](/azure/data-explorer/kusto/query/rangefunction) |`range(1, 3)` |
+|`stdev(X)` |Returns the sample standard deviation of the field `X`. |[stdev](/azure/data-explorer/kusto/query/stdev-aggfunction) |`…\| summarize stdev(X)` |
+|`stdevp(X)` |Returns the population standard deviation of the field `X`. |[stdevp()](/azure/data-explorer/kusto/query/stdevp-aggfunction) |`…\| summarize stdevp(X)` |
+|`sum(X)` |Returns the sum of the values of the field `X`. |[sum()](/azure/data-explorer/kusto/query/sum-aggfunction) |`sum(X)` |
+|`sumsq(X)` |Returns the sum of the squares of the values of the field `X`. |[sum()](/azure/data-explorer/kusto/query/sum-aggfunction) |`…\| summarize sum(pow(X, 2))` |
+|`values(X)` |Returns the list of all distinct values of the field `X` as a multi-value entry. The order of the values is alphabetical. |[make_set()](/azure/data-explorer/kusto/query/makeset-aggfunction) |`…\| summarize r = make_set(X)` |
+|`var(X)` |Returns the sample variance of the field `X`. |[variance](/azure/data-explorer/kusto/query/variance-aggfunction) |`variance(X)` |
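+
+As an illustration of how these aggregations compose, SPL's conditional count pattern `count(eval(field="value"))` maps to KQL's `countif()`. The following sketch uses hypothetical table and field names (`MyTable`, `status`, `clientip`):
+
+```kusto
+// SPL: ... | stats count(eval(status="404")) as NotFound, dc(clientip) as Clients
+// KQL sketch; MyTable, status, and clientip are placeholder names
+MyTable
+| summarize NotFound = countif(status == "404"), Clients = dcount(clientip)
+```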
+
+## Next steps
+
+In this article, you learned how to map your migration rules from Splunk to Microsoft Sentinel.
+
+> [!div class="nextstepaction"]
+> [Migrate your SOAR automation](migration-splunk-automation.md)
sentinel Migration Splunk Historical Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-splunk-historical-data.md
+
+ Title: "Microsoft Sentinel migration: Export Splunk data to target platform | Microsoft Docs"
+description: Learn how to export your historical data from Splunk.
+++ Last updated : 05/03/2022+++
+# Export historical data from Splunk
+
+This article describes how to export your historical data from Splunk. After you complete the steps in this article, you can [select a target platform](migration-ingestion-target-platform.md) to host the exported data, and then [select an ingestion tool](migration-ingestion-tool.md) to migrate the data.
++
+You can export data from Splunk in several ways. Your selection of an export method depends on the data volumes involved and your level of interactivity. For example, exporting a single, on-demand search via Splunk Web might be appropriate for a low-volume export. Alternatively, if you want to set up a higher-volume, scheduled export, the SDK and REST options work best.
+
+For large exports, the most stable method for data retrieval is `dump` or the Command Line Interface (CLI). You can export the logs to a local folder on the Splunk server or to another server accessible by Splunk.
+
+To export your historical data from Splunk, use one of the [Splunk export methods](https://docs.splunk.com/Documentation/Splunk/8.2.5/Search/Exportsearchresults). The output format should be CSV.
+
+## CLI example
+
+This CLI example searches for events from the `_internal` index that occur during the time window that the search string specifies, and outputs the events in CSV format to the **data.csv** file. By default, you can export a maximum of 100 events. To increase this limit, set the `-maxout` argument. For example, if you set `-maxout` to `0`, you can export an unlimited number of events.
+
+This CLI command exports data recorded between September 14, 2021 at 23:59 and September 16, 2021 at 01:00 to a CSV file:
+
+```
+splunk search "index=_internal earliest=09/14/2021:23:59:00 latest=09/16/2021:01:00:00 " -output csv > c:/data.csv
+```
+## dump example
+
+This `dump` command exports all events from the `bigdata` index to the `YYYYmmdd/HH/host` location under the `$SPLUNK_HOME/var/run/splunk/dispatch/<sid>/dump/` directory on a local disk. The command uses `MyExport` as the prefix for export filenames, and outputs the results to a CSV file. The command partitions the exported data using the `eval` function before the `dump` command.
+
+```
+index=bigdata | eval _dstpath=strftime(_time, "%Y%m%d/%H") + "/" + host | dump basefilename=MyExport format=csv
+```
+## Next steps
+
+- [Select a target Azure platform to host the exported historical data](migration-ingestion-target-platform.md)
+- [Select a data ingestion tool](migration-ingestion-tool.md)
+- [Ingest historical data into your target platform](migration-export-ingest.md)
sentinel Migration Track https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-track.md
+
+ Title: Track your Microsoft Sentinel migration with a workbook | Microsoft Docs
+description: Learn how to track your migration with a workbook, how to customize and manage the workbook, and how to use the workbook tabs for useful Microsoft Sentinel actions.
+++ Last updated : 05/03/2022++
+# Track your Microsoft Sentinel migration with a workbook
+
+As your organization's Security Operations Center (SOC) handles growing amounts of data, it's essential to plan and monitor your deployment status. While you can track your migration process using generic tools such as Microsoft Project, Microsoft Excel, Teams, or Azure DevOps, these tools aren't specific to SIEM migration tracking. To help you with tracking, we provide a dedicated workbook in Microsoft Sentinel named **Microsoft Sentinel Deployment and Migration**.
+
+The workbook helps you to:
+- Visualize migration progress
+- Deploy and track data sources
+- Deploy and monitor analytics rules and incidents
+- Deploy and utilize workbooks
+- Deploy and perform automation
+- Deploy and customize user and entity behavior analytics (UEBA)
+
+This article describes how to track your migration with the **Microsoft Sentinel Deployment and Migration** workbook, how to customize and manage the workbook, and how to use the workbook tabs to deploy and monitor data connectors, analytics, incidents, playbooks, automation rules, UEBA, and data management. Learn more about how to use [Azure Monitor workbooks](monitor-your-data.md) in Microsoft Sentinel.
+
+## Deploy the workbook content
+
+1. In the Azure portal, select Microsoft Sentinel and then select **Workbooks**.
+1. From the search bar, search for `migration`.
+1. From the search results, select the **Microsoft Sentinel Deployment and Migration** workbook and select **Save**.
+ Microsoft Sentinel deploys the workbook and saves the workbook in your environment.
+1. To view the workbook, select **Open saved workbook**.
+
+## Deploy the watchlist
+
+1. In the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Watchlists), select the **DeploymentandMigration** folder, and select **Deploy to Azure** to begin the template deployment in Azure.
+1. Provide the Microsoft Sentinel resource group and workspace name.
+ :::image type="content" source="media/migration-track/migration-track-azure-deployment.png" alt-text="Screenshot of deploying the watchlist to Azure.":::
+1. Select **Review and create**.
+1. After the information is validated, select **Create**.
+
+## Update the watchlist with deployment and migration actions
+
+This step is crucial to the tracking setup process. If you skip this step, the workbook won't reflect the items for tracking.
+
+To update the watchlist with deployment and migration actions:
+
+1. In the Azure portal, select Microsoft Sentinel and then select **Watchlist**.
+1. Locate the watchlist with the **Deployment** alias.
+1. Select the watchlist, and then select **Update watchlist > edit watchlist items** on the bottom right.
+ :::image type="content" source="media/migration-track/migration-track-update-watchlist.png" alt-text="Screenshot of updating watchlist items with deployment and migration actions." lightbox="media/migration-track/migration-track-update-watchlist.png":::
+1. Provide the information for the actions needed for the deployment and migration, and select **Save**.
+
+You can now view the watchlist within the migration tracker workbook. Learn how to [manage watchlists](watchlists-manage.md).
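+
+If you want to check the watchlist contents outside the workbook, you can query it in **Logs** with the `_GetWatchlist()` function; this minimal sketch assumes the **Deployment** alias used in this article:
+
+```kusto
+// Return all tracked deployment and migration actions from the watchlist
+_GetWatchlist('Deployment')
+```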
+
+In addition, your team might update or complete tasks during the deployment process. To address these changes, you can update existing actions or add new actions as you identify new use cases or set new requirements. To update or add actions, edit the **Deployment** watchlist that you [deployed previously](#deploy-the-watchlist). To simplify the process, select **Edit Deployment Watchlist** on the bottom left to open the watchlist directly from the workbook.
+
+## View deployment status
+
+To quickly view the deployment progress, in the **Microsoft Sentinel Deployment and Migration** workbook, select **Deployment** and scroll down to locate the **Summary of progress**. This area displays the deployment status, including the following information:
+
+- Tables reporting data
+- Number of tables reporting data
+- Number of reported logs and which tables report the log data
+- Number of enabled rules vs. undeployed rules
+- Recommended workbooks deployed
+- Total number of workbooks deployed
+- Total number of playbooks deployed
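+
+To spot-check which tables report data outside the workbook, you can run a query along these lines; this is a sketch, not the workbook's internal logic:
+
+```kusto
+// Count events per table over the last day to see which tables report data
+union withsource=TableName *
+| where TimeGenerated > ago(1d)
+| summarize Events = count() by TableName
+| sort by Events desc
+```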
+
+## Deploy and monitor data connectors
+
+To monitor deployed resources and deploy new connectors, in the **Microsoft Sentinel Deployment and Migration** workbook, select **Data Connectors > Monitor**. The **Monitor** view lists:
+- Current ingestion trends
+- Tables ingesting data
+- How much data each table is reporting
+- Endpoints reporting with Microsoft Monitoring Agent (MMA)
+- Endpoints reporting with Azure Monitor Agent (AMA)
+- Endpoints reporting with both MMA and AMA
+- Data collection rules in the resource group and the devices linked to the rules
+- Data connector health (changes and failures)
+- Health logs within the specified time range
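+
+You can approximate the ingestion trend view yourself with a query against the standard **Usage** table; a minimal sketch (Usage quantities are reported in MB):
+
+```kusto
+// Daily billable ingestion volume (GB) per data type over the last 30 days
+Usage
+| where TimeGenerated > ago(30d)
+| where IsBillable == true
+| summarize IngestedGB = sum(Quantity) / 1024.0 by DataType, bin(TimeGenerated, 1d)
+```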
++
+To configure a data connector:
+1. Select the **Configure** view.
+1. Select the button with the name of the connector you want to configure.
+1. Configure the connector in the connector status screen that opens. If you can't find a connector you need, select the connector name to open the connector gallery or solution gallery.
+ :::image type="content" source="media/migration-track/migration-track-configure-data-connectors.png" alt-text="Screenshot of the workbook's Configure view.":::
+
+## Deploy and monitor analytics and incidents
+
+Once the data is reported in the workspace, you can configure and monitor analytics rules. In the **Microsoft Sentinel Deployment and Migration** workbook, select **Analytics** to view all deployed rule templates and lists. This view indicates which rules are currently in use and how often the rules generate incidents.
++
+If you need more coverage, select **Review MITRE coverage** below the table on the left. Use this option to define which areas receive more coverage and which rules are deployed, at any stage of the migration project.
++
+Once the desired analytics rules are deployed and the Defender product connector is configured to send the alerts, you can monitor incident creation and frequency under **Deployment > Summary of progress**. This area displays metrics regarding alert generation by product, title, and classification, to indicate the health of the SOC and which alerts require the most attention. If alerts are generating too much volume, return to the **Analytics** tab to modify the logic.
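+
+For an ad hoc view of incident volume outside the workbook, you can query the **SecurityIncident** table; a sketch, with the time window adjusted to your needs:
+
+```kusto
+// Incident volume by analytics rule title and severity over the last 14 days
+SecurityIncident
+| where TimeGenerated > ago(14d)
+| summarize Incidents = dcount(IncidentNumber) by Title, Severity
+| sort by Incidents desc
+```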
++
+## Deploy and utilize workbooks
+
+To visualize information regarding the data ingestion and detections that Microsoft Sentinel performs, in the **Microsoft Sentinel Deployment and Migration** workbook, select **Workbooks**. Similar to the **Data Connectors** tab, you can use the **Monitor** and **Configure** views to view monitoring and configuration information.
+
+Here are some useful tasks you can perform in the **Workbooks** tab:
+
+- To view a list of all workbooks in the environment and how many workbooks are deployed, select **Monitor**.
+- To view a specific workbook within the **Microsoft Sentinel Deployment and Migration** workbook, select a workbook and then select **Open Selected Workbook**.
+
+ :::image type="content" source="media/migration-track/migration-track-workbook.png" alt-text="Screenshot of selecting a workbook in the Workbook tab." lightbox="media/migration-track/migration-track-workbook.png":::
+
+- If you haven't yet deployed workbooks, select **Configure** to view a list of commonly used and recommended workbooks. If a workbook isn't listed, select **Go to Workbook Gallery** or **Go to Content Hub** to deploy the relevant workbook.
+
+ :::image type="content" source="media/migration-track/migration-track-view-workbooks.png" alt-text="Screenshot of viewing a workbook from the Workbook tab.":::
+
+## Deploy and monitor playbooks and automation rules
+
+Once you configure data ingestion, detections, and visualizations, you can look into automation. In the **Microsoft Sentinel Deployment and Migration** workbook, select **Automation** to view deployed playbooks, and to see which playbooks are currently connected to an automation rule. If automation rules exist, the workbook highlights the following information regarding each rule:
+- Name
+- Status
+- Action or actions of the rule
+- The last date the rule was modified and the user that modified the rule
+- The date the rule was created
+
+To view, deploy, and test automation within the current section of the workbook, select **Deploy automation resources** on the bottom left.
+
+Learn about Microsoft Sentinel SOAR capabilities [for playbooks](automate-responses-with-playbooks.md) and [for automation rules](automate-incident-handling-with-automation-rules.md).
++
+## Deploy and monitor UEBA
+
+Because data reporting and detections happen at the entity level, it's essential to monitor entity behavior and trends. To enable the UEBA feature within Microsoft Sentinel, in the **Microsoft Sentinel Deployment and Migration** workbook, select **UEBA**. Here you can customize the entity timelines for entity pages, and view which entity-related tables are populated with data.
++
+To enable UEBA:
+1. Select **Enable UEBA** above the list of tables.
+1. Select **On**.
+1. Select the data sources you want to use to generate insights.
+1. Select **Apply**.
+
+After you enable UEBA, you can monitor the tables to ensure that Microsoft Sentinel is generating UEBA data.
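+
+One quick way to confirm that insights are flowing is to query the **BehaviorAnalytics** table, which stores UEBA output; a minimal sketch:
+
+```kusto
+// Count UEBA insights generated over the last week
+BehaviorAnalytics
+| where TimeGenerated > ago(7d)
+| count
+```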
+
+To customize the timeline:
+1. Select **Customize Entity Timeline** above the list of tables.
+1. Create a custom item, or select one of the out-of-the-box templates.
+1. To deploy the template and complete the wizard, select **Create**.
+
+Learn more about [UEBA](identify-threats-with-entity-behavior-analytics.md) or learn how to [customize the timeline](customize-entity-activities.md).
+
+## Configure and manage the data lifecycle
+
+When you deploy or migrate to Microsoft Sentinel, it's essential to manage the usage and lifecycle of the incoming logs. To assist with this, in the **Microsoft Sentinel Deployment and Migration** workbook, select **Data Management** to view and configure table retention and archival.
++
+You can view information regarding:
+
+- Tables configured for basic log ingestion
+- Tables configured for analytics tier ingestion
+- Tables configured to be archived
+- Tables on the default workspace retention
+
+To modify the existing retention policy for tables:
+1. Select the **Default Retention Tables** view.
+1. Select the table you want to modify, and select **Update Retention**. You can edit the following information:
+ - Current retention in the workspace
+ - Current retention in the archive
+ - Total number of days the data will live in the environment
+1. Edit the **TotalRetention** value to set a new total number of days that the data should exist within the environment.
+
+The **ArchiveRetention** value is calculated by subtracting the **InteractiveRetention** value from the **TotalRetention** value. If you adjust the workspace retention, the change doesn't affect tables that have an archive configured, and no data is lost. If you edit the **InteractiveRetention** value and the **TotalRetention** value doesn't change, Azure Log Analytics adjusts the archive retention to compensate for the change.
+
+If you prefer to make changes in the UI, select **Update Retention in UI** to open the relevant blade.
+
+Learn about [data lifecycle management](../azure-monitor/logs/data-retention-archive.md).
+
+## Enable migration tips and instructions
+
+To assist with the deployment and migration process, the workbook includes tips that explain how to use the different tabs, and links to relevant resources. The tips are based on Microsoft Sentinel migration documentation and are relevant to your current SIEM. To enable tips and instructions, in the **Microsoft Sentinel Deployment and Migration** workbook, on the top right, set **MigrationTips** and **Instruction** to **Yes**.
++
+## Next steps
+
+In this article, you learned how to track your migration with the **Microsoft Sentinel Deployment and Migration** workbook.
+
+- [Migrate ArcSight detection rules](migration-arcsight-detection-rules.md)
+- [Migrate Splunk detection rules](migration-splunk-detection-rules.md)
+- [Migrate QRadar detection rules](migration-qradar-detection-rules.md)
sentinel Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration.md
Title: Migrate to Microsoft Sentinel from an existing SIEM.
-description: Learn how to best migrate from an existing SIEM to Microsoft Sentinel, for scalable, intelligent security analytics across your organization.
-- Previously updated : 11/09/2021--
+ Title: Plan your migration to Microsoft Sentinel | Microsoft Docs
+description: Discover the reasons for migrating from a legacy SIEM, and learn how to plan out the different phases of your migration.
+++ Last updated : 05/03/2022
-# Migrate to Microsoft Sentinel from an existing SIEM
+# Plan your migration to Microsoft Sentinel
+Security operations center (SOC) teams use centralized security information and event management (SIEM) and security orchestration, automation, and response (SOAR) solutions to protect their increasingly decentralized digital estate. While legacy SIEMs can maintain good coverage of on-premises assets, on-premises architectures may have insufficient coverage for cloud assets, such as in Azure, Microsoft 365, AWS, or Google Cloud Platform (GCP). In contrast, Microsoft Sentinel can ingest data from both on-premises and cloud assets, ensuring coverage over the entire estate.
-Your security operations center (SOC) team will use centralized security information and event management (SIEM) and security orchestration, automation, and response (SOAR) solutions to protect your increasingly decentralized digital estate.
+This article discusses the reasons for migrating from a legacy SIEM, and describes how to plan out the different phases of your migration.
-Legacy SIEMs are often on-premises, and can maintain good coverage of your on-premises assets. However, on-premises architectures may have insufficient coverage for your cloud assets, such as in Azure, Microsoft 365, AWS, or Google Cloud Platform (GCP). In contrast, Microsoft Sentinel can ingest data from both on-premises and cloud assets, ensuring coverage over your entire estate.
+## Migration steps
-This article describes how to migrate from an existing, legacy SIEM to Microsoft Sentinel, either in a side-by-side configuration or by transitioning to a full Microsoft Sentinel deployment.
+In this guide, you learn how to migrate your legacy SIEM to Microsoft Sentinel. The following series of articles walks you through each step of the migration process.
-## Plan your migration
-
-You may have decided to start a direct or gradual transition to Microsoft Sentinel, depending on your business needs and available resources.
-
-You'll want to plan your migration properly to ensure that transition doesn't introduce gaps in coverage, which could put your organization's security in jeopardy.
-
-To start, identify your key core capabilities and first-priority requirements. Evaluate the key use cases your current SIEM covers, and decide which detections and capabilities where Microsoft Sentinel needs to continue providing coverage.
-
-You'll add more in-process planning at each step of your migration process, as you consider the exact data sources and detection rules you want to migrate. For more information, see [Migrate your data](#migrate-your-data) and [Migrate analytics rules](#migrate-analytics-rules).
-
-> [!TIP]
-> Your current SIEM may have an overwhelming number of detections and use cases. Decide which ones are most useful to your business and determine which ones may not need to be migrated. For example, check to see which detections produced results within the past year.
->
-
-### Compare your legacy SIEM to Microsoft Sentinel
-
-Compare your legacy SIEM to Microsoft Sentinel to help refine your migration completion criteria, and understand where you can extract more value with Microsoft Sentinel.
-
-For example, evaluate the following key areas:
-
-|Evaluation area |Description |
-|||
-|**Attack detection coverage.** | Compare how well each SIEM can detect the full range of attacks, using [MITRE ATT&CK](https://attack.mitre.org/) or a similar framework. |
-|**Responsiveness.** | Measure the mean time to acknowledge (MTTA), which is the time between an alert appearing in the SIEM and an analyst starting work on it. This time will probably be similar between SIEMs. |
-|**Mean time to remediate (MTTR).** | Compare the MTTR for incidents investigated by each SIEM, assuming analysts at equivalent skill levels. |
-|**Hunting speed and agility.** | Measure how fast teams can hunt, starting from a fully formed hypothesis, to querying the data, to getting the results on each SIEM platform. |
-|**Capacity growth friction.** | Compare the level of difficulty in adding capacity as usage grows. Keep in mind that cloud services and applications tend to generate more log data than traditional on-premises workloads. |
--
-If you have limited or no investment in an existing on-premises SIEM, moving to Microsoft Sentinel can be a straightforward, direct deployment. However, enterprises that are heavily invested in a legacy SIEM typically require a multi-stage process to accommodate transition tasks.
-
-Although Microsoft Sentinel provides extended data and response for both on-premises the cloud, you may want to start your migration slowly, by running Microsoft Sentinel and your legacy SIEM [side-by-side](#select-a-side-by-side-approach-and-method). In a side-by-side architecture local resources can use the on-premises SIEM and cloud resources and new workloads use cloud-based analytics.
-
-Unless you choose a long-term side-by-side configuration, complete your migration to a full Microsoft Sentinel deployment to access lower infrastructure costs, real-time threat analysis, and cloud-scalability.
-
-## Select a side-by-side approach and method
-
-Use a side-by-side architecture either as a short-term, transitional phase that leads to a completely cloud-hosted SIEM, or as a medium- to long-term operational model, depending on the SIEM needs of your organization.
-
-For example, while the recommended architecture is to use a side-by-side architecture just long enough to complete the migration, your organization may want stay with your side-by-side configuration for longer, such as if you aren't ready to move away from your legacy SIEM. Typically, organizations who use a long-term, side-by-side configuration use Microsoft Sentinel to analyze only their cloud data.
-
-Consider the pros and cons for each approach when deciding which one to use in your migration.
-
-> [!NOTE]
-> Many organizations avoid running multiple on-premises analytics solutions because of cost and complexity.
->
-> Microsoft Sentinel provides [pay-as-you-go pricing](billing.md) and flexible infrastructure, giving SOC teams time to adapt to the change. Migrate and test your content at a pace that works best for your organization.
->
-### Short-term approach
-
- :::column span="":::
- **Pros**
-
- - Gives SOC staff time to adapt to new processes as workloads and analytics migrate.
-
- - Gains deep correlation across all data sources for hunting scenarios.
-
- - Eliminates having to do analytics between SIEMs, create forwarding rules, and close investigations in two places.
-
- - Enables your SOC team to quickly downgrade legacy SIEM solutions, eliminating infrastructure and licensing costs.
- :::column-end:::
- :::column span="":::
- **Cons**
-
- - Can require a steep learning curve for SOC staff.
- :::column-end:::
-
-### Medium- to long-term approach
-
- :::column span="":::
- **Pros**
-
- - Lets you use key Microsoft Sentinel benefits, like AI, ML, and investigation capabilities, without moving completely away from your legacy SIEM.
-
- - Saves money compared to your legacy SIEM, by analyzing cloud or Microsoft data in Microsoft Sentinel.
- :::column-end:::
- :::column span="":::
- **Cons**
-
- - Increases complexity by separating analytics across different databases.
-
- - Splits case management and investigations for multi-environment incidents.
-
- - Incurs greater staff and infrastructure costs.
-
- - Requires SOC staff to be knowledgeable about two different SIEM solutions.
- :::column-end:::
---
-### Send alerts from a legacy SIEM to Microsoft Sentinel (Recommended)
-
-Send alerts, or indicators of anomalous activity, from your legacy SIEM to Microsoft Sentinel.
--- Ingest and analyze cloud data in Microsoft Sentinel-- Use your legacy SIEM to analyze on-premises data and generate alerts.-- Forward the alerts from your on-premises SIEM into Microsoft Sentinel to establish a single interface.-
-For example, forward alerts using [Logstash](connect-logstash.md), [APIs](/rest/api/securityinsights/), or [Syslog](connect-syslog.md), and store them in [JSON](https://techcommunity.microsoft.com/t5/azure-sentinel/tip-easily-use-json-fields-in-sentinel/ba-p/768747) format in your Microsoft Sentinel [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md).
-
-By sending alerts from your legacy SIEM to Microsoft Sentinel, your team can cross-correlate and investigate those alerts in Microsoft Sentinel. The team can still access the legacy SIEM for deeper investigation if needed. Meanwhile, you can continue migrating data sources over an extended transition period.
-
-This recommended, side-by-side migration method provides you with full value from Microsoft Sentinel and the ability to migrate data sources at the pace that's right for your organization. This approach avoids duplicating costs for data storage and ingestion while you move your data sources over.
-
-For more information, see:
--- [Migrate QRadar offenses to Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/migrating-qradar-offenses-to-azure-sentinel/ba-p/2102043)-- [Export data from Splunk to Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/how-to-export-data-from-splunk-to-azure-sentinel/ba-p/1891237).--
-### Send alerts and enriched incidents from Microsoft Sentinel to a legacy SIEM
-
-Analyze some data in Microsoft Sentinel, such as cloud data, and then send the generated alerts to a legacy SIEM. Use the *legacy* SIEM as your single interface to do cross-correlation with the alerts that Microsoft Sentinel generated. You can still use Microsoft Sentinel for deeper investigation of the Microsoft Sentinel-generated alerts.
-
-This configuration is cost effective, as you can move your cloud data analysis to Microsoft Sentinel without duplicating costs or paying for data twice. You still have the freedom to migrate at your own pace. As you continue to shift data sources and detections over to Microsoft Sentinel, it becomes easier to migrate to Microsoft Sentinel as your primary interface. However, simply forwarding enriched incidents to a legacy SIEM limits the value you get from Microsoft Sentinel's investigation, hunting, and automation capabilities.
-
-For more information, see:
--- [Send enriched Microsoft Sentinel alerts to your legacy SIEM](https://techcommunity.microsoft.com/t5/azure-sentinel/sending-enriched-azure-sentinel-alerts-to-3rd-party-siem-and/ba-p/1456976)-- [Send enriched Microsoft Sentinel alerts to IBM QRadar](https://techcommunity.microsoft.com/t5/azure-sentinel/azure-sentinel-side-by-side-with-qradar/ba-p/1488333)-- [Ingest Microsoft Sentinel alerts into Splunk](https://techcommunity.microsoft.com/t5/azure-sentinel/azure-sentinel-side-by-side-with-splunk/ba-p/1211266)-
-### Other methods
-
-The following table describes side-by-side configurations that are *not* recommended, with details as to why:
-
-|Method |Description |
+|Step |Article |
|||
-|**Send Microsoft Sentinel logs to your legacy SIEM** | With this method, you'll continue to experience the cost and scale challenges of your on-premises SIEM. <br><br>You'll pay for data ingestion in Microsoft Sentinel, along with storage costs in your legacy SIEM, and you can't take advantage of Microsoft Sentinel's SIEM and SOAR detections, analytics, User Entity Behavior Analytics (UEBA), AI, or investigation and automation tools. |
-|**Send logs from a legacy SIEM to Microsoft Sentinel** | While this method provides you with the full functionality of Microsoft Sentinel, your organization still pays for two different data ingestion sources. Besides adding architectural complexity, this model can result in higher costs. |
-|**Use Microsoft Sentinel and your legacy SIEM as two fully separate solutions** | You could use Microsoft Sentinel to analyze some data sources, like your cloud data, and continue to use your on-premises SIEM for other sources. This setup allows for clear boundaries for when to use each solution, and avoids duplication of costs. <br><br>However, cross-correlation becomes difficult, and you can't fully diagnose attacks that cross both sets of data sources. In today's landscape, where threats often move laterally across an organization, such visibility gaps can pose significant security risks. |
----
-## Migrate your data
-
-Make sure that you migrate only the data that represents your current key use cases.
-
-1. Determine the data that's needed to support each of your use cases.
-
-1. Determine whether your current data sources provide valuable data.
-
-1. Identify any visibility gaps in your current SIEM, and how you can close them.
-
-1. For each data source, consider whether you need to ingest raw logs, which can be costly, or whether enriched alerts provide enough context for your key use cases.
-
- For example, you can ingest enriched data from security products across the organization, and use Microsoft Sentinel to correlate across them, without having to ingest raw logs from the data sources themselves.
-
-1. Use any of the following resources to ingest data:
-
- - Use **Microsoft Sentinel's [built-in data connectors](connect-data-sources.md)** to start ingesting data. For example, you may want to start a [free trial](billing.md#free-trial) with your cloud data, or use [free data connectors](billing.md#free-data-sources) to ingest data from other Microsoft products.
-
- - Use **[Syslog](connect-data-sources.md#syslog), [Common Event Format (CEF)](connect-data-sources.md#common-event-format-cef), or [REST APIs](connect-data-sources.md#rest-api-integration)** to connect other data sources.
-
- For more information, see [Microsoft Sentinel data connectors reference](data-connectors-reference.md) and the [Microsoft Sentinel solutions catalog](sentinel-solutions-catalog.md).
-
-> [!TIP]
-> - Limiting yourself to only free data sources may limit your ability to test with data that's important to you. When testing, consider limited data ingestion from both free and paid data connectors to get the most out of your test results.
->
-> - As you migrate detections and build use cases in Microsoft Sentinel, stay mindful of the data you ingest, and verify its value to your key priorities. Revisit data collection conversations to ensure data depth and breadth across your use cases.
->
-
-## Migrate analytics rules
-
-Microsoft Sentinel uses machine learning analytics to create high-fidelity and actionable incidents, and some of your existing detections may be redundant in Microsoft Sentinel. Therefore, do not migrate all of your detection and analytics rules blindly:
--- Make sure to select use cases that justify rule migration, considering business priority and efficiency.
+|Plan your migration |**You are here** |
+|Track migration with a workbook |[Track your Microsoft Sentinel migration with a workbook](migration-track.md) |
+|Migrate from ArcSight |• [Migrate detection rules](migration-arcsight-detection-rules.md)<br>• [Migrate SOAR automation](migration-arcsight-automation.md)<br>• [Export historical data](migration-arcsight-historical-data.md) |
+|Migrate from Splunk |• [Migrate detection rules](migration-splunk-detection-rules.md)<br>• [Migrate SOAR automation](migration-splunk-automation.md)<br>• [Export historical data](migration-splunk-historical-data.md) |
+|Migrate from QRadar |• [Migrate detection rules](migration-qradar-detection-rules.md)<br>• [Migrate SOAR automation](migration-qradar-automation.md)<br>• [Export historical data](migration-qradar-historical-data.md) |
+|Ingest historical data |• [Select a target Azure platform to host the exported historical data](migration-ingestion-target-platform.md)<br>• [Select a data ingestion tool](migration-ingestion-tool.md)<br>• [Ingest historical data into your target platform](migration-export-ingest.md) |
+|Convert dashboards to workbooks |[Convert dashboards to Azure Workbooks](migration-convert-dashboards.md) |
+|Update SOC processes |[Update SOC processes](migration-security-operations-center-processes.md) |
-- Review [built-in analytics rules](detect-threats-built-in.md) that may already address your use cases. In Microsoft Sentinel, go to the **Configuration > Analytics > Rule templates** tab to create rules based on built-in templates.
+## What is Microsoft Sentinel?
-- Review any rules that haven't triggered any alerts in the past 6-12 months, and determine whether they're still relevant.
+Microsoft Sentinel is a scalable, cloud-native, security information and event management (SIEM) and security orchestration, automation, and response (SOAR) solution. Microsoft Sentinel delivers intelligent security analytics and threat intelligence across the enterprise. Microsoft Sentinel provides a single solution for attack detection, threat visibility, proactive hunting, and threat response. Learn more about [Microsoft Sentinel](overview.md).
-- Eliminate low-level threats or alerts that you routinely ignore.
+## Why migrate from a legacy SIEM?
-**To migrate your analytics rules to Microsoft Sentinel**:
+SOC teams face a set of challenges when managing a legacy SIEM:
-1. Verify that your have a testing system in place for each rule you want to migrate.
+- **Slow response to threats**. Legacy SIEMs use correlation rules, which are difficult to maintain and ineffective for identifying emerging threats. In addition, SOC analysts are faced with large amounts of false positives, many alerts from many different security components, and increasingly high volumes of logs. Analyzing this data slows down SOC teams in their efforts to respond to critical threats in the environment.
+- **Scaling challenges**. As data ingestion rates grow, SOC teams are challenged with scaling their SIEM. Instead of focusing on protecting the organization, SOC teams must invest in infrastructure setup and maintenance, and are bound by storage or query limits.
+- **Manual analysis and response**. SOC teams need highly skilled analysts to manually process large amounts of alerts. SOC teams are overworked and new analysts are hard to find.
+- **Complex and inefficient management**. SOC teams typically oversee orchestration and infrastructure, manage connections between the SIEM and various data sources, and perform updates and patches. These tasks are often at the expense of critical triage and analysis.
- 1. **Prepare a validation process** for your migrated rules, including full test scenarios and scripts.
+A cloud-native SIEM addresses these challenges. Microsoft Sentinel collects data automatically and at scale, detects unknown threats, investigates threats with artificial intelligence, and responds to incidents rapidly with built-in automation.
- 1. **Ensure that your team has useful resources** to test your migrated rules.
-
- 1. **Confirm that you have any required data sources connected,** and review your data connection methods.
-
-1. Verify whether your detections are available as built-in templates in Microsoft Sentinel:
-
- - **If the built-in rules are sufficient**, use built-in rule templates to create rules for your own workspace.
-
- In Microsoft Sentinel, go to the **Configuration > Analytics > Rule templates** tab, and create and update each relevant analytics rule.
-
- For more information, see [Detect threats out-of-the-box](detect-threats-built-in.md).
-
- - **If you have detections that aren't covered by Microsoft Sentinel's built-in rules**, try an online query converter, such as [Uncoder.io](https://uncoder.io/) to convert your queries to KQL.
-
- Identify the trigger condition and rule action, and then construct and review your KQL query.
-
- - **If neither the built-in rules nor an online rule converter is sufficient**, you'll need to create the rule manually. In such cases, use the following steps to start creating your rule:
-
- 1. **Identify the data sources you want to use in your rule**. You'll want to create a mapping table between data sources and data tables in Microsoft Sentinel to identify the tables you want to query.
-
- 1. **Identify any attributes, fields, or entities** in your data that you want to use in your rules.
-
- 1. **Identify your rule criteria and logic**. At this stage, you may want to use rule templates as samples for how to construct your KQL queries.
-
- Consider filters, correlation rules, active lists, reference sets, watchlists, detection anomalies, aggregations, and so on. You might use references provided by your legacy SIEM to understand how to best map your query syntax.
-
- For example, see:
-
- - [Sample rule mapping between ArcSight/QRadar and Microsoft Sentinel](https://github.com/Azure/Azure-Sentinel/blob/master/Tools/RuleMigration/Rule%20Logic%20Mappings.md)
- - [SPL to KQL mapping samples](https://github.com/Azure/Azure-Sentinel/blob/master/Tools/RuleMigration/Rule%20Logic%20Mappings.md)
-
- 1. **Identify the trigger condition and rule action, and then construct and review your KQL query**. When reviewing your query, consider KQL optimization guidance resources.
-
-1. Test the rule with each of your relevant use cases. If it doesn't provided expected results, you may want to review the KQL and test it again.
-
-1. When you're satisfied, you can consider the rule migrated. Create a playbook for your rule action as needed. For more information, see [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md).
-
-**For more information, see**:
--- [**Create custom analytics rules to detect threats**](detect-threats-custom.md). Use [alert grouping](detect-threats-custom.md#alert-grouping) to reduce alert fatigue by grouping alerts that occur within a given timeframe.-- [**Map data fields to entities in Microsoft Sentinel**](map-data-fields-to-entities.md) to enable SOC engineers to define entities as part of the evidence to track during an investigation. Entity mapping also makes it possible for SOC analysts to take advantage of an intuitive [investigation graph (investigate-cases.md#use-the-investigation-graph-to-deep-dive) that can help reduce time and effort.-- [**Investigate incidents with UEBA data**](investigate-with-ueba.md), as an example of how to use evidence to surface events, alerts, and any bookmarks associated with a particular incident in the incident preview pane.-- [**Kusto Query Language (KQL)**](/azure/data-explorer/kusto/query/), which you can use to send read-only requests to your [Log Analytics](../azure-monitor/logs/log-analytics-tutorial.md) database to process data and return results. KQL is also used across other Microsoft services, such as [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender) and [Application Insights](../azure-monitor/app/app-insights-overview.md).-
-## Use automation to streamline processes
+## Plan your migration
-Use automated workflows to group and prioritize alerts into a common incident, and modify its priority.
+During the planning phase, you identify your existing SIEM components and SOC processes, and you design and plan new use cases. Thorough planning allows you to maintain protection for both your cloud-based assets (Microsoft Azure, AWS, or GCP) and your SaaS solutions, such as Microsoft Office 365.
-For more information, see:
+This diagram describes the high-level phases that a typical migration includes. Each phase includes clear goals, key activities, and specified outcomes and deliverables.
-- [Security Orchestration, Automation, and Response (SOAR) in Microsoft Sentinel](automation.md).-- [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md)-- [Automate incident handling in Microsoft Sentinel with automation rules](automate-incident-handling-with-automation-rules.md)
+The phases in this diagram are a guideline for how to complete a typical migration procedure. An actual migration might skip some of these phases or include others. Rather than reviewing the full set of phases, [the articles in this guide](#migration-steps) review specific tasks and steps that are especially important to a Microsoft Sentinel migration.
-## Retire your legacy SIEM
-Use the following checklist to make sure that you're fully migrated to Microsoft Sentinel and are ready to retire your legacy SIEM:
+### Considerations
+Review these key considerations for each phase.
-|Readiness area |Details |
+|Phase |Consideration |
|||
-|**Technology readiness** | **Check critical data**: Make sure all sources and alerts are available in Microsoft Sentinel. <br><br>**Archive all records**: Save critical past incident and case records, raw data optional, to retain institutional history. |
-|**Process readiness** | **Playbooks**: Update [investigation and hunting processes](investigate-cases.md) to Microsoft Sentinel.<br><br>**Metrics**: Ensure that you can get all key metrics from Microsoft Sentinel.<br><br>**Workbooks**: Create [custom workbooks](monitor-your-data.md) or use built-in workbook templates to quickly gain insights as soon as you [connect to data sources](connect-data-sources.md).<br><br>**Incidents**: Make sure to transfer all current incidents to the new system, including required source data. |
-|**People readiness** | **SOC analysts**: Make sure everyone on your team is trained on Microsoft Sentinel and is comfortable leaving the legacy SIEM. |
+|Discover |[Identify use cases](#identify-use-cases) and [migration priorities](#identify-your-migration-priorities) as part of this phase. |
+|Design |Define a detailed design and architecture for your Microsoft Sentinel implementation. You'll use this information to get approval from the relevant stakeholders before you start the implementation phase. |
+|Implement |As you implement Microsoft Sentinel components according to the design phase, and before you convert your entire infrastructure, consider whether you can use Microsoft Sentinel out-of-the-box content instead of migrating all components. You can begin using Microsoft Sentinel gradually, starting with a minimum viable product (MVP) for several use cases. As you add more use cases, you can use this Microsoft Sentinel instance as a user acceptance testing (UAT) environment to validate the use cases. |
+|Operationalize |You [migrate your content and SOC processes](migration-security-operations-center-processes.md) to ensure that the existing analyst experience isn't disrupted. |
+
+#### Identify your migration priorities
+
+Use these questions to pin down your migration priorities:
+- What are the most critical infrastructure components, systems, apps, and data in your business?
+- Who are your stakeholders in the migration? SIEM migration is likely to touch many areas of your business.
+- What drives your priorities? For example, greatest business risk, compliance requirements, business priorities, and so on.
+- What is your migration scale and timeline? What factors affect your dates and deadlines? Are you migrating an entire legacy system?
+- Do you have the skills you need? Is your security staff trained and ready for the migration?
+- Are there any specific blockers in your organization? Do any issues affect migration planning and scheduling? For example, issues such as staffing and training requirements, license dates, hard stops, specific business needs, and so on.
+
+Before you begin migration, identify key use cases, detection rules, data, and automation in your current SIEM. Approach your migration as a gradual process: be intentional and thoughtful about what you migrate first, what you deprioritize, and what doesn't actually need to be migrated. Your team might have an overwhelming number of detections and use cases running in your current SIEM, so decide which ones are actively useful to your business before you migrate.
+
+#### Identify use cases
+
+When planning the discover phase, use the following guidance to identify your use cases.
+- Identify and analyze your current use cases by threat, operating system, product, and so on.
+- What's the scope? Do you want to migrate all use cases, or use some prioritization criteria?
+- Conduct a [Crown Jewel Analysis](https://www.mitre.org/research/technology-transfer/technology-licensing/crown-jewels-analysis).
+- What use cases are effective? A good starting place is to look at which detections have produced results within the last year (false-positive versus true-positive rate).
+- What are the business priorities that affect use case migration? What are the biggest risks to your business? What type of issues put your business most at risk?
+- Prioritize by use case characteristics.
+ - Consider setting lower and higher priorities. We recommend that you focus on detections that would yield at least a 90 percent true-positive rate on their alert feeds. Use cases that cause a high false-positive rate might be a lower priority for your business.
+ - Select use cases that justify rule migration in terms of business priority and efficacy:
+ - Review rules that haven't triggered any alerts in the last 6 to 12 months.
+ - Eliminate low-level threats or alerts you routinely ignore.
+- Prepare a validation process. Define test scenarios and build a test script.
+- Can you apply a methodology to prioritize use cases? For example, you can follow the MoSCoW method to prioritize a leaner set of use cases for migration.
## Next steps
-After migration, explore Microsoft's Microsoft Sentinel resources to expand your skills and get the most out of Microsoft Sentinel.
-
-Also consider increasing your threat protection by using Microsoft Sentinel alongside [Microsoft 365 Defender](./microsoft-365-defender-sentinel-integration.md) and [Microsoft Defender for Cloud](../security-center/azure-defender.md) for [integrated threat protection](https://www.microsoft.com/security/business/threat-protection). Benefit from the breadth of visibility that Microsoft Sentinel delivers, while diving deeper into detailed threat analysis.
-
-For more information, see:
+In this article, you learned how to plan and prepare for your migration.
-- [Rule migration best practices](https://techcommunity.microsoft.com/t5/azure-sentinel/best-practices-for-migrating-detection-rules-from-arcsight/ba-p/2216417)
-- [Webinar: Best Practices for Converting Detection Rules](https://www.youtube.com/watch?v=njXK1h9lfR4)
-- [Security Orchestration, Automation, and Response (SOAR) in Microsoft Sentinel](automation.md)
-- [Manage your SOC better with incident metrics](manage-soc-with-incident-metrics.md)
-- [Microsoft Sentinel learning path](/learn/paths/security-ops-sentinel/)
-- [SC-200 Microsoft Security Operations Analyst certification](/learn/certifications/exams/sc-200)
-- [Microsoft Sentinel Ninja training](https://techcommunity.microsoft.com/t5/azure-sentinel/become-an-azure-sentinel-ninja-the-complete-level-400-training/ba-p/1246310)
-- [Investigate an attack on a hybrid environment with Microsoft Sentinel](https://mslearn.cloudguides.com/guides/Investigate%20an%20attack%20on%20a%20hybrid%20environment%20with%20Azure%20Sentinel)
+> [!div class="nextstepaction"]
+> [Track your migration with a workbook](migration-track.md)
sentinel Playbook Triggers Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/playbook-triggers-actions.md
For the complete specification of the Microsoft Sentinel connector, see the [Log
| - | :--: | :--: | :--: |
| **[Microsoft Sentinel Reader](../role-based-access-control/built-in-roles.md#microsoft-sentinel-reader)** | &#10003; | &#10003; | &#10007; |
| **Microsoft Sentinel [Responder](../role-based-access-control/built-in-roles.md#microsoft-sentinel-responder)/[Contributor](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor)** | &#10003; | &#10003; | &#10003; |
-|
[Learn more about permissions in Microsoft Sentinel](./roles.md).
Though the Microsoft Sentinel connector can be used in a variety of ways, the co
| Trigger | Full trigger name in<br>Logic Apps Designer | When to use it | Known limitations |
| -- | -- | -- | -- |
-| **Incident trigger** | "Microsoft Sentinel incident (Preview)" | Recommended for most incident automation scenarios.<br><br>The playbook receives incident objects, including entities and alerts. Using this trigger allows the playbook to be attached to an **Automation rule**, so it can be triggered when an incident is created in Microsoft Sentinel, and all the [benefits of automation rules](./automate-incident-handling-with-automation-rules.md) can be applied to the incident. | Playbooks with this trigger do not support alert grouping, meaning they will receive only the first alert sent with each incident.
+| **Incident trigger** | "Microsoft Sentinel incident (Preview)" | Recommended for most incident automation scenarios.<br><br>The playbook receives incident objects, including entities and alerts. Using this trigger allows the playbook to be attached to an **Automation rule**, so it can be triggered when an incident is created (and now, updated as well) in Microsoft Sentinel, and all the [benefits of automation rules](./automate-incident-handling-with-automation-rules.md) can be applied to the incident. | Playbooks with this trigger do not support alert grouping, meaning they will receive only the first alert sent with each incident. |
| **Alert trigger** | "Microsoft Sentinel alert" | Advisable for playbooks that need to be run on alerts manually from the Microsoft Sentinel portal, or for **scheduled** analytics rules that don't generate incidents for their alerts. | This trigger cannot be used to automate responses for alerts generated by **Microsoft security** analytics rules.<br><br>Playbooks using this trigger cannot be called by **automation rules**. |
-|
The schemas used by these two flows are not identical. The recommended practice is to use the **Microsoft Sentinel incident trigger** flow, which is applicable to most scenarios.
service-bus-messaging Duplicate Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/duplicate-detection.md
Title: Azure Service Bus duplicate message detection | Microsoft Docs description: This article explains how you can detect duplicates in Azure Service Bus messages. The duplicate message can be ignored and dropped. Previously updated : 04/19/2021 Last updated : 05/31/2022 # Duplicate detection
-If an application fails due to a fatal error immediately after it sends a message, and the restarted application instance erroneously believes that the prior message delivery did not occur, a subsequent send causes the same message to appear in the system twice.
+If an application fails due to a fatal error immediately after it sends a message, and the restarted application instance erroneously believes that the prior message delivery didn't occur, a subsequent send causes the same message to appear in the system twice.
It's also possible for an error at the client or network level to occur a moment earlier, and for a sent message to be committed into the queue, with the acknowledgment not successfully returned to the client. This scenario leaves the client in doubt about the outcome of the send operation.
Keeping the window small means that fewer message-ids must be retained and match
## Next steps You can enable duplicate message detection using Azure portal, PowerShell, CLI, Resource Manager template, .NET, Java, Python, and JavaScript. For more information, see [Enable duplicate message detection](enable-duplicate-detection.md).
-In scenarios where client code is unable to resubmit a message with the same *MessageId* as before, it is important to design messages that can be safely reprocessed. This [blog post about idempotence](https://particular.net/blog/what-does-idempotent-mean) describes various techniques for how to do that.
+In scenarios where client code is unable to resubmit a message with the same *MessageId* as before, it's important to design messages that can be safely reprocessed. This [blog post about idempotence](https://particular.net/blog/what-does-idempotent-mean) describes various techniques for how to do that.
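As a sketch of the sender side of this pattern with the current Azure.Messaging.ServiceBus client (the connection string, queue name, and ID scheme are placeholder assumptions, and duplicate detection is assumed to be enabled on the queue):

```csharp
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<connection-string>");
ServiceBusSender sender = client.CreateSender("<queue-name>");

// Derive the MessageId deterministically from the business operation so a
// retried send within the detection window is dropped by the broker.
var message = new ServiceBusMessage("order 4711 created")
{
    MessageId = "order-4711-created"
};
await sender.SendMessageAsync(message);
await sender.SendMessageAsync(message); // duplicate: silently discarded
```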
Try the samples in the language of your choice to explore Azure Service Bus features.
service-bus-messaging Message Browsing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-browsing.md
Title: Azure Service Bus - message browsing description: Browse and peek Service Bus messages enables an Azure Service Bus client to enumerate all messages in a queue or subscription. Previously updated : 03/29/2021 Last updated : 05/31/2022 # Message browsing
The Peek operation on a queue or a subscription returns at most the requested nu
| Active messages | Yes |
| Dead-lettered messages | No |
| Locked messages | Yes |
-| Expired messages | May be (before they are dead-lettered) |
+| Expired messages | Maybe (before they're dead-lettered) |
| Scheduled messages | Yes for queues. No for subscriptions |

## Dead-lettered messages
service-bus-messaging Message Deferral https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-deferral.md
Title: Azure Service Bus - message deferral description: This article explains how to defer delivery of Azure Service Bus messages. The message remains in the queue or subscription, but it's set aside. Previously updated : 04/21/2021 Last updated : 05/31/2022 # Message deferral When a queue or subscription client receives a message that it's willing to process, but the processing isn't currently possible because of special circumstances, it has the option of "deferring" retrieval of the message to a later point. The message remains in the queue or subscription, but it's set aside. > [!NOTE]
-> Deferred messages won't be automatically moved to the dead-letter queue [after they expire](./service-bus-dead-letter-queues.md#time-to-live). This behaviour is by design.
+> Deferred messages won't be automatically moved to the dead-letter queue [after they expire](./service-bus-dead-letter-queues.md#time-to-live). This behavior is by design.
## Sample scenarios

Deferral is a feature created specifically for workflow processing scenarios. Workflow frameworks may require certain operations to be processed in a particular order. They may have to postpone processing of some received messages until prescribed prior work that's informed by other messages has been completed.
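A minimal sketch of the defer-and-retrieve flow with the Azure.Messaging.ServiceBus client, assuming an existing peek-lock `ServiceBusReceiver` named `receiver`:

```csharp
ServiceBusReceivedMessage msg = await receiver.ReceiveMessageAsync();

// Set the message aside, remembering its sequence number: deferred messages
// can only be retrieved again by sequence number.
long deferredSequenceNumber = msg.SequenceNumber;
await receiver.DeferMessageAsync(msg);

// Later, once the prescribed prior work has completed:
ServiceBusReceivedMessage deferred =
    await receiver.ReceiveDeferredMessageAsync(deferredSequenceNumber);
await receiver.CompleteMessageAsync(deferred);
```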
service-bus-messaging Message Sequencing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-sequencing.md
Title: Azure Service Bus message sequencing and timestamps | Microsoft Docs description: This article explains how to preserve sequencing and ordering (with timestamps) of Azure Service Bus messages. Previously updated : 04/14/2021 Last updated : 05/31/2022 # Message sequencing and timestamps
For those cases in which absolute order of messages is significant and/or in whi
The **SequenceNumber** value is a unique 64-bit integer assigned to a message as it is accepted and stored by the broker and functions as its internal identifier. For partitioned entities, the topmost 16 bits reflect the partition identifier. Sequence numbers roll over to zero when the 48/64-bit range is exhausted.
-The sequence number can be trusted as a unique identifier since it is assigned by a central and neutral authority and not by clients. It also represents the true order of arrival, and is more precise than a time stamp as an order criterion, because time stamps may not have a high enough resolution at extreme message rates and may be subject to (however minimal) clock skew in situations where the broker ownership transitions between nodes.
+The sequence number can be trusted as a unique identifier since it's assigned by a central and neutral authority and not by clients. It also represents the true order of arrival, and is more precise than a time stamp as an order criterion, because time stamps may not have a high enough resolution at extreme message rates and may be subject to (however minimal) clock skew in situations where the broker ownership transitions between nodes.
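As a worked illustration of that layout, a small hypothetical helper can split a sequence number from a partitioned entity into its two parts:

```csharp
// Topmost 16 bits: partition identifier; low 48 bits: per-partition sequence.
static (int Partition, long Sequence) SplitSequenceNumber(long sequenceNumber)
{
    int partition = (int)((ulong)sequenceNumber >> 48);
    long sequence = sequenceNumber & 0xFFFF_FFFF_FFFFL;
    return (partition, sequence);
}
```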
The absolute arrival order matters, for example, in business scenarios in which a limited number of offered goods are served on a first-come-first-served basis while supplies last; concert ticket sales are an example.
The time-stamping capability acts as a neutral and trustworthy authority that ac
You can submit messages to a queue or topic for delayed processing; for example, to schedule a job to become available for processing by a system at a certain time. This capability realizes a reliable distributed time-based scheduler.
-Scheduled messages do not materialize in the queue until the defined enqueue time. Before that time, scheduled messages can be canceled. Cancellation deletes the message.
+Scheduled messages don't materialize in the queue until the defined enqueue time. Before that time, scheduled messages can be canceled. Cancellation deletes the message.
You can schedule messages using any of our clients in two ways:
- Use the regular send API, but set the `ScheduledEnqueueTimeUtc` property on the message before sending.
Scheduled messages and their sequence numbers can also be discovered using [mess
The **SequenceNumber** for a scheduled message is only valid while the message is in this state. As the message transitions to the active state, the message is appended to the queue as if it had been enqueued at the current instant, which includes assigning a new **SequenceNumber**.
-Because the feature is anchored on individual messages and messages can only be enqueued once, Service Bus does not support recurring schedules for messages.
+Because the feature is anchored on individual messages and messages can only be enqueued once, Service Bus doesn't support recurring schedules for messages.
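Both scheduling approaches, and cancellation, sketched with the current Azure.Messaging.ServiceBus client, where the property is named `ScheduledEnqueueTime` (`sender` is an assumed `ServiceBusSender`; the delays are placeholders):

```csharp
// Option 1: set the scheduled enqueue time on the message, then send normally.
var delayed = new ServiceBusMessage("delayed job")
{
    ScheduledEnqueueTime = DateTimeOffset.UtcNow.AddMinutes(10)
};
await sender.SendMessageAsync(delayed);

// Option 2: use the schedule API, which returns the sequence number needed
// to cancel (that is, delete) the message before its enqueue time.
long sequenceNumber = await sender.ScheduleMessageAsync(
    new ServiceBusMessage("delayed job"),
    DateTimeOffset.UtcNow.AddMinutes(10));
await sender.CancelScheduledMessageAsync(sequenceNumber);
```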
## Next steps
service-bus-messaging Message Transfers Locks Settlement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-transfers-locks-settlement.md
Title: Azure Service Bus message transfers, locks, and settlement description: This article provides an overview of Azure Service Bus message transfers, locks, and settlement operations. Previously updated : 04/12/2021 Last updated : 05/31/2022 ms.devlang: csharp
Using any of the supported Service Bus API clients, send operations into Service
If the message is rejected by Service Bus, the rejection contains an error indicator and text with a **tracking-id** in it. The rejection also includes information about whether the operation can be retried with any expectation of success. In the client, this information is turned into an exception and raised to the caller of the send operation. If the message has been accepted, the operation silently completes.
-When using the AMQP protocol, which is the exclusive protocol for the .NET Standard, Java, JavaScript, Python, and Go clients, and [an option for the .NET Framework client](service-bus-amqp-dotnet.md), message transfers and settlements are pipelined and asynchronous. We recommend that you use the asynchronous programming model API variants.
+When you use the AMQP protocol, which is the exclusive protocol for the .NET Standard, Java, JavaScript, Python, and Go clients, and [an option for the .NET Framework client](service-bus-amqp-dotnet.md), message transfers and settlements are pipelined and asynchronous. We recommend that you use the asynchronous programming model API variants.
A sender can put several messages on the wire in rapid succession without having to wait for each message to be acknowledged, as would otherwise be the case with the SBMP protocol or with HTTP 1.1. Those asynchronous send operations complete as the respective messages are accepted and stored, on partitioned entities or when send operations to different entities overlap. The completions might also occur out of the original send order.
for (int i = 0; i < 100; i++)
If the application starts the 10 asynchronous send operations in immediate succession and awaits their respective completion separately, the round-trip time for those 10 send operations overlaps. The 10 messages are transferred in immediate succession, potentially even sharing TCP frames, and the overall transfer duration largely depends on the network-related time it takes to get the messages transferred to the broker.
-Making the same assumptions as for the prior loop, the total overlapped execution time for the following loop might stay well under one second:
+With the same assumptions as for the prior loop, the total overlapped execution time for the following loop might stay well under one second:
```csharp
var tasks = new List<Task>();
for (int i = 0; i < 100; i++)
{
    // 'sender' is an assumed asynchronous Service Bus sender; each send is
    // started without being awaited so the round trips overlap.
    tasks.Add(sender.SendMessageAsync(new ServiceBusMessage()));
}
await Task.WhenAll(tasks);
```
-It is important to note that all asynchronous programming models use some form of memory-based, hidden work queue that holds pending operations. When the send API returns, the send task is queued up in that work queue but the protocol gesture only commences once it is the task's turn to run. For code that tends to push bursts of messages and where reliability is a concern, care should be taken that not too many messages are put "in flight" at once, because all sent messages take up memory until they have factually been put onto the wire.
+It's important to note that all asynchronous programming models use some form of memory-based, hidden work queue that holds pending operations. When the send API returns, the send task is queued up in that work queue but the protocol gesture only commences once it's the task's turn to run. For code that tends to push bursts of messages and where reliability is a concern, care should be taken that not too many messages are put "in flight" at once, because all sent messages take up memory until they have factually been put onto the wire.
-Semaphores, as shown in the following code snippet in C#, are synchronization objects that enable such application-level throttling when needed. This use of a semaphore allows for at most 10 messages to be in flight at once. One of the 10 available semaphore locks is taken before the send and it is released as the send completes. The 11th pass through the loop waits until at least one of the prior sends has completed, and then makes its lock available:
+Semaphores, as shown in the following code snippet in C#, are synchronization objects that enable such application-level throttling when needed. This use of a semaphore allows for at most 10 messages to be in flight at once. One of the 10 available semaphore locks is taken before the send and it's released as the send completes. The 11th pass through the loop waits until at least one of the prior sends has completed, and then makes its lock available:
```csharp
var semaphore = new SemaphoreSlim(10);

for (int i = 0; i < 100; i++)
{
    await semaphore.WaitAsync();   // take one of the 10 in-flight slots
    // 'sender' is an assumed asynchronous Service Bus sender; the slot is
    // released when the send completes.
    _ = sender.SendMessageAsync(new ServiceBusMessage())
              .ContinueWith(t => semaphore.Release());
}
```
-With a low-level AMQP client, Service Bus also accepts "pre-settled" transfers. A pre-settled transfer is a fire-and-forget operation for which the outcome, either way, is not reported back to the client and the message is considered settled when sent. The lack of feedback to the client also means that there is no actionable data available for diagnostics, which means that this mode does not qualify for help via Azure support.
+With a low-level AMQP client, Service Bus also accepts "pre-settled" transfers. A pre-settled transfer is a fire-and-forget operation for which the outcome, either way, isn't reported back to the client and the message is considered settled when sent. The lack of feedback to the client also means that there's no actionable data available for diagnostics, which means that this mode doesn't qualify for help via Azure support.
## Settling receive operations
For receive operations, the Service Bus API clients enable two different explici
The **receive-and-delete** mode tells the broker to consider all messages it sends to the receiving client as settled when sent. That means that the message is considered consumed as soon as the broker has put it onto the wire. If the message transfer fails, the message is lost.
-The upside of this mode is that the receiver does not need to take further action on the message and is also not slowed by waiting for the outcome of the settlement. If the data contained in the individual messages have low value and/or are only meaningful for a very short time, this mode is a reasonable choice.
+The upside of this mode is that the receiver doesn't need to take further action on the message and is also not slowed by waiting for the outcome of the settlement. If the data contained in the individual messages have low value and/or are only meaningful for a very short time, this mode is a reasonable choice.
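A minimal sketch of selecting this mode with the Azure.Messaging.ServiceBus client (`client` is an assumed `ServiceBusClient`; the queue name is a placeholder):

```csharp
ServiceBusReceiver receiver = client.CreateReceiver("<queue-name>",
    new ServiceBusReceiverOptions
    {
        ReceiveMode = ServiceBusReceiveMode.ReceiveAndDelete
    });

// The message is settled as soon as the broker sends it: no Complete call
// is needed (or possible), and a failed transfer means the message is lost.
ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();
```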
### PeekLock
-The **peek-lock** mode tells the broker that the receiving client wants to settle received messages explicitly. The message is made available for the receiver to process, while held under an exclusive lock in the service so that other, competing receivers cannot see it. The duration of the lock is initially defined at the queue or subscription level and can be extended by the client owning the lock. For details about renewing locks, see the [Renew locks](#renew-locks) section in this article.
+The **peek-lock** mode tells the broker that the receiving client wants to settle received messages explicitly. The message is made available for the receiver to process, while held under an exclusive lock in the service so that other, competing receivers can't see it. The duration of the lock is initially defined at the queue or subscription level and can be extended by the client owning the lock. For details about renewing locks, see the [Renew locks](#renew-locks) section in this article.
When a message is locked, other clients receiving from the same queue or subscription can take on locks and retrieve the next available messages not under active lock. When the lock on a message is explicitly released or when the lock expires, the message pops back up at or near the front of the retrieval order for redelivery.
The receiving client initiates settlement of a received message with a positive
When the receiving client fails to process a message but wants the message to be redelivered, it can explicitly ask for the message to be released and unlocked instantly by calling the `Abandon` API for the message or it can do nothing and let the lock elapse.
-If a receiving client fails to process a message and knows that redelivering the message and retrying the operation will not help, it can reject the message, which moves it into the dead-letter queue by calling the `DeadLetter` API on the message, which also allows setting a custom property including a reason code that can be retrieved with the message from the dead-letter queue.
+If a receiving client fails to process a message and knows that redelivering the message and retrying the operation won't help, it can reject the message by calling the `DeadLetter` API, which moves it into the dead-letter queue and also allows setting custom properties, including a reason code, that can be retrieved with the message from the dead-letter queue.
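The three explicit settlement outcomes side by side, sketched with the Azure.Messaging.ServiceBus client (`receiver` is an assumed peek-lock `ServiceBusReceiver`; the empty-body test merely stands in for real validation logic):

```csharp
ServiceBusReceivedMessage msg = await receiver.ReceiveMessageAsync();
if (msg.Body.ToArray().Length == 0)
{
    // Redelivery won't help: reject the message to the dead-letter queue.
    await receiver.DeadLetterMessageAsync(msg,
        deadLetterReason: "EmptyBody",
        deadLetterErrorDescription: "Message had no payload.");
}
else
{
    try
    {
        Console.WriteLine(msg.Body.ToString());   // stand-in for real work
        await receiver.CompleteMessageAsync(msg); // positive settlement
    }
    catch
    {
        await receiver.AbandonMessageAsync(msg);  // unlock for redelivery
    }
}
```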
A special case of settlement is deferral, which is discussed in a [separate article](message-deferral.md).
-The `Complete`, `Deadletter`, or `RenewLock` operations may fail due to network issues, if the held lock has expired, or there are other service-side conditions that prevent settlement. In one of the latter cases, the service sends a negative acknowledgment that surfaces as an exception in the API clients. If the reason is a broken network connection, the lock is dropped since Service Bus does not support recovery of existing AMQP links on a different connection.
+The `Complete`, `Deadletter`, or `RenewLock` operations may fail due to network issues, if the held lock has expired, or there are other service-side conditions that prevent settlement. In one of the latter cases, the service sends a negative acknowledgment that surfaces as an exception in the API clients. If the reason is a broken network connection, the lock is dropped since Service Bus doesn't support recovery of existing AMQP links on a different connection.
-If `Complete` fails, which occurs typically at the very end of message handling and in some cases after minutes of processing work, the receiving application can decide whether it preserves the state of the work and ignores the same message when it is delivered a second time, or whether it tosses out the work result and retries as the message is redelivered.
+If `Complete` fails, which occurs typically at the very end of message handling and in some cases after minutes of processing work, the receiving application can decide whether it preserves the state of the work and ignores the same message when it's delivered a second time, or whether it tosses out the work result and retries as the message is redelivered.
-The typical mechanism for identifying duplicate message deliveries is by checking the message-id, which can and should be set by the sender to a unique value, possibly aligned with an identifier from the originating process. A job scheduler would likely set the message-id to the identifier of the job it is trying to assign to a worker with the given worker, and the worker would ignore the second occurrence of the job assignment if that job is already done.
+The typical mechanism for identifying duplicate message deliveries is by checking the message-id, which can and should be set by the sender to a unique value, possibly aligned with an identifier from the originating process. A job scheduler would likely set the message-id to the identifier of the job it's trying to assign to the given worker, and the worker would ignore the second occurrence of the job assignment if that job is already done.
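A minimal sketch of that screening step (`receiver` is an assumed peek-lock `ServiceBusReceiver`, and the in-memory set stands in for durable job-state storage):

```csharp
var completedJobs = new HashSet<string>(); // stand-in for durable storage

ServiceBusReceivedMessage job = await receiver.ReceiveMessageAsync();
if (completedJobs.Contains(job.MessageId))
{
    // Second delivery of a finished job: settle it without redoing the work.
    await receiver.CompleteMessageAsync(job);
}
else
{
    Console.WriteLine($"processing job {job.MessageId}"); // stand-in for work
    completedJobs.Add(job.MessageId);
    await receiver.CompleteMessageAsync(job);
}
```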
> [!IMPORTANT]
> It is important to note that the lock that PeekLock acquires on the message is volatile and may be lost in the following conditions
service-bus-messaging Service Bus Amqp Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-amqp-overview.md
Title: Overview of AMQP 1.0 in Azure Service Bus description: Learn how Azure Service Bus supports Advanced Message Queuing Protocol (AMQP), an open standard protocol. Previously updated : 04/08/2021 Last updated : 05/31/2022 # Advanced Message Queueing Protocol (AMQP) 1.0 support in Service Bus
service-bus-messaging Service Bus Amqp Protocol Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-amqp-protocol-guide.md
Title: AMQP 1.0 in Azure Service Bus and Event Hubs protocol guide | Microsoft Docs description: Protocol guide to expressions and description of AMQP 1.0 in Azure Service Bus and Event Hubs Previously updated : 04/14/2021 Last updated : 05/31/2022 # AMQP 1.0 in Azure Service Bus and Event Hubs protocol guide
-The Advanced Message Queueing Protocol 1.0 is a standardized framing and transfer protocol for asynchronously, securely, and reliably transferring messages between two parties. It is the primary protocol of Azure Service Bus Messaging and Azure Event Hubs.
+The Advanced Message Queueing Protocol 1.0 is a standardized framing and transfer protocol for asynchronously, securely, and reliably transferring messages between two parties. It's the primary protocol of Azure Service Bus Messaging and Azure Event Hubs.
AMQP 1.0 is the result of broad industry collaboration that brought together middleware vendors, such as Microsoft and Red Hat, with many messaging middleware users such as JP Morgan Chase representing the financial services industry. The technical standardization forum for the AMQP protocol and extension specifications is OASIS, and it has achieved formal approval as an international standard, ISO/IEC 19464:2014.
The goal is for any developer using any existing AMQP 1.0 client stack on any pl
Common general-purpose AMQP 1.0 stacks, such as [Apache Qpid Proton](https://qpid.apache.org/proton/https://docsupdatetracker.net/index.html) or [AMQP.NET Lite](https://github.com/Azure/amqpnetlite), implement all core AMQP 1.0 protocol elements like sessions or links. Those foundational elements are sometimes wrapped with a higher-level API; Apache Proton even offers two, the imperative Messenger API and the reactive Reactor API.
-In the following discussion, we assume that the management of AMQP connections, sessions, and links and the handling of frame transfers and flow control are handled by the respective stack (such as Apache Proton-C) and do not require much if any specific attention from application developers. We abstractly assume the existence of a few API primitives like the ability to connect, and to create some form of *sender* and *receiver* abstraction objects, which then have some shape of `send()` and `receive()` operations, respectively.
+In the following discussion, we assume that the management of AMQP connections, sessions, and links and the handling of frame transfers and flow control are handled by the respective stack (such as Apache Proton-C) and don't require much if any specific attention from application developers. We abstractly assume the existence of a few API primitives like the ability to connect, and to create some form of *sender* and *receiver* abstraction objects, which then have some shape of `send()` and `receive()` operations, respectively.
When discussing advanced capabilities of Azure Service Bus, such as message browsing or management of sessions, those features are explained in AMQP terms, but also as a layered pseudo-implementation on top of this assumed API abstraction.
When discussing advanced capabilities of Azure Service Bus, such as message brow
AMQP is a framing and transfer protocol. Framing means that it provides structure for binary data streams that flow in either direction of a network connection. The structure provides delineation for distinct blocks of data, called *frames*, to be exchanged between the connected parties. The transfer capabilities make sure that both communicating parties can establish a shared understanding about when frames shall be transferred, and when transfers shall be considered complete.
-Unlike earlier expired draft versions produced by the AMQP working group that are still in use by a few message brokers, the working group's final, and standardized AMQP 1.0 protocol does not prescribe the presence of a message broker or any particular topology for entities inside a message broker.
+Unlike earlier expired draft versions produced by the AMQP working group that are still in use by a few message brokers, the working group's final, and standardized AMQP 1.0 protocol doesn't prescribe the presence of a message broker or any particular topology for entities inside a message broker.
-The protocol can be used for symmetric peer-to-peer communication, for interaction with message brokers that support queues and publish/subscribe entities, as Azure Service Bus does. It can also be used for interaction with messaging infrastructure where the interaction patterns are different from regular queues, as is the case with Azure Event Hubs. An Event Hub acts like a queue when events are sent to it, but acts more like a serial storage service when events are read from it; it somewhat resembles a tape drive. The client picks an offset into the available data stream and is then served all events from that offset to the latest available.
+The protocol can be used for symmetric peer-to-peer communication, for interaction with message brokers that support queues and publish/subscribe entities, as Azure Service Bus does. It can also be used for interaction with messaging infrastructure where the interaction patterns are different from regular queues, as is the case with Azure Event Hubs. An event hub acts like a queue when events are sent to it, but acts more like a serial storage service when events are read from it; it somewhat resembles a tape drive. The client picks an offset into the available data stream and is then served all events from that offset to the latest available.
The AMQP 1.0 protocol is designed to be extensible, enabling further specifications to enhance its capabilities. The three extension specifications discussed in this document illustrate this. For communication over existing HTTPS/WebSockets infrastructure, configuring the native AMQP TCP ports may be difficult. A binding specification defines how to layer AMQP over WebSockets. For interacting with the messaging infrastructure in a request/response fashion for management purposes or to provide advanced functionality, the AMQP management specification defines the required basic interaction primitives. For federated authorization model integration, the AMQP claims-based-security specification defines how to associate and renew authorization tokens associated with links.
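For example, the WebSockets binding can be selected in the current Azure.Messaging.ServiceBus client through a transport option, tunneling AMQP over port 443 (a sketch; the connection string is a placeholder):

```csharp
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient(
    "<connection-string>",
    new ServiceBusClientOptions
    {
        TransportType = ServiceBusTransportType.AmqpWebSockets
    });
```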
AMQP calls the communicating programs *containers*; those contain *nodes*, which
![Diagram showing Sessions and Connections between containers.][1]
-The network connection is thus anchored on the container. It is initiated by the container in the client role making an outbound TCP socket connection to a container in the receiver role, which listens for and accepts inbound TCP connections. The connection handshake includes negotiating the protocol version, declaring or negotiating the use of Transport Level Security (TLS/SSL), and an authentication/authorization handshake at the connection scope that is based on SASL.
+The network connection is thus anchored on the container. It's initiated by the container in the client role making an outbound TCP socket connection to a container in the receiver role, which listens for and accepts inbound TCP connections. The connection handshake includes negotiating the protocol version, declaring or negotiating the use of Transport Level Security (TLS/SSL), and an authentication/authorization handshake at the connection scope that is based on SASL.
Azure Service Bus or Azure Event Hubs requires the use of TLS at all times. It supports connections over TCP port 5671, whereby the TCP connection is first overlaid with TLS before entering the AMQP protocol handshake, and also supports connections over TCP port 5672 whereby the server immediately offers a mandatory upgrade of connection to TLS using the AMQP-prescribed model. The AMQP WebSockets binding creates a tunnel over TCP port 443 that is then equivalent to AMQP 5671 connections. After setting up the connection and TLS, Service Bus offers two SASL mechanism options:
-* SASL PLAIN is commonly used for passing username and password credentials to a server. Service Bus does not have accounts, but named [Shared Access Security rules](service-bus-sas.md), which confer rights and are associated with a key. The name of a rule is used as the user name and the key (as base64 encoded text) is used as the password. The rights associated with the chosen rule govern the operations allowed on the connection.
+* SASL PLAIN is commonly used for passing username and password credentials to a server. Service Bus doesn't have accounts, but named [Shared Access Security rules](service-bus-sas.md), which confer rights and are associated with a key. The name of a rule is used as the user name and the key (as base64 encoded text) is used as the password. The rights associated with the chosen rule govern the operations allowed on the connection. A minimal connection sketch follows this list.
* SASL ANONYMOUS is used for bypassing SASL authorization when the client wants to use the claims-based-security (CBS) model that is described later. With this option, a client connection can be established anonymously for a short time during which the client can only interact with the CBS endpoint and the CBS handshake must complete.
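A minimal AMQP.NET Lite sketch of the SASL PLAIN option, passing the SAS rule name as the user name and its key as the password (all values are placeholders; the `amqps` scheme on port 5671 implies TLS):

```csharp
using Amqp;

var address = new Address(
    "<namespace>.servicebus.windows.net", 5671,
    "RootManageSharedAccessKey", "<base64-key>");
var connection = new Connection(address);
var session = new Session(connection);

// The AMQP node name of a queue is simply the queue name.
var sender = new SenderLink(session, "sender-link", "myqueue");
sender.Send(new Message("hello"));
connection.Close();
```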
-After the transport connection is established, the containers each declare the maximum frame size they are willing to handle, and after an idle timeout they'll unilaterally disconnect if there is no activity on the connection.
+After the transport connection is established, the containers each declare the maximum frame size they're willing to handle, and after an idle timeout they'll unilaterally disconnect if there's no activity on the connection.
They also declare how many concurrent channels are supported. A channel is a unidirectional, outbound, virtual transfer path on top of the connection. A session takes a channel from each of the interconnected containers to form a bi-directional communication path.
-Sessions have a window-based flow control model; when a session is created, each party declares how many frames it is willing to accept into its receive window. As the parties exchange frames, transferred frames fill that window and transfers stop when the window is full and until the window gets reset or expanded using the *flow performative* (*performative* is the AMQP term for protocol-level gestures exchanged between the two parties).
+Sessions have a window-based flow control model; when a session is created, each party declares how many frames it's willing to accept into its receive window. As the parties exchange frames, transferred frames fill that window and transfers stop when the window is full and until the window gets reset or expanded using the *flow performative* (*performative* is the AMQP term for protocol-level gestures exchanged between the two parties).
This window-based model is roughly analogous to the TCP concept of window-based flow control, but at the session level inside the socket. The protocol's concept of allowing for multiple concurrent sessions exists so that high priority traffic could be rushed past throttled normal traffic, like on a highway express lane.
-Azure Service Bus currently uses exactly one session for each connection. The Service Bus maximum frame-size is 262,144 bytes (256-K bytes) for Service Bus Standard. It is 1048576 (100 MB) for Service Bus Premium and Event Hubs. Service Bus does not impose any particular session-level throttling windows, but resets the window regularly as part of link-level flow control (see [the next section](#links)).
+Azure Service Bus currently uses exactly one session for each connection. The Service Bus maximum frame size is 262,144 bytes (256 KB) for Service Bus Standard. It's 1,048,576 bytes (1 MB) for Service Bus Premium and Event Hubs. Service Bus doesn't impose any particular session-level throttling windows, but resets the window regularly as part of link-level flow control (see [the next section](#links)).
Connections, channels, and sessions are ephemeral. If the underlying connection collapses, connections, TLS tunnel, SASL authorization context, and sessions must be reestablished.
Links are named and associated with nodes. As stated in the beginning, nodes are
In Service Bus, a node is directly equivalent to a queue, a topic, a subscription, or a deadletter subqueue of a queue or subscription. The node name used in AMQP is therefore the relative name of the entity inside of the Service Bus namespace. If a queue is named `myqueue`, that's also its AMQP node name. A topic subscription follows the HTTP API convention by being sorted into a "subscriptions" resource collection and thus, a subscription **sub** on a topic **mytopic** has the AMQP node name **mytopic/subscriptions/sub**.
-The connecting client is also required to use a local node name for creating links; Service Bus is not prescriptive about those node names and does not interpret them. AMQP 1.0 client stacks generally use a scheme to assure that these ephemeral node names are unique in the scope of the client.
+The connecting client is also required to use a local node name for creating links; Service Bus isn't prescriptive about those node names and doesn't interpret them. AMQP 1.0 client stacks generally use a scheme to assure that these ephemeral node names are unique in the scope of the client.
### Transfers
Once a link has been established, messages can be transferred over that link. In
![A diagram showing a message's transfer between the Sender and Receiver and disposition that results from it.][3]
-In the simplest case, the sender can choose to send messages "pre-settled," meaning that the client isn't interested in the outcome and the receiver does not provide any feedback about the outcome of the operation. This mode is supported by Service Bus at the AMQP protocol level, but not exposed in any of the client APIs.
+In the simplest case, the sender can choose to send messages "pre-settled," meaning that the client isn't interested in the outcome and the receiver doesn't provide any feedback about the outcome of the operation. This mode is supported by Service Bus at the AMQP protocol level, but not exposed in any of the client APIs.
-The regular case is that messages are being sent unsettled, and the receiver then indicates acceptance or rejection using the *disposition* performative. Rejection occurs when the receiver cannot accept the message for any reason, and the rejection message contains information about the reason, which is an error structure defined by AMQP. If messages are rejected due to internal errors inside of Service Bus, the service returns extra information inside that structure that can be used for providing diagnostics hints to support personnel if you are filing support requests. You learn more details about errors later.
+The regular case is that messages are being sent unsettled, and the receiver then indicates acceptance or rejection using the *disposition* performative. Rejection occurs when the receiver can't accept the message for any reason, and the rejection message contains information about the reason, which is an error structure defined by AMQP. If messages are rejected due to internal errors inside of Service Bus, the service returns extra information inside that structure that can be used for providing diagnostics hints to support personnel if you're filing support requests. You learn more details about errors later.
-A special form of rejection is the *released* state, which indicates that the receiver has no technical objection to the transfer, but also no interest in settling the transfer. That case exists, for example, when a message is delivered to a Service Bus client, and the client chooses to "abandon" the message because it cannot perform the work resulting from processing the message; the message delivery itself is not at fault. A variation of that state is the *modified* state, which allows changes to the message as it is released. That state is not used by Service Bus at present.
+A special form of rejection is the *released* state, which indicates that the receiver has no technical objection to the transfer, but also no interest in settling the transfer. That case exists, for example, when a message is delivered to a Service Bus client, and the client chooses to "abandon" the message because it can't perform the work resulting from processing the message; the message delivery itself isn't at fault. A variation of that state is the *modified* state, which allows changes to the message as it is released. That state isn't used by Service Bus at present.
The AMQP 1.0 specification defines a further disposition state called *received*, that specifically helps to handle link recovery. Link recovery allows reconstituting the state of a link and any pending deliveries on top of a new connection and session, when the prior connection and session were lost. Service Bus does not support link recovery; if the client loses the connection to Service Bus with an unsettled message transfer pending, that message transfer is lost, and the client must reconnect, reestablish the link, and retry the transfer.
-As such, Service Bus and Event Hubs support "at least once" transfer where the sender can be assured for the message having been stored and accepted, but do not support "exactly once" transfers at the AMQP level, where the system would attempt to recover the link and continue to negotiate the delivery state to avoid duplication of the message transfer.
+As such, Service Bus and Event Hubs support "at least once" transfer where the sender can be assured for the message having been stored and accepted, but don't support "exactly once" transfers at the AMQP level, where the system would attempt to recover the link and continue to negotiate the delivery state to avoid duplication of the message transfer.
To compensate for possible duplicate sends, Service Bus supports duplicate detection as an optional feature on queues and topics. Duplicate detection records the message IDs of all incoming messages during a user-defined time window, then silently drops all messages sent with the same message-IDs during that same window.
When Service Bus is in the receiver role, it instantly provides the sender with
In the sender role, Service Bus sends messages to use up any outstanding link credit.
-A "receive" call at the API level translates into a *flow* performative being sent to Service Bus by the client, and Service Bus consumes that credit by taking the first available, unlocked message from the queue, locking it, and transferring it. If there is no message readily available for delivery, any outstanding credit by any link established with that particular entity remains recorded in order of arrival, and messages are locked and transferred as they become available, to use any outstanding credit.
+A "receive" call at the API level translates into a *flow* performative being sent to Service Bus by the client, and Service Bus consumes that credit by taking the first available, unlocked message from the queue, locking it, and transferring it. If there's no message readily available for delivery, any outstanding credit by any link established with that particular entity remains recorded in order of arrival, and messages are locked and transferred as they become available, to use any outstanding credit.
The lock on a message is released when the transfer is settled into one of the terminal states *accepted*, *rejected*, or *released*. The message is removed from Service Bus when the terminal state is *accepted*. It remains in Service Bus and is delivered to the next receiver when the transfer reaches any of the other states. Service Bus automatically moves the message into the entity's deadletter queue when it reaches the maximum delivery count allowed for the entity due to repeated rejections or releases.
-Even though the Service Bus APIs do not directly expose such an option today, a lower-level AMQP protocol client can use the link-credit model to turn the "pull-style" interaction of issuing one unit of credit for each receive request into a "push-style" model by issuing a large number of link credits and then receive messages as they become available without any further interaction. Push is supported through the [MessagingFactory.PrefetchCount](/dotnet/api/microsoft.servicebus.messaging.messagingfactory) or [MessageReceiver.PrefetchCount](/dotnet/api/microsoft.servicebus.messaging.messagereceiver) property settings. When they are non-zero, the AMQP client uses it as the link credit.
+Even though the Service Bus APIs don't directly expose such an option today, a lower-level AMQP protocol client can use the link-credit model to turn the "pull-style" interaction of issuing one unit of credit for each receive request into a "push-style" model by issuing a large number of link credits and then receive messages as they become available without any further interaction. Push is supported through the [MessagingFactory.PrefetchCount](/dotnet/api/microsoft.servicebus.messaging.messagingfactory) or [MessageReceiver.PrefetchCount](/dotnet/api/microsoft.servicebus.messaging.messagereceiver) property settings. When they're non-zero, the AMQP client uses it as the link credit.
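In the current Azure.Messaging.ServiceBus client, the equivalent knob is the `PrefetchCount` receiver option; a sketch (`client` is an assumed `ServiceBusClient`, and the queue name is a placeholder):

```csharp
ServiceBusReceiver receiver = client.CreateReceiver("<queue-name>",
    new ServiceBusReceiverOptions { PrefetchCount = 100 });

// The prefetch count is issued as AMQP link credit: messages are pushed into
// a local buffer as they become available, and receive calls are served from
// that buffer without a network round trip.
ServiceBusReceivedMessage msg = await receiver.ReceiveMessageAsync();
```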
-In this context, it's important to understand that the clock for the expiration of the lock on the message inside the entity starts when the message is taken from the entity, not when the message is put on the wire. Whenever the client indicates readiness to receive messages by issuing link credit, it is therefore expected to be actively pulling messages across the network and be ready to handle them. Otherwise the message lock may have expired before the message is even delivered. The use of link-credit flow control should directly reflect the immediate readiness to deal with available messages dispatched to the receiver.
+In this context, it's important to understand that the clock for the expiration of the lock on the message inside the entity starts when the message is taken from the entity, not when the message is put on the wire. Whenever the client indicates readiness to receive messages by issuing link credit, it's therefore expected to be actively pulling messages across the network and be ready to handle them. Otherwise the message lock may have expired before the message is even delivered. The use of link-credit flow control should directly reflect the immediate readiness to deal with available messages dispatched to the receiver.
In summary, the following sections provide a schematic overview of the performative flow during different API interactions. Each section describes a different logical operation. Some of those interactions may be "lazy," meaning they may only be performed when required. Creating a message sender may not cause a network interaction until the first message is sent or requested.
The arrows in the following table show the performative flow direction.
| Client | Service Bus |
| --- | --- |
-| --> attach(<br/>name={link name},<br/>handle={numeric handle},<br/>role=**receiver**,<br/>source={entity name},<br/>target={client link ID}<br/>) |Client attaches to entity as receiver |
-| Service Bus replies attaching its end of the link |<-- attach(<br/>name={link name},<br/>handle={numeric handle},<br/>role=**sender**,<br/>source={entity name},<br/>target={client link ID}<br/>) |
+| `--> attach(<br/>name={link name},<br/>handle={numeric handle},<br/>role=**receiver**,<br/>source={entity name},<br/>target={client link ID}<br/>)` |Client attaches to entity as receiver |
+| Service Bus replies attaching its end of the link |`<-- attach(<br/>name={link name},<br/>handle={numeric handle},<br/>role=**sender**,<br/>source={entity name},<br/>target={client link ID}<br/>)` |
#### Create message sender

| Client | Service Bus |
| --- | --- |
-| --> attach(<br/>name={link name},<br/>handle={numeric handle},<br/>role=**sender**,<br/>source={client link ID},<br/>target={entity name}<br/>) |No action |
-| No action |<-- attach(<br/>name={link name},<br/>handle={numeric handle},<br/>role=**receiver**,<br/>source={client link ID},<br/>target={entity name}<br/>) |
+| `--> attach(<br/>name={link name},<br/>handle={numeric handle},<br/>role=**sender**,<br/>source={client link ID},<br/>target={entity name}<br/>)` |No action |
+| No action |`<-- attach(<br/>name={link name},<br/>handle={numeric handle},<br/>role=**receiver**,<br/>source={client link ID},<br/>target={entity name}<br/>)` |
#### Create message sender (error)

| Client | Service Bus |
| --- | --- |
-| --> attach(<br/>name={link name},<br/>handle={numeric handle},<br/>role=**sender**,<br/>source={client link ID},<br/>target={entity name}<br/>) |No action |
-| No action |<-- attach(<br/>name={link name},<br/>handle={numeric handle},<br/>role=**receiver**,<br/>source=null,<br/>target=null<br/>)<br/><br/><-- detach(<br/>handle={numeric handle},<br/>closed=**true**,<br/>error={error info}<br/>) |
+| `--> attach(<br/>name={link name},<br/>handle={numeric handle},<br/>role=**sender**,<br/>source={client link ID},<br/>target={entity name}<br/>)` |No action |
+| No action |`<-- attach(<br/>name={link name},<br/>handle={numeric handle},<br/>role=**receiver**,<br/>source=null,<br/>target=null<br/>)<br/><br/><-- detach(<br/>handle={numeric handle},<br/>closed=**true**,<br/>error={error info}<br/>)` |
#### Close message receiver/sender

| Client | Service Bus |
| --- | --- |
-| --> detach(<br/>handle={numeric handle},<br/>closed=**true**<br/>) |No action |
-| No action |<-- detach(<br/>handle={numeric handle},<br/>closed=**true**<br/>) |
+| `--> detach(<br/>handle={numeric handle},<br/>closed=**true**<br/>)` |No action |
+| No action |`<-- detach(<br/>handle={numeric handle},<br/>closed=**true**<br/>)` |
#### Send (success)

| Client | Service Bus |
| --- | --- |
-| --> transfer(<br/>delivery-id={numeric handle},<br/>delivery-tag={binary handle},<br/>settled=**false**,,more=**false**,<br/>state=**null**,<br/>resume=**false**<br/>) |No action |
-| No action |<-- disposition(<br/>role=receiver,<br/>first={delivery ID},<br/>last={delivery ID},<br/>settled=**true**,<br/>state=**accepted**<br/>) |
+| `--> transfer(<br/>delivery-id={numeric handle},<br/>delivery-tag={binary handle},<br/>settled=**false**,<br/>more=**false**,<br/>state=**null**,<br/>resume=**false**<br/>)` |No action |
+| No action |`<-- disposition(<br/>role=receiver,<br/>first={delivery ID},<br/>last={delivery ID},<br/>settled=**true**,<br/>state=**accepted**<br/>)` |
#### Send (error)

| Client | Service Bus |
| --- | --- |
-| --> transfer(<br/>delivery-id={numeric handle},<br/>delivery-tag={binary handle},<br/>settled=**false**,,more=**false**,<br/>state=**null**,<br/>resume=**false**<br/>) |No action |
-| No action |<-- disposition(<br/>role=receiver,<br/>first={delivery ID},<br/>last={delivery ID},<br/>settled=**true**,<br/>state=**rejected**(<br/>error={error info}<br/>)<br/>) |
+| `--> transfer(<br/>delivery-id={numeric handle},<br/>delivery-tag={binary handle},<br/>settled=**false**,<br/>more=**false**,<br/>state=**null**,<br/>resume=**false**<br/>)` |No action |
+| No action |`<-- disposition(<br/>role=receiver,<br/>first={delivery ID},<br/>last={delivery ID},<br/>settled=**true**,<br/>state=**rejected**(<br/>error={error info}<br/>)<br/>)` |
#### Receive

| Client | Service Bus |
| --- | --- |
-| --> flow(<br/>link-credit=1<br/>) |No action |
-| No action |< transfer(<br/>delivery-id={numeric handle},<br/>delivery-tag={binary handle},<br/>settled=**false**,<br/>more=**false**,<br/>state=**null**,<br/>resume=**false**<br/>) |
-| --> disposition(<br/>role=**receiver**,<br/>first={delivery ID},<br/>last={delivery ID},<br/>settled=**true**,<br/>state=**accepted**<br/>) |No action |
+| `--> flow(<br/>link-credit=1<br/>)` |No action |
+| No action |`< transfer(<br/>delivery-id={numeric handle},<br/>delivery-tag={binary handle},<br/>settled=**false**,<br/>more=**false**,<br/>state=**null**,<br/>resume=**false**<br/>)` |
+| `--> disposition(<br/>role=**receiver**,<br/>first={delivery ID},<br/>last={delivery ID},<br/>settled=**true**,<br/>state=**accepted**<br/>)` |No action |
#### Multi-message receive

| Client | Service Bus |
| --- | --- |
-| --> flow(<br/>link-credit=3<br/>) |No action |
-| No action |< transfer(<br/>delivery-id={numeric handle},<br/>delivery-tag={binary handle},<br/>settled=**false**,<br/>more=**false**,<br/>state=**null**,<br/>resume=**false**<br/>) |
-| No action |< transfer(<br/>delivery-id={numeric handle+1},<br/>delivery-tag={binary handle},<br/>settled=**false**,<br/>more=**false**,<br/>state=**null**,<br/>resume=**false**<br/>) |
-| No action |< transfer(<br/>delivery-id={numeric handle+2},<br/>delivery-tag={binary handle},<br/>settled=**false**,<br/>more=**false**,<br/>state=**null**,<br/>resume=**false**<br/>) |
-| --> disposition(<br/>role=receiver,<br/>first={delivery ID},<br/>last={delivery ID+2},<br/>settled=**true**,<br/>state=**accepted**<br/>) |No action |
+| `--> flow(<br/>link-credit=3<br/>)` |No action |
+| No action |`< transfer(<br/>delivery-id={numeric handle},<br/>delivery-tag={binary handle},<br/>settled=**false**,<br/>more=**false**,<br/>state=**null**,<br/>resume=**false**<br/>)` |
+| No action |`< transfer(<br/>delivery-id={numeric handle+1},<br/>delivery-tag={binary handle},<br/>settled=**false**,<br/>more=**false**,<br/>state=**null**,<br/>resume=**false**<br/>)` |
+| No action |`< transfer(<br/>delivery-id={numeric handle+2},<br/>delivery-tag={binary handle},<br/>settled=**false**,<br/>more=**false**,<br/>state=**null**,<br/>resume=**false**<br/>)` |
+| `--> disposition(<br/>role=receiver,<br/>first={delivery ID},<br/>last={delivery ID+2},<br/>settled=**true**,<br/>state=**accepted**<br/>)` |No action |
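As a rough illustration at the SDK level, the following Python sketch (using the `azure-servicebus` package; the connection string and queue name are hypothetical placeholders) asks for up to three messages, which the client library translates into the credit and disposition frames shown above:

```python
from azure.servicebus import ServiceBusClient

conn_str = "<service-bus-connection-string>"  # hypothetical placeholder

with ServiceBusClient.from_connection_string(conn_str) as client:
    with client.get_queue_receiver("myqueue") as receiver:
        # Asking for up to three messages issues link credit on the AMQP link.
        messages = receiver.receive_messages(max_message_count=3, max_wait_time=5)
        for message in messages:
            # Completing a message sends a disposition with state=accepted.
            receiver.complete_message(message)
```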
### Messages The following sections explain which properties from the standard AMQP message sections are used by Service Bus and how they map to the Service Bus API set.
-Any property that application needs to defines should be mapped to AMQP's `application-properties` map.
+Any property that the application needs to define should be mapped to AMQP's `application-properties` map.
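As a minimal sketch of that mapping (assuming the `azure-servicebus` Python package, with hypothetical connection string, queue, and property names), custom properties set on a message travel in the AMQP `application-properties` section:

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

conn_str = "<service-bus-connection-string>"  # hypothetical placeholder

message = ServiceBusMessage("order payload")
# Custom key/value pairs land in the AMQP application-properties map.
message.application_properties = {"order-type": "priority", "region": "west-us"}

with ServiceBusClient.from_connection_string(conn_str) as client:
    with client.get_queue_sender("myqueue") as sender:
        sender.send_messages(message)
```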
#### header
Any property that application needs to defines should be mapped to AMQP's `appli
#### Message annotations
-There are few other service bus message properties, which are not part of AMQP message properties, and are passed along as `MessageAnnotations` on the message.
+There are a few other Service Bus message properties that aren't part of the AMQP message properties; they're passed along as `MessageAnnotations` on the message.
| Annotation Map Key | Usage | API name | | | | |
There are few other service bus message properties, which are not part of AMQP m
| x-opt-sequence-number | Service-defined unique number assigned to a message. | [SequenceNumber](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.sequencenumber) | | x-opt-offset | Service-defined enqueued sequence number of the message. | [EnqueuedSequenceNumber](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.enqueuedsequencenumber) | | x-opt-locked-until | Service-defined. The date and time until which the message will be locked in the queue/subscription. | [LockedUntilUtc](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.lockeduntilutc) |
-| x-opt-deadletter-source | Service-Defined. If the message is received from dead letter queue, the source of the original message. | [DeadLetterSource](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.deadlettersource) |
+| x-opt-deadletter-source | Service-defined. If the message is received from the dead-letter queue, it represents the source of the original message. | [DeadLetterSource](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.deadlettersource) |
### Transaction capability
The operations are grouped by an identifier `txn-id`.
For transactional interaction, the client acts as a `transaction controller`, which controls the operations that should be grouped together. The Service Bus service acts as a `transactional resource` and performs work as requested by the `transaction controller`.
-The client and service communicate over a `control link` , which is established by the client. The `declare` and `discharge` messages are sent by the controller over the control link to allocate and complete transactions respectively (they do not represent the demarcation of transactional work). The actual send/receive is not performed on this link. Each transactional operation requested is explicitly identified with the desired `txn-id` and therefore may occur on any link on the Connection. If the control link is closed while there exist non-discharged transactions it created, then all such transactions are immediately rolled back, and attempts to perform further transactional work on them will lead to failure. Messages on control link must not be pre settled.
+The client and service communicate over a `control link`, which is established by the client. The `declare` and `discharge` messages are sent by the controller over the control link to allocate and complete transactions respectively (they don't represent the demarcation of transactional work). The actual send/receive isn't performed on this link. Each transactional operation requested is explicitly identified with the desired `txn-id` and therefore may occur on any link on the connection. If the control link is closed while there exist non-discharged transactions it created, then all such transactions are immediately rolled back, and attempts to perform further transactional work on them will fail. Messages on the control link must not be pre-settled.
Every connection has to initiate its own control link to be able to start and end transactions. The service defines a special target that functions as a `coordinator`. The client/controller establishes a control link to this target. The control link is outside the boundary of an entity; that is, the same control link can be used to initiate and discharge transactions for multiple entities.
This section covers advanced capabilities of Azure Service Bus that are based on
### AMQP management
-The AMQP management specification is the first of the draft extensions discussed in this article. This specification defines a set of protocols layered on top of the AMQP protocol that allow management interactions with the messaging infrastructure over AMQP. The specification defines generic operations such as *create*, *read*, *update*, and *delete* for managing entities inside a messaging infrastructure and a set of query operations.
+The AMQP management specification is the first of the draft extensions discussed in this article. This specification defines a set of protocols layered on top of the AMQP protocol that allows management interactions with the messaging infrastructure over AMQP. The specification defines generic operations such as *create*, *read*, *update*, and *delete* for managing entities inside a messaging infrastructure and a set of query operations.
All those gestures require a request/response interaction between the client and the messaging infrastructure, and therefore the specification defines how to model that interaction pattern on top of AMQP: the client connects to the messaging infrastructure, initiates a session, and then creates a pair of links. On one link, the client acts as sender and on the other it acts as receiver, thus creating a pair of links that can act as a bi-directional channel. | Logical Operation | Client | Service Bus | | | | |
-| Create Request Response Path |--> attach(<br/>name={*link name*},<br/>handle={*numeric handle*},<br/>role=**sender**,<br/>source=**null**,<br/>target="myentity/$management"<br/>) |No action |
-| Create Request Response Path |No action |\<-- attach(<br/>name={*link name*},<br/>handle={*numeric handle*},<br/>role=**receiver**,<br/>source=null,<br/>target="myentity"<br/>) |
-| Create Request Response Path |--> attach(<br/>name={*link name*},<br/>handle={*numeric handle*},<br/>role=**receiver**,<br/>source="myentity/$management",<br/>target="myclient$id"<br/>) | |
-| Create Request Response Path |No action |\<-- attach(<br/>name={*link name*},<br/>handle={*numeric handle*},<br/>role=**sender**,<br/>source="myentity",<br/>target="myclient$id"<br/>) |
+| Create Request Response Path |`--> attach(<br/>name={*link name*},<br/>handle={*numeric handle*},<br/>role=**sender**,<br/>source=**null**,<br/>target="myentity/$management"<br/>)` |No action |
+| Create Request Response Path |No action |`\<-- attach(<br/>name={*link name*},<br/>handle={*numeric handle*},<br/>role=**receiver**,<br/>source=null,<br/>target="myentity"<br/>)` |
+| Create Request Response Path |`--> attach(<br/>name={*link name*},<br/>handle={*numeric handle*},<br/>role=**receiver**,<br/>source="myentity/$management",<br/>target="myclient$id"<br/>)` | |
+| Create Request Response Path |No action |`\<-- attach(<br/>name={*link name*},<br/>handle={*numeric handle*},<br/>role=**sender**,<br/>source="myentity",<br/>target="myclient$id"<br/>)` |
With that pair of links in place, the request/response implementation is straightforward: a request is a message sent to an entity inside the messaging infrastructure that understands this pattern. In that request message, the *reply-to* field in the *properties* section is set to the *target* identifier for the link onto which to deliver the response. The handling entity processes the request, and then delivers the reply over the link whose *target* identifier matches the indicated *reply-to* identifier.
The request message has the following application properties:
| Key | Optional | Value Type | Value Contents | | | | | |
-| operation |No |string |**put-token** |
-| type |No |string |The type of the token being put. |
-| name |No |string |The "audience" to which the token applies. |
-| expiration |Yes |timestamp |The expiry time of the token. |
+| `operation` |No |string |**put-token** |
+| `type` |No |string |The type of the token being put. |
+| `name` |No |string |The "audience" to which the token applies. |
+| `expiration` |Yes |timestamp |The expiry time of the token. |
The *name* property identifies the entity with which the token shall be associated. In Service Bus it's the path to the queue, or topic/subscription. The *type* property identifies the token type: | Token Type | Token Description | Body Type | Notes | | | | | |
-| jwt |JSON Web Token (JWT) |AMQP Value (string) | |
-| servicebus.windows.net:sastoken |Service Bus SAS Token |AMQP Value (string) |- |
+| `jwt` |JSON Web Token (JWT) |AMQP Value (string) | |
+| `servicebus.windows.net:sastoken` |Service Bus SAS Token |AMQP Value (string) |- |
Tokens confer rights. Service Bus knows about three fundamental rights: "Send" enables sending, "Listen" enables receiving, and "Manage" enables manipulating entities. Service Bus SAS tokens refer to rules configured on the namespace or entity, and those rules are configured with rights. Signing the token with the key associated with that rule thus makes the token express the respective rights. The token associated with an entity using *put-token* permits the connected client to interact with the entity per the token rights. A link where the client takes on the *sender* role requires the "Send" right; taking on the *receiver* role requires the "Listen" right.
The reply message has the following *application-properties* values
| Key | Optional | Value Type | Value Contents | | | | | |
-| status-code |No |int |HTTP response code **[RFC2616]**. |
-| status-description |Yes |string |Description of the status. |
+| `status-code` |No |int |HTTP response code **[RFC2616]**. |
+| `status-description` |Yes |string |Description of the status. |
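For illustration only, the following sketch shows the application-properties a client might place on a *put-token* request, and the shape of a successful reply. All values here are hypothetical and merely mirror the two tables above:

```python
from datetime import datetime, timedelta, timezone

# Application properties of a hypothetical put-token request message.
put_token_request = {
    "operation": "put-token",
    "type": "servicebus.windows.net:sastoken",
    # The "audience": the entity path the token applies to.
    "name": "sb://contoso.servicebus.windows.net/myqueue",
    "expiration": datetime.now(timezone.utc) + timedelta(hours=1),
}

# Application properties of a hypothetical successful reply.
put_token_reply = {
    "status-code": 202,
    "status-description": "Accepted",
}
```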
The client can call *put-token* repeatedly and for any entity in the messaging infrastructure. The tokens are scoped to the current client and anchored on the current connection, meaning the server drops any retained tokens when the connection drops.
With this functionality, you create a sender and establish the link to the `via-
| Client | Direction | Service Bus | | : | :: | : |
-| attach(<br/>name={link name},<br/>role=sender,<br/>source={client link ID},<br/>target=**{via-entity}**,<br/>**properties=map [(<br/>com.microsoft:transfer-destination-address=<br/>{destination-entity} )]** ) | > | |
-| | < | attach(<br/>name={link name},<br/>role=receiver,<br/>source={client link ID},<br/>target={via-entity},<br/>properties=map [(<br/>com.microsoft:transfer-destination-address=<br/>{destination-entity} )] ) |
+| `attach(<br/>name={link name},<br/>role=sender,<br/>source={client link ID},<br/>target=**{via-entity}**,<br/>**properties=map [(<br/>com.microsoft:transfer-destination-address=<br/>{destination-entity} )]** )` | > | |
+| | < | `attach(<br/>name={link name},<br/>role=receiver,<br/>source={client link ID},<br/>target={via-entity},<br/>properties=map [(<br/>com.microsoft:transfer-destination-address=<br/>{destination-entity} )] )` |
## Next steps To learn more about AMQP, see [Service Bus AMQP overview](service-bus-amqp-overview.md).
service-bus-messaging Service Bus Azure And Service Bus Queues Compared Contrasted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-azure-and-service-bus-queues-compared-contrasted.md
This section compares Storage queues and Service Bus queues from the perspective
| | | | | Maximum queue size |500 TB<br/><br/>(limited to a [single storage account capacity](../storage/common/storage-introduction.md#queue-storage)) |1 GB to 80 GB<br/><br/>(defined upon creation of a queue and [enabling partitioning](service-bus-partitioning.md) – see the "Additional Information" section) | | Maximum message size |64 KB<br/><br/>(48 KB when using Base64 encoding)<br/><br/>Azure supports large messages by combining queues and blobs – at which point you can enqueue up to 200 GB for a single item. |256 KB or 100 MB<br/><br/>(including both header and body, maximum header size: 64 KB).<br/><br/>Depends on the [service tier](service-bus-premium-messaging.md). |
-| Maximum message TTL |Infinite (api-version 2017-07-27 or later) |TimeSpan.Max |
+| Maximum message TTL |Infinite (api-version 2017-07-27 or later) |TimeSpan.MaxValue |
| Maximum number of queues |Unlimited |10,000<br/><br/>(per service namespace) | | Maximum number of concurrent clients |Unlimited |5,000 |
service-bus-messaging Service Bus Messages Payloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-messages-payloads.md
Title: Azure Service Bus messages, payloads, and serialization | Microsoft Docs description: This article provides an overview of Azure Service Bus messages, payloads, message routing, and serialization. Previously updated : 04/14/2021 Last updated : 05/31/2022 # Messages, payloads, and serialization
A Service Bus message consists of a binary payload section that Service Bus neve
The predefined broker properties are listed in the following table. The names are used with all official client APIs and also in the [BrokerProperties](/rest/api/servicebus/introduction) JSON object of the HTTP protocol mapping. The equivalent names used at the AMQP protocol level are listed in parentheses.
-While the below names use pascal casing, please note that JavaScript and Python clients would use camel and snake casing respectively.
+While the names below use Pascal casing, note that JavaScript and Python clients use camel casing and snake casing, respectively.
| Property Name | Description | ||-|
While the below names use pascal casing, please note that JavaScript and Python
| `ReplyToSessionId` (reply-to-group-id) | This value augments the **ReplyTo** information and specifies which **SessionId** should be set for the reply when sent to the reply entity. | | `ScheduledEnqueueTimeUtc` | For messages that are only made available for retrieval after a delay, this property defines the UTC instant at which the message will be logically enqueued, sequenced, and therefore made available for retrieval. | | `SequenceNumber` | The sequence number is a unique 64-bit integer assigned to a message as it is accepted and stored by the broker and functions as its true identifier. For partitioned entities, the topmost 16 bits reflect the partition identifier. Sequence numbers monotonically increase and are gapless. They roll over to 0 when the 48-64 bit range is exhausted. This property is read-only. |
-| `SessionId` (group-id) | For session-aware entities, this application-defined value specifies the session affiliation of the message. Messages with the same session identifier are subject to summary locking and enable exact in-order processing and demultiplexing. For entities that are not session-aware, this value is ignored. |
+| `SessionId` (group-id) | For session-aware entities, this application-defined value specifies the session affiliation of the message. Messages with the same session identifier are subject to summary locking and enable exact in-order processing and demultiplexing. For entities that aren't session-aware, this value is ignored. |
-| `TimeToLive` | This value is the relative duration after which the message expires, starting from the instant the message has been accepted and stored by the broker, as captured in **EnqueueTimeUtc**. When not set explicitly, the assumed value is the **DefaultTimeToLive** for the respective queue or topic. A message-level **TimeToLive** value can't be longer than the entity's **DefaultTimeToLive** setting. If it is longer, it is silently adjusted. |
+| `TimeToLive` | This value is the relative duration after which the message expires, starting from the instant it has been accepted and stored by the broker, as captured in **EnqueueTimeUtc**. When not set explicitly, the assumed value is the **DefaultTimeToLive** for the respective queue or topic. A message-level **TimeToLive** value can't be longer than the entity's **DefaultTimeToLive** setting. If it's longer, it's silently adjusted. |
| `To` (to) | This property is reserved for future use in routing scenarios and currently ignored by the broker itself. Applications can use this value in rule-driven autoforward chaining scenarios to indicate the intended logical destination of the message. | | `ViaPartitionKey` | If a message is sent via a transfer queue in the scope of a transaction, this value selects the transfer queue partition. |
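As a brief sketch of how a few of these broker properties are set in practice (using the `azure-servicebus` Python package; the values shown are hypothetical):

```python
from datetime import timedelta
from azure.servicebus import ServiceBusMessage

message = ServiceBusMessage(
    "session payload",
    session_id="session-1",           # maps to group-id at the AMQP level
    reply_to_session_id="session-1",  # maps to reply-to-group-id
    # Can't exceed the entity's DefaultTimeToLive; longer values are adjusted.
    time_to_live=timedelta(minutes=5),
)
```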
When in transit or stored inside of Service Bus, the payload is always an opaque
Unlike the Java or .NET Standard variants, the .NET Framework version of the Service Bus API supports creating **BrokeredMessage** instances by passing arbitrary .NET objects into the constructor.
-When using the legacy SBMP protocol, those objects are then serialized with the default binary serializer, or with a serializer that is externally supplied. When using the AMQP protocol, the object is serialized into an AMQP object. The receiver can retrieve those objects with the [GetBody\<T>()](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.getbody#Microsoft_ServiceBus_Messaging_BrokeredMessage_GetBody__1) method, supplying the expected type. With AMQP, the objects are serialized into an AMQP graph of `ArrayList` and `IDictionary<string,object>` objects, and any AMQP client can decode them.
+When you use the legacy SBMP protocol, those objects are then serialized with the default binary serializer, or with a serializer that is externally supplied. When you use the AMQP protocol, the object is serialized into an AMQP object. The receiver can retrieve those objects with the [GetBody\<T>()](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.getbody#Microsoft_ServiceBus_Messaging_BrokeredMessage_GetBody__1) method, supplying the expected type. With AMQP, the objects are serialized into an AMQP graph of `ArrayList` and `IDictionary<string,object>` objects, and any AMQP client can decode them.
-While this hidden serialization magic is convenient, applications should take explicit control of object serialization and turn their object graphs into streams before including them into a message, and do the reverse on the receiver side. This yields interoperable results. While AMQP has a powerful binary encoding model, it is tied to the AMQP messaging ecosystem and HTTP clients will have trouble decoding such payloads.
+While this hidden serialization magic is convenient, applications should take explicit control of object serialization and turn their object graphs into streams before including them into a message, and do the reverse on the receiver side. This yields interoperable results. While AMQP has a powerful binary encoding model, it's tied to the AMQP messaging ecosystem and HTTP clients will have trouble decoding such payloads.
The .NET Standard and Java API variants only accept byte arrays, which means that the application must handle object serialization control.
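A minimal Python sketch of that recommendation (assuming the `azure-servicebus` package and a hypothetical payload): serialize the object graph yourself and record the encoding in the content type, so any receiver, AMQP or HTTP, can decode the body:

```python
import json
from azure.servicebus import ServiceBusMessage

# Hypothetical application object.
order = {"id": 42, "sku": "widget", "quantity": 3}

# Explicitly serialize to JSON instead of relying on hidden object serialization.
message = ServiceBusMessage(
    json.dumps(order),
    content_type="application/json",  # tells any receiver how to decode the body
)

# On the receive side, reverse the step explicitly:
# order = json.loads(str(received_message))
```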
service-connector Concept Service Connector Internals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/concept-service-connector-internals.md
az spring-cloud connection list-configuration -g {spring_cloud_rg} -n {spring_cl
## Configuration naming convention
-Service Connector sets the configuration (environment variables or Spring Boot configurations) when creating a connection. The environment variable key-value pair(s) are determined by your client type and authentication type. For example, using the Azure SDK with managed identity requires a client ID, client secret, etc. Using JDBC driver a requires database connection string. Follow this convention to name the configuration:
+Service Connector sets the configuration (environment variables or Spring Boot configurations) when creating a connection. The environment variable key-value pairs are determined by your client type and authentication type. For example, using the Azure SDK with managed identity requires a client ID, client secret, and so on. Using the JDBC driver requires a database connection string. Follow this convention to name the configuration:
If you're using **Spring Boot** as the client type:
service-connector Quickstart Cli App Service Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-app-service-connection.md
Use the Azure CLI [az webapp connection](/cli/azure/webapp/connection) command t
- **App Service name:** the name of your App Service that connects to the target service. ```azurecli-interactive
-az webapp connection list -g "<your-app-service-resource-group>" --webapp "<your-app-service-name>"
+az webapp connection list -g "<your-app-service-resource-group>" -n "<your-app-service-name>"
``` ## Next steps
service-fabric Service Fabric Controlled Chaos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-controlled-chaos.md
Title: Induce Chaos in Service Fabric clusters description: Using Fault Injection and Cluster Analysis Service APIs to manage Chaos in the cluster. Previously updated : 03/26/2021 Last updated : 05/31/2022 # Induce controlled Chaos in Service Fabric clusters
Connect-ServiceFabricCluster $clusterConnectionString
$events = @{} $now = [System.DateTime]::UtcNow
-Start-ServiceFabricChaos -TimeToRunMinute $timeToRunMinute -MaxConcurrentFaults $maxConcurrentFaults -MaxClusterStabilizationTimeoutSec $maxClusterStabilizationTimeSecs -EnableMoveReplicaFaults -WaitTimeBetweenIterationsSec $waitTimeBetweenIterationsSec -WaitTimeBetweenFaultsSec $waitTimeBetweenFaultsSec -ClusterHealthPolicy $clusterHealthPolicy -ChaosTargetFilter $chaosTargetFilter
+Start-ServiceFabricChaos -TimeToRunMinute $timeToRunMinute -MaxConcurrentFaults $maxConcurrentFaults -MaxClusterStabilizationTimeoutSec $maxClusterStabilizationTimeSecs -EnableMoveReplicaFaults -WaitTimeBetweenIterationsSec $waitTimeBetweenIterationsSec -WaitTimeBetweenFaultsSec $waitTimeBetweenFaultsSec -ClusterHealthPolicy $clusterHealthPolicy -ChaosTargetFilter $chaosTargetFilter -Context $context
while($true) {
site-recovery Azure To Azure Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-common-questions.md
Yes. Site Recovery supports disaster recovery of VMs that have Azure Disk Encryp
[Learn more](azure-to-azure-how-to-enable-replication-ade-vms.md) about enabling replication for encrypted VMs.
+See the [support matrix](azure-to-azure-support-matrix.md#replicated-machinesstorage) for information about support for other encryption features.
+ ### Can I select an automation account from a different resource group? When you allow Site Recovery to manage updates for the Mobility service extension running on replicated Azure VMs, it deploys a global runbook (used by Azure services) via an Azure Automation account. You can use the automation account that Site Recovery creates, or choose an existing automation account.
spring-cloud Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/faq.md
Each service instance in Azure Spring Apps is backed by a fully dedicated Kubern
Azure Spring Apps intelligently schedules your applications on the underlying Kubernetes worker nodes. To provide high availability, Azure Spring Apps distributes applications with 2 or more instances on different nodes.
-### In which regions is Azure Spring Apps available?
+### In which regions is Azure Spring Apps Basic/Standard tier available?
East US, East US 2, Central US, South Central US, North Central US, West US, West US 2, West US 3, West Europe, North Europe, UK South, Southeast Asia, Australia East, Canada Central, UAE North, Central India, Korea Central, East Asia, Japan East, South Africa North, Brazil South, France Central, China East 2 (Mooncake), and China North 2 (Mooncake). [Learn More](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud)
+### In which regions is Azure Spring Apps Enterprise tier available?
+
+East US, East US 2, South Central US, West US 2, West US 3, West Europe, North Europe, UK South, Southeast Asia, Australia East.
+ ### Is any customer data stored outside of the specified region? Azure Spring Apps is a regional service. All customer data in Azure Spring Apps is stored to a single, specified region. To learn more about geo and region, see [Data residency in Azure](https://azure.microsoft.com/global-infrastructure/data-residency/).
If you encounter any issues with Azure Spring Apps, create an [Azure Support Req
Enterprise tier has built-in VMware Spring Runtime Support, so you can open support tickets to [VMware](https://aka.ms/ascevsrsupport) if you think your issue is in the scope of VMware Spring Runtime Support. To better understand VMware Spring Runtime Support itself, see <https://tanzu.vmware.com/spring-runtime>. To understand the details about how to register and use this support service, see the Support section in the [Enterprise tier FAQ from VMware](https://aka.ms/EnterpriseTierFAQ). For any other issues, open support tickets with Microsoft.
+> [!IMPORTANT]
+> After you create an Enterprise tier instance, your entitlement will be ready within three business days. If you encounter any exceptions, raise a support ticket with Microsoft to get help with it.
+ ## Development ### I am a Spring developer but new to Azure. What is the quickest way for me to learn how to develop an application in Azure Spring Apps?
spring-cloud How To Migrate Standard Tier To Enterprise Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-migrate-standard-tier-to-enterprise-tier.md
This article shows you how to migrate an existing application in Basic or Standard tier to Enterprise tier. When you migrate from Basic or Standard tier to Enterprise tier, VMware Tanzu components will replace the OSS Spring Cloud components to provide more feature support.
+This article will use the Pet Clinic sample apps as examples of how to migrate.
+ ## Prerequisites -- An already provisioned Azure Spring Apps Enterprise tier service instance with Spring Cloud Gateway for Tanzu enabled. For more information, see [Quickstart: Provision an Azure Spring Apps service instance using Enterprise tier](./quickstart-provision-service-instance-enterprise.md). However, you won't need to change any code in your applications.-- [Azure CLI version 2.0.67 or later](/cli/azure/install-azure-cli).
+- [!INCLUDE [install-enterprise-extension](includes/install-enterprise-extension.md)]
+
+## Provision a service instance
+
+In Enterprise Tier, VMware Tanzu components will replace the OSS Spring Cloud components to provide more feature support. Tanzu components are enabled on demand according to your needs. You can select the components you need before creating the service instance.
+
+> [!NOTE]
+> To use Tanzu Components, you must enable them when you provision your Azure Spring Apps service instance. You can't enable them after provisioning at this time.
+
+Use the following steps to provision an Azure Spring Apps service instance:
+
+### [Portal](#tab/azure-portal)
+
+1. Open the [Azure portal](https://portal.azure.com/).
+
+1. In the top search box, search for *Azure Spring Apps*.
+
+1. Select **Azure Spring Apps** from the results, then select **Create**.
+
+1. Select **Change** next to the **Pricing** option, then select **Enterprise**.
+
+ :::image type="content" source="media/enterprise/getting-started-enterprise/choose-enterprise-tier.png" alt-text="Screenshot of Azure portal Azure Spring Apps creation page with Basics section and 'Choose your pricing tier' pane showing." lightbox="media/enterprise/getting-started-enterprise/choose-enterprise-tier.png":::
+
+ Select the **Terms** checkbox to agree to the legal terms and privacy statements of the Enterprise tier offering in the Azure Marketplace.
+
+1. To configure VMware Tanzu components, select **Next: VMware Tanzu settings**.
+
+ > [!NOTE]
+ > All Tanzu components are enabled by default. Carefully consider which Tanzu components you want to use or enable during the provisioning phase. After provisioning the Azure Spring Apps instance, you can't enable or disable Tanzu components.
+
+ :::image type="content" source="media/enterprise/getting-started-enterprise/create-instance-tanzu-settings-public-preview.png" alt-text="Screenshot of Azure portal Azure Spring Apps creation page with V M ware Tanzu Settings section showing." lightbox="media/enterprise/getting-started-enterprise/create-instance-tanzu-settings-public-preview.png":::
+
+1. Select the **Application Insights** section, then select **Enable Application Insights**. You can also enable Application Insights after you provision the Azure Spring Apps instance.
+
+ - Choose an existing Application Insights instance or create a new Application Insights instance.
+ - Enter a **Sampling Rate** in the range of 0-100, or use the default value 10.
+
+ > [!NOTE]
+ > You'll pay for the usage of Application Insights when integrated with Azure Spring Apps. For more information about Application Insights pricing, see [Application Insights billing](../azure-monitor/logs/cost-logs.md#application-insights-billing).
+
+1. Select **Review and create** and wait for validation to complete, then select **Create** to start provisioning the service instance.
+
+It takes about 5 minutes to finish the resource provisioning.
+
+### [Azure CLI](#tab/azure-cli)
+
+1. Update Azure CLI with the Azure Spring Apps extension by using the following command:
+
+ ```azurecli
+ az extension update --name spring-cloud
+ ```
+
+1. Sign in to the Azure CLI and choose your active subscription by using the following command:
+
+ ```azurecli
+ az login
+ az account list --output table
+ az account set --subscription <subscription-ID>
+ ```
+
+1. Use the following command to accept the legal terms and privacy statements for the Enterprise tier. This step is only necessary if your subscription has never been used to create an Enterprise tier instance of Azure Spring Apps before.
+
+ ```azurecli
+ az provider register --namespace Microsoft.SaaS
+ az term accept --publisher vmware-inc --product azure-spring-cloud-vmware-tanzu-2 --plan tanzu-asc-ent-mtr
+ ```
+
+1. Enter a name for your Azure Spring Apps service instance. The name must be between 4 and 32 characters long and can only contain lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
+
+1. Create a resource group and an Azure Spring Apps service instance using the following commands:
+
+ ```azurecli
+ az group create --name <resource-group-name> --location <location>
+ az spring-cloud create \
+ --resource-group <resource-group-name> \
+ --name <service-instance-name> \
+ --sku enterprise
+ ```
+
+ For more information about resource groups, see [What is Azure Resource Manager?](../azure-resource-manager/management/overview.md).
-## Using Application Configuration Service for configuration
+1. Set your default resource group name and Spring Cloud service name using the following command:
-In Enterprise tier, Application Configuration Service provides external configuration support for your apps. Managed Spring Cloud Config Server is only available in Basic and Standard tiers and is not available in Enterprise tier.
+ ```azurecli
+ az config set defaults.group=<resource-group-name> defaults.spring-cloud=<service-instance-name>
+ ```
+++
+## Create and configure apps
+
+The app creation steps are the same as in Standard tier.
+
+1. To set the CLI defaults, use the following commands. Be sure to replace the placeholders with your own values.
+
+ ```azurecli
+ az account set --subscription=<your-subscription-id>
+ az configure --defaults group=<your-resource-group-name> spring-cloud=<your-service-name>
+ ```
+
+1. To create the two core applications for PetClinic, `api-gateway` and `customers-service`, use the following commands:
+
+ ```azurecli
+ az spring-cloud app create --name api-gateway --instance-count 1 --memory 2Gi --assign-endpoint
+ az spring-cloud app create --name customers-service --instance-count 1 --memory 2Gi
+ ```
+
+## Use Application Configuration Service for external configuration
+
+In Enterprise tier, Application Configuration Service provides external configuration support for your apps. Managed Spring Cloud Config Server is only available in Basic and Standard tiers and isn't available in Enterprise tier.
+
+| Component | Standard Tier | Enterprise Tier |
+||--||
+| Config Server | OSS config server <br> Auto-bound (always injected) <br> Always provisioned | Application Configuration Service for Tanzu <br> Needs manual binding to app <br> Enabled on demand |
## Configure Application Configuration Service for Tanzu settings Follow these steps to use Application Configuration Service for Tanzu as a centralized configuration service.
-# [Azure portal](#tab/azure-portal)
+### [Portal](#tab/azure-portal)
1. Select **Application Configuration Service**. 1. Select **Overview** to view the running state and resources allocated to Application Configuration Service for Tanzu.
- :::image type="content" source="./media/enterprise/getting-started-enterprise/config-service-overview.png" alt-text="Application Configuration Service Overview screen" lightbox="./media/enterprise/getting-started-enterprise/config-service-overview.png":::
+ :::image type="content" source="./media/enterprise/getting-started-enterprise/config-service-overview.png" alt-text="Screenshot of Azure portal Azure Spring Apps with Application Configuration Service page and Overview section showing." lightbox="./media/enterprise/getting-started-enterprise/config-service-overview.png":::
+
+1. Select **Settings**, then add a new entry in the **Repositories** section with the following information:
-1. Select **Settings**, then add a new entry in the **Repositories** section with the Git backend information.
+ - Name: `default`
+ - Patterns: `api-gateway,customers-service`
+ - URI: `https://github.com/Azure-Samples/spring-petclinic-microservices-config`
+ - Label: `master`
-1. Select **Validate** to validate access to the target URI. After validation completes successfully, select **Apply** to update the configuration settings.
+1. Select **Validate** to validate access to the target URI.
- :::image type="content" source="./media/enterprise/getting-started-enterprise/config-service-settings.png" alt-text="Application Configuration Service Settings overview" lightbox="./media/enterprise/getting-started-enterprise/config-service-settings.png":::
+1. After validation completes successfully, select **Apply** to update the configuration settings.
-# [Azure CLI](#tab/azure-cli)
+ :::image type="content" source="./media/enterprise/getting-started-enterprise/config-service-settings.png" alt-text="Screenshot of Azure portal Azure Spring Apps with Application Configuration Service page and Settings section showing." lightbox="./media/enterprise/getting-started-enterprise/config-service-settings.png":::
+
+### [Azure CLI](#tab/azure-cli)
+
+To set the default repository, use the following command:
```azurecli az spring-cloud application-configuration-service git repo add \
- --name <entry-name> \
- --patterns <patterns> \
- --uri <git-backend-uri> \
- --label <git-branch-name>
+ --name default \
+ --patterns api-gateway,customers-service \
+ --uri https://github.com/Azure-Samples/spring-petclinic-microservices-config.git \
+ --label master
```
-### Bind application to Application Configuration Service for Tanzu and configure patterns
+## Bind application to Application Configuration Service for Tanzu
When you use Application Configuration Service for Tanzu with a Git backend, you must bind the app to Application Configuration Service for Tanzu. After binding the app, you'll need to configure which pattern will be used by the app. Follow these steps to bind and configure the pattern for the app.
-# [Azure portal](#tab/azure-portal)
-
-1. Open the **App binding** tab.
+### [Portal](#tab/azure-portal)
-1. Select **Bind app** and choose one app in the dropdown, then select **Apply** to bind.
+To bind apps to Application Configuration Service for VMware Tanzu®, follow these steps.
- :::image type="content" source="./media/enterprise/how-to-enterprise-application-configuration-service/config-service-app-bind-dropdown.png" alt-text="How to bind Application Configuration Service screenshot":::
-
- > [!NOTE]
- > When you change the bind/unbind status, you must restart or redeploy the app for the binding to take effect.
-
-1. Select **Apps**, then select the [pattern(s)](./how-to-enterprise-application-configuration-service.md#pattern) to be used by the apps.
-
- 1. In the left navigation menu, select **Apps** to view the list of apps.
-
- 1. Select the target app to configure patterns for from the `name` column.
+1. Select **Application Configuration Service**.
- 1. In the left navigation pane, select **Configuration**, then select **General settings**.
+1. Select **App binding**, then select **Bind app**.
- 1. In the **Config file patterns** dropdown, choose one or more patterns from the list.
+1. Choose one app in the dropdown, then select **Apply** to bind the application to Application Configuration Service for Tanzu.
- :::image type="content" source="./media/enterprise/how-to-enterprise-application-configuration-service/config-service-pattern.png" alt-text="Bind Application Configuration Service in deployment screenshot":::
+The list under **App name** will show the apps bound with Application Configuration Service for Tanzu.
- 1. Select **Save**.
+### [Azure CLI](#tab/azure-cli)
-# [Azure CLI](#tab/azure-cli)
+To bind apps to Application Configuration Service for VMware Tanzu®, use the following commands:
```azurecli
-az spring-cloud application-configuration-service bind --app <app-name>
-az spring-cloud app deploy \
- --name <app-name> \
- --artifact-path <path-to-your-JAR-file> \
- --config-file-pattern <config-file-pattern>
+az spring-cloud application-configuration-service bind --app api-gateway
+az spring-cloud application-configuration-service bind --app customers-service
``` For more information, see [Use Application Configuration Service for Tanzu](./how-to-enterprise-application-configuration-service.md).
+## Use Service Registry for Tanzu
+
+[Service Registry](https://docs.pivotal.io/spring-cloud-services/2-1/common/service-registry/index.html) is one of the proprietary VMware Tanzu components. It provides your apps with an implementation of the Service Discovery pattern, one of the key concepts of a microservice-based architecture. In Enterprise tier, Service Registry for Tanzu provides service registry and discovery support for your apps. Managed Spring Cloud Eureka is only available in Basic and Standard tiers and isn't available in Enterprise tier.
+
+| Component | Standard Tier | Enterprise Tier |
+||-|--|
+| Service Registry | OSS Eureka <br> Auto-bound (always injected) <br> Always provisioned | Service Registry for Tanzu <br> Needs manual binding to app <br> Enabled on demand |
+ ## Bind an application to Tanzu Service Registry
-[Service Registry](https://docs.pivotal.io/spring-cloud-services/2-1/common/service-registry/index.html) is one of the proprietary VMware Tanzu components. It provides your apps with an implementation of the Service Discovery pattern, one of the key concepts of a microservice-based architecture.
+### [Portal](#tab/azure-portal)
+
+To bind apps to VMware Tanzu® Service Registry, follow these steps.
+
+1. In the Azure portal, select **Service Registry**.
+
+1. Select **App binding**, then select **Bind app**.
+
+1. Choose one app in the dropdown, and then select **Apply** to bind the application to Tanzu Service Registry.
+
+ :::image type="content" source="media/enterprise/getting-started-enterprise/service-reg-app-bind-dropdown.png" alt-text="Screenshot of Azure portal Azure Spring Apps with Service Registry page and 'Bind app' dialog showing." lightbox="media/enterprise/getting-started-enterprise/service-reg-app-bind-dropdown.png":::
-Use the following steps to bind an application to Tanzu Service Registry.
+The list under **App name** shows the apps bound with Tanzu Service Registry.
-1. Open the **App binding** tab.
+### [Azure CLI](#tab/azure-cli)
-1. Select **Bind app** and choose one app in the dropdown, then select **Apply** to bind.
+To bind apps to VMware Tanzu® Service Registry, use the following commands:
- :::image type="content" source="./media/enterprise/how-to-enterprise-service-registry/service-reg-app-bind-dropdown.png" alt-text="Bind Service Registry dropdown screenshot":::
+```azurecli
+az spring-cloud service-registry bind --app api-gateway
+az spring-cloud service-registry bind --app customers-service
+```
++ > [!NOTE] > When you change the bind/unbind status, you must restart or redeploy the app to make the change take effect. For more information, see [Use Tanzu Service Registry](./how-to-enterprise-service-registry.md).
-## Create and configure an application using Spring Cloud Gateway for Tanzu
+## Build and deploy applications
-[Spring Cloud Gateway for Tanzu](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/https://docsupdatetracker.net/index.html) is one of the VMware Tanzu components. It's based on the open-source Spring Cloud Gateway project. Spring Cloud Gateway for Tanzu handles cross-cutting concerns for API development teams, such as Single Sign-On (SSO), access control, rate-limiting, resiliency, security, and more.
+In Enterprise tier, Tanzu Build Service is used to build apps. It provides more features, such as deploying polyglot apps from artifacts like source code and zip files.
-Use the following steps to create and configure an application using Spring Cloud Gateway for Tanzu.
+To use Tanzu Build Service, you need to specify a resource for the build task and a builder to use. You can also specify the `--build-env` parameter to set build environments.
-### Create an app for Spring Cloud Gateway to route traffic to
+If the app is bound to Application Configuration Service, you need to specify an extra argument, `--config-file-pattern`.
-1. Create an app which Spring Cloud Gateway for Tanzu will route traffic to by following the instructions in [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise tier](quickstart-deploy-apps-enterprise.md).
+The following sections show how to build and deploy applications.
-1. Assign a public endpoint to the gateway to access it.
+## Build the applications locally
- # [Azure portal](#tab/azure-portal)
+To build locally, use the following steps:
- 1. Select the **Spring Cloud Gateway** section, then select **Overview** to view the running state and resources given to Spring Cloud Gateway and its operator.
+1. Clone the sample app repository in your Azure account, change the directory, and build the project using the following commands:
- 1. Select **Yes** next to *Assign endpoint* to assign a public endpoint. You'll get a URL in a few minutes. Save the URL to use later.
+ ```bash
+ git clone -b enterprise https://github.com/azure-samples/spring-petclinic-microservices
+ cd spring-petclinic-microservices
+ mvn clean package -DskipTests
+ ```
- :::image type="content" source="./media/enterprise/getting-started-enterprise/gateway-overview.png" alt-text="Gateway overview screenshot showing assigning endpoint" lightbox="./media/enterprise/getting-started-enterprise/gateway-overview.png":::
+ Compiling the project can take several minutes. Once complete, you'll have individual JAR files for each service in its respective folder.
- # [Azure CLI](#tab/azure-cli)
+1. Deploy the JAR files built in the previous step using the following commands:
```azurecli
- az spring-cloud gateway update --assign-endpoint
+ az spring-cloud app deploy \
+ --name api-gateway \
+ --artifact-path spring-petclinic-api-gateway/target/spring-petclinic-api-gateway-2.3.6.jar \
+ --config-file-patterns api-gateway
+ az spring-cloud app deploy \
+ --name customers-service \
+ --artifact-path spring-petclinic-customers-service/target/spring-petclinic-customers-service-2.3.6.jar \
+ --config-file-patterns customers-service
```
-
-
-### Configure Spring Cloud Gateway
-1. Configure Spring Cloud Gateway for Tanzu properties using the CLI:
+1. Query the application status after deployment by using the following command:
```azurecli
- az spring-cloud gateway update \
- --api-description "<api-description>" \
- --api-title "<api-title>" \
- --api-version "v0.1" \
- --server-url "<endpoint-in-the-previous-step>" \
- --allowed-origins "*"
+ az spring-cloud app list --output table
```
- You can view the properties in the portal.
+ This command produces output similar to the following example:
- :::image type="content" source="./media/enterprise/how-to-use-enterprise-spring-cloud-gateway/gateway-configuration.png" alt-text="Gateway Configuration settings screenshot" lightbox="./media/enterprise/how-to-use-enterprise-spring-cloud-gateway/gateway-configuration.png":::
+ ```output
+ Name Location ResourceGroup Public Url Production Deployment Provisioning State CPU Memory Running Instance Registered Instance Persistent Storage Bind Service Registry Bind Application Configuration Service
+ -- - -- -- -- -- -- -- -
+ api-gateway eastus <resource group> https://<service_name>-api-gateway.asc-test.net default Succeeded 1 2Gi 1/1 1/1 - True True
+ customers-service eastus <resource group> default Succeeded 1 2Gi 1/1 1/1 - True True
+ ```
-1. Configure routing rules to apps.
+## Use Application Insights
- Create rules to access apps deployed in the above steps through Spring Cloud Gateway for Tanzu.
+Azure Spring Apps Enterprise tier uses the build service feature [Buildpack Bindings](./how-to-enterprise-build-service.md#buildpack-bindings) to integrate [Application Insights](../azure-monitor/app/app-insights-overview.md) with the binding type `ApplicationInsights` instead of the In-Process Agent.
- Save the following content to your application's JSON file, changing the placeholders to your application's information.
+| Standard Tier | Enterprise Tier |
+|--||
+| Application Insights <br> New Relic <br> Dynatrace <br> AppDynamics | Application Insights <br> New Relic <br> Dynatrace <br> AppDynamics <br> Elastic APM |
- ```json
- [
- {
- "title": "<your-title>",
- "description": "Route to <your-app-name>",
- "predicates": [
- "Path=/api/<your-app-name>/owners"
- ],
- "filters": [
- "StripPrefix=2"
- ],
- "tags": [
- "<your-tags>"
- ]
- }
- ]
- ```
+To check or update the current settings in Application Insights, use the following steps:
-1. Apply the rule to your application using the following command:
+### [Portal](#tab/azure-portal)
- ```azurecli
- az spring-cloud gateway route-config create \
- --name <your-app-name-rule> \
- --app-name <your-app-name> \
- --routes-file <your-app-name>.json
- ```
+1. Select **Application Insights**.
+1. Enable Application Insights by selecting **Edit binding**, or the **Unbound** hyperlink.
- You can view the routes in the portal.
+ :::image type="content" source="media/enterprise/how-to-application-insights/application-insights-binding-enable.png" alt-text="Screenshot of Azure portal Azure Spring Apps instance with Application Insights page showing and drop-down menu visible with 'Edit binding' option.":::
- :::image type="content" source="media/enterprise/how-to-use-enterprise-spring-cloud-gateway/gateway-route.png" alt-text="Example screenshot of gateway routing configuration" lightbox="media/enterprise/how-to-use-enterprise-spring-cloud-gateway/gateway-route.png":::
+1. Edit the binding settings, then select **Save**.
-## Access application APIs through the gateway endpoint
+ :::image type="content" source="media/enterprise/how-to-application-insights/application-insights-edit-binding.png" alt-text="Screenshot of Azure portal 'Edit binding' pane." lightbox="media/enterprise/how-to-application-insights/application-insights-edit-binding.png":::
-1. Access the application APIs through the gateway endpoint using the following command:
+### [Azure CLI](#tab/azure-cli)
- ```bash
- curl https://<endpoint-url>/api/<your-app-name>
- ```
+To create an Application Insights buildpack binding, use the following command:
+
+```azurecli
+az spring-cloud build-service builder buildpack-binding create \
+ --resource-group <your-resource-group-name> \
+ --service <your-service-instance-name> \
+ --name <your-binding-name> \
+ --builder-name <your-builder-name> \
+ --type ApplicationInsights \
+ --properties sampling-percentage=<your-sampling-percentage> \
+ connection-string=<your-connection-string>
+```
-1. Query the routing rules using the following commands:
To list all buildpack bindings and find Application Insights bindings of type `ApplicationInsights`, use the following command:
- ```azurecli
- az configure --defaults group=<resource group name> spring-cloud=<service name>
- az spring-cloud gateway route-config show \
- --name <your-app-rule> \
- --query '{appResourceId:properties.appResourceId, routes:properties.routes}'
- az spring-cloud gateway route-config list \
- --query '[].{name:name, appResourceId:properties.appResourceId, routes:properties.routes}'
- ```
+```azurecli
+az spring-cloud build-service builder buildpack-binding list \
+ --resource-group <your-resource-group-name> \
+ --service <your-service-resource-name> \
+ --builder-name <your-builder-name>
+```
+
+To replace an Application Insights buildpack binding, use the following command:
-For more information, see [Use Spring Cloud Gateway for Tanzu](./how-to-use-enterprise-spring-cloud-gateway.md).
+```azurecli
+az spring-cloud build-service builder buildpack-binding set \
+ --resource-group <your-resource-group-name> \
+ --service <your-service-instance-name> \
+ --name <your-binding-name> \
+ --builder-name <your-builder-name> \
+ --type ApplicationInsights \
+ --properties sampling-percentage=<your-sampling-percentage> \
+ connection-string=<your-connection-string>
+```
+
+To get an Application Insights buildpack binding, use the following command:
+
+```azurecli
+az spring-cloud build-service builder buildpack-binding show \
+ --resource-group <your-resource-group-name> \
+ --service <your-service-instance-name> \
+ --name <your-binding-name> \
+ --builder-name <your-builder-name>
+```
+
+To delete an Application Insights buildpack binding, use the following command:
+
+```azurecli
+az spring-cloud build-service builder buildpack-binding delete \
+ --resource-group <your-resource-group-name> \
+ --service <your-service-instance-name> \
+ --name <your-binding-name> \
+ --builder-name <your-builder-name>
+```
+
+For more information, see [Use Application Insights Java In-Process Agent in Azure Spring Apps](./how-to-application-insights.md).
++ ## Next steps - [Azure Spring Apps](index.yml)
+- [Use API portal for VMware Tanzu](./how-to-use-enterprise-api-portal.md)
+- [Use Spring Cloud Gateway for Tanzu](./how-to-use-enterprise-spring-cloud-gateway.md)
spring-cloud How To Set Up Sso With Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-set-up-sso-with-azure-ad.md
+
+ Title: How to set up Single Sign-on with Azure AD for Spring Cloud Gateway and API Portal for Tanzu
+
+description: How to set up Single Sign-on with Azure Active Directory for Spring Cloud Gateway and API Portal for Tanzu with Azure Spring Apps Enterprise Tier.
++++ Last updated : 05/20/2022+++
+# Set up Single Sign-on using Azure Active Directory for Spring Cloud Gateway and API Portal
+
+**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
+
+This article shows you how to configure Single Sign-on (SSO) for Spring Cloud Gateway or API Portal using Azure Active Directory (Azure AD) as an OpenID identity provider.
+
+## Prerequisites
+
+- An Enterprise tier instance with Spring Cloud Gateway or API portal enabled. For more information, see [Quickstart: Provision an Azure Spring Apps service instance using the Enterprise tier](quickstart-provision-service-instance-enterprise.md).
+- Sufficient permissions to manage Azure AD applications.
++
+To enable SSO for Spring Cloud Gateway or API Portal, you need the following four properties configured:
+
+| SSO Property | Azure AD Configuration |
+| - | - |
+| clientId | See [Register App](#create-an-azure-ad-application-registration) |
+| clientSecret | See [Create Client Secret](#add-a-client-secret) |
+| scope | See [Configure Scope](#configure-scope) |
+| issuerUri | See [Generate Issuer URI](#configure-issuer-uri) |
+
+You'll configure the properties in Azure AD in the following steps.
+
+## Assign an endpoint for Spring Cloud Gateway or API Portal
+
+First, you must get the assigned public endpoint for Spring Cloud Gateway and API portal by following these steps:
+
+1. Open your Enterprise tier service instance in [Azure portal](https://portal.azure.com).
+1. Select **Spring Cloud Gateway** or **API portal** under *VMware Tanzu components* in the left menu.
+1. Select **Yes** next to *Assign endpoint*.
+1. Copy the URL for use in the next section of this article.
+
+## Create an Azure AD application registration
+
+Register your application to establish a trust relationship between your app and the Microsoft identity platform using the following steps:
+
+1. From the *Home* screen, select **Azure Active Directory** from the left menu.
+1. Select **App Registrations** under *Manage*, then select **New registration**.
+1. Enter a display name for your application under *Name*, then select an account type to register under *Supported account types*.
+1. In *Redirect URI (optional)* select **Web**, then enter the URL from the above section in the text box. The redirect URI is the location where Azure AD redirects your client and sends security tokens after authentication.
+1. Select **Register** to finish registering the application.
++
+When registration finishes, you'll see the *Application (client) ID* on the **Overview** screen of the **App registrations** page.
+
+## Add a redirect URI after app registration
+
+You can also add redirect URIs after app registration by following these steps:
+
+1. From your application overview, under *Manage* in the left menu, select **Authentication**.
+1. Select **Web**, then select **Add URI** under *Redirect URIs*.
+1. Add a new redirect URI, then select **Save**.
++
+For more information on application registration, see [Quickstart: Register an app in the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md#quickstart-register-an-application-with-the-microsoft-identity-platform).
+
+## Add a client secret
+
+The application uses a client secret to authenticate itself in the SSO workflow. You can add a client secret using the following steps:
+
+1. From your application overview, under *Manage* in the left menu, select **Certificates & secrets**.
+1. Select **Client secrets**, then select **New client secret**.
+1. Enter a description for the client secret, then set an expiration date.
+1. Select **Add**.
+
+> [!WARNING]
+> Remember to save the client secret in a secure place. You can't retrieve it after you leave this page. The client secret should be provided with the client ID when you sign in as the application.
+
+## Configure scope
+
+The `scope` property of SSO is a list of scopes to be included in JWT identity tokens. They're often referred to as permissions. The identity platform supports several [OpenID Connect scopes](../active-directory/develop/v2-permissions-and-consent.md#openid-connect-scopes), such as `openid`, `email`, and `profile`.
+
+## Configure issuer URI
+
+The issuer URI is the URI that the identity provider asserts as its Issuer Identifier. For example, if the issuer-uri provided is `https://example.com`, then an OpenID Provider Configuration Request will be made to `https://example.com/.well-known/openid-configuration`.
+
+The issuer URI of Azure AD is like `<authentication-endpoint>/<Your-TenantID>/v2.0`. Replace `<authentication-endpoint>` with the authentication endpoint for your cloud environment (for example, `https://login.microsoftonline.com` for global Azure), and replace `<Your-TenantID>` with the Directory (tenant) ID where the application was registered.
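As a quick sanity check (a sketch assuming global Azure and a hypothetical tenant ID placeholder), you can fetch the OpenID Provider Configuration document and confirm the advertised issuer matches the value you plan to configure:

```python
import json
import urllib.request

tenant_id = "<your-tenant-id>"  # hypothetical placeholder
issuer_uri = f"https://login.microsoftonline.com/{tenant_id}/v2.0"

# The provider's metadata lives at the well-known configuration endpoint.
config_url = f"{issuer_uri}/.well-known/openid-configuration"
with urllib.request.urlopen(config_url) as response:
    config = json.load(response)

# The advertised issuer should match the issuerUri you configure for SSO.
print(config["issuer"])
```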
+
+## Configure SSO
+
+After configuring your Azure AD application, you can set up the SSO properties of Spring Cloud Gateway or API portal by following these steps:
+
+1. Select **Spring Cloud Gateway** or **API portal** under *VMware Tanzu components* in the left menu, then select **Configuration**.
+1. Enter the `Scope`, `Client Id`, `Client Secret`, and `Issuer URI` in the appropriate fields. Separate multiple scopes with a comma.
+1. Select **Save** to enable the SSO configuration.
+
+> [!NOTE]
+> After configuring SSO properties, remember to enable SSO for the Spring Cloud Gateway routes by setting `ssoEnabled=true`. For more information, see [route configuration](./how-to-use-enterprise-spring-cloud-gateway.md#configure-routes).
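+
+If you prefer scripting to the portal, you can set the same SSO properties with the Azure CLI. The following sketch uses the `az spring gateway update` parameters shown in the quickstarts later in this document; use `az spring api-portal update` for API portal:
+
+```azurecli
+az spring gateway update \
+    --resource-group <resource-group-name> \
+    --service <Azure-Spring-Apps-service-instance-name> \
+    --client-id <client-id> \
+    --client-secret <client-secret> \
+    --scope "openid,profile" \
+    --issuer-uri <issuer-uri>
+```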
+
+## Next steps
+- [Configure routes](./how-to-use-enterprise-spring-cloud-gateway.md#configure-routes)
spring-cloud How To Use Enterprise Api Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-use-enterprise-api-portal.md
This article shows you how to use API portal for VMware Tanzu® with Azure Spring Apps Enterprise Tier.
-[API portal](https://docs.vmware.com/en/API-portal-for-VMware-Tanzu/1.0/api-portal/GUID-index.html) is one of the commercial VMware Tanzu components. API portal supports viewing API definitions from [Spring Cloud Gateway for VMware Tanzu®](./how-to-use-enterprise-spring-cloud-gateway.md) and testing of specific API routes from the browser. It also supports enabling Single Sign-On authentication via configuration.
+[API portal](https://docs.vmware.com/en/API-portal-for-VMware-Tanzu/1.0/api-portal/GUID-index.html) is one of the commercial VMware Tanzu components. API portal supports viewing API definitions from [Spring Cloud Gateway for VMware Tanzu®](./how-to-use-enterprise-spring-cloud-gateway.md) and testing of specific API routes from the browser. It also supports enabling single sign-on authentication via configuration.
## Prerequisites
This article shows you how to use API portal for VMware Tanzu® with Azure Sprin
The following sections describe configuration in API portal.
-### Configure single sign-on (SSO)
+### Configure single sign-on (SSO)
-API portal supports authentication and authorization using single sign-on (SSO) with an OpenID identity provider (IdP) that supports the OpenID Connect Discovery protocol.
+API portal supports authentication and authorization using single sign-on (SSO) with an OpenID identity provider (IdP) that supports the OpenID Connect Discovery protocol.
> [!NOTE] > Only authorization servers supporting the OpenID Connect Discovery protocol are supported. Be sure to configure the external authorization server to allow redirects back to the gateway. Refer to your authorization server's documentation and add `https://<gateway-external-url>/login/oauth2/code/sso` to the list of allowed redirect URIs. | Property | Required? | Description | | - | - | - |
-| issuerUri | Yes | The URI that the it asserts as its Issuer Identifier. For example, if the issuer-uri provided is "https://example.com", then an OpenID Provider Configuration Request will be made to "https://example.com/.well-known/openid-configuration". The result is expected to be an OpenID Provider Configuration Response. |
+| issuerUri | Yes | The URI that the app asserts as its Issuer Identifier. For example, if the issuer-uri provided is "https://example.com", then an OpenID Provider Configuration Request will be made to "https://example.com/.well-known/openid-configuration". The result is expected to be an OpenID Provider Configuration Response. |
| clientId | Yes | The OpenID Connect client ID provided by your IdP | | clientSecret | Yes | The OpenID Connect client secret provided by your IdP | | scope | Yes | A list of scopes to include in JWT identity tokens. This list should be based on the scopes allowed by your identity provider |
+To set up SSO with Azure AD, see [How to set up Single Sign-on with Azure AD for Spring Cloud Gateway and API Portal for Tanzu](./how-to-set-up-sso-with-azure-ad.md).
+ > [!NOTE] > If you configure the wrong SSO property, such as the wrong password, you should remove the entire SSO property and re-add the correct configuration.
API portal supports authentication and authorization using single sign-on (SSO)
### Configure the instance count
-Configuration of the instance count for API portal is supported, unless you are using SSO. If you are using the SSO feature, only one instance count is supported.
+Configuration of the instance count for API portal is supported, unless you're using SSO. If you're using the SSO feature, only one instance count is supported.
## Assign a public endpoint for API portal
spring-cloud How To Use Enterprise Spring Cloud Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-use-enterprise-spring-cloud-gateway.md
This article shows you how to use Spring Cloud Gateway for VMware Tanzu® with Azure Spring Apps Enterprise Tier.
-[Spring Cloud Gateway for Tanzu](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/index.html) is one of the commercial VMware Tanzu components. It's based on the open-source Spring Cloud Gateway project. Spring Cloud Gateway for Tanzu handles cross-cutting concerns for API development teams, such as Single Sign-On (SSO), access control, rate-limiting, resiliency, security, and more. You can accelerate API delivery using modern cloud native patterns, and any programming language you choose for API development.
+[Spring Cloud Gateway for Tanzu](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/index.html) is one of the commercial VMware Tanzu components. It's based on the open-source Spring Cloud Gateway project. Spring Cloud Gateway for Tanzu handles cross-cutting concerns for API development teams, such as single sign-on (SSO), access control, rate-limiting, resiliency, security, and more. You can accelerate API delivery using modern cloud native patterns, and any programming language you choose for API development.
Spring Cloud Gateway for Tanzu also has other commercial API route filters for transporting authorized JSON Web Token (JWT) claims to application services, client certificate authorization, rate-limiting approaches, circuit breaker configuration, and support for accessing application services via HTTP Basic Authentication credentials.
Cross-origin resource sharing (CORS) allows restricted resources on a web page t
> [!NOTE] > Be sure you have the correct CORS configuration if you want to integrate with the [API portal](./how-to-use-enterprise-api-portal.md). For an example, see the [Create an example application](#create-an-example-application) section.
-### Configure single sign-on (SSO)
+### Configure single sign-on (SSO)
-Spring Cloud Gateway for Tanzu supports authentication and authorization using Single Sign-On (SSO) with an OpenID identity provider (IdP) which supports OpenID Connect Discovery protocol.
+Spring Cloud Gateway for Tanzu supports authentication and authorization using single sign-on (SSO) with an OpenID identity provider (IdP) that supports the OpenID Connect Discovery protocol.
| Property | Required? | Description | | - | - | - |
Spring Cloud Gateway for Tanzu supports authentication and authorization using S
| clientSecret | Yes | The OpenID Connect client secret provided by your IdP | | scope | Yes | A list of scopes to include in JWT identity tokens. This list should be based on the scopes allowed by your identity provider |
+To set up SSO with Azure AD, see [How to set up Single Sign-on with Azure AD for Spring Cloud Gateway and API Portal for Tanzu](./how-to-set-up-sso-with-azure-ad.md).
+ > [!NOTE] > Only authorization servers supporting OpenID Connect Discovery protocol are supported. Also, be sure to configure the external authorization server to allow redirects back to the gateway. Refer to your authorization server's documentation and add `https://<gateway-external-url>/login/oauth2/code/sso` to the list of allowed redirect URIs. > > If you configure the wrong SSO property, such as the wrong password, you should remove the entire SSO property and re-add the correct configuration.
+>
+> After configuring SSO, remember to set `ssoEnabled=true` for the Spring Cloud Gateway routes.
### Requested resource
This section describes how to add, update, and manage API routes for apps that u
The route definition includes the following parts: -- appResourceId: The full app resource id to route traffic to
+- appResourceId: The full app resource ID to route traffic to
- routes: A list of route rules about how the traffic goes to one app The following tables list the route definitions. All the properties are optional.
The following tables list the route definitions. All the properties are optional
| title | A title, will be applied to methods in the generated OpenAPI documentation | | description | A description, will be applied to methods in the generated OpenAPI documentation | | uri | Full uri, will override `appResourceId` |
-| ssoEnabled | Enable SSO validation. See "Using Single Sign-On" |
+| ssoEnabled | Enable SSO validation. See "Using Single Sign-on" |
| tokenRelay | Pass currently authenticated user's identity token to application service | | predicates | A list of predicates. See [Available Predicates](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/1.0/scg-k8s/GUID-configuring-routes.html#available-predicates) and [Commercial Route Filters](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/1.0/scg-k8s/GUID-route-predicates.html)| | filters | A list of filters. See [Available Filters](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/1.0/scg-k8s/GUID-configuring-routes.html#available-filters) and [Commercial Route Filters](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/1.0/scg-k8s/GUID-route-filters.html)| | order | Route processing order, same as Spring Cloud Gateway for Tanzu | | tags | Classification tags, will be applied to methods in the generated OpenAPI documentation |
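
The following is a hypothetical route file assembled from the fields in the table above; verify the exact schema against the `azure/routes/*.json` files in the sample repository before using it:

```bash
# Hypothetical route file for illustration; the field names come from the table above.
cat > example-routes.json <<'EOF'
[
  {
    "title": "Example API",
    "predicates": [ "Path=/api/**" ],
    "filters": [ "StripPrefix=0" ],
    "ssoEnabled": true,
    "tokenRelay": true
  }
]
EOF
```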
-Not all the filters/predicates are supported in Azure Spring Apps because of security/compatible reasons. The following are not supported:
+Not all the filters/predicates are supported in Azure Spring Apps because of security/compatibility reasons. The following aren't supported:
- BasicAuth - JWTKey
spring-cloud Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/overview.md
The following video introduces Azure Spring Apps Enterprise tier.
<br>
-> [!VIDEO https://www.youtube.com/embed/RoUtUv5CQSc]
+> [!VIDEO https://www.youtube.com/embed/CLvtz8SkrMA]
### Deploy and manage Spring and polyglot applications
Typically, open-source Spring project minor releases are supported for a minimum
Azure Spring Apps, including Enterprise tier, runs on Azure in a fully managed environment. You get all the benefits of Azure and the Java ecosystem, and the experience is familiar and intuitive, as shown in the following table:
-| Best practice | Ecosystem |
-|--|--|
-| Create service instances using a provisioning tool. | Azure Portal, CLI, ARM Template, Bicep, or Terraform |
-| Automate environments and application deployments. | GitHub, Azure DevOps, GitLab, and Jenkins |
-| Monitor end-to-end using any tool and platform. | Application Insights, Azure Log Analytics, Splunk, Elastic, New Relic, Dynatrace, or AppDynamics |
+| Best practice | Ecosystem |
+|--|-|
+| Create service instances using a provisioning tool. | Azure portal, CLI, ARM Template, Bicep, or Terraform |
+| Automate environments and application deployments. | GitHub, Azure DevOps, GitLab, and Jenkins |
+| Monitor end-to-end using any tool and platform. | Application Insights, Azure Log Analytics, Splunk, Elastic, New Relic, Dynatrace, or AppDynamics |
| Connect Spring applications and interact with your cloud services. | Spring integration with Azure services for data, messaging, eventing, cache, storage, and directories |
-| Securely load app secrets and certificates. | Azure Key Vault |
-| Use familiar development tools. | IntelliJ, VS Code, Eclipse, Spring Tool Suite, Maven, or Gradle |
+| Securely load app secrets and certificates. | Azure Key Vault |
+| Use familiar development tools. | IntelliJ, VS Code, Eclipse, Spring Tool Suite, Maven, or Gradle |
After you create your Enterprise tier service instance and deploy your applications, you can monitor with Application Insights or any other application performance management tools of your choice.
After you create your Enterprise tier service instance and deploy your applicati
The following quickstarts will help you get started using the Enterprise tier: * [View Enterprise Tier offering](how-to-enterprise-marketplace-offer.md)
-* [Provision an Azure Spring Apps instance using the Enterprise tier](quickstart-provision-service-instance-enterprise.md)
-* [Set up Application Configuration Service for Tanzu](quickstart-setup-application-configuration-service-enterprise.md)
-* [Build and deploy applications](quickstart-deploy-apps-enterprise.md)
+* [Introduction to Fitness Store sample](quickstart-sample-app-acme-fitness-store-introduction.md)
+* [Build and deploy apps](quickstart-deploy-apps-enterprise.md)
+* [Configure single sign-on](quickstart-configure-single-sign-on-enterprise.md)
+* [Integrate Azure Database for PostgreSQL and Azure Cache for Redis](quickstart-integrate-azure-database-and-redis-enterprise.md)
+* [Load application secrets using Key Vault](quickstart-key-vault-enterprise.md)
+* [Monitor applications end-to-end](quickstart-monitor-end-to-end-enterprise.md)
+* [Set request rate limits](quickstart-set-request-rate-limits-enterprise.md)
+* [Automate deployments](quickstart-automate-deployments-github-actions-enterprise.md)
Most of the Azure Spring Apps documentation applies to all tiers. Some articles apply only to Enterprise tier or only to Basic/Standard tier, as indicated at the beginning of each article.
spring-cloud Quickstart Automate Deployments Github Actions Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-automate-deployments-github-actions-enterprise.md
+
+ Title: "Quickstart - Automate deployments"
+
+description: Explains how to automate deployments to Azure Spring Apps Enterprise tier by using GitHub Actions and Terraform.
++++ Last updated : 05/31/2022+++
+# Quickstart: Automate deployments
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
+
+This quickstart shows you how to automate deployments to Azure Spring Apps Enterprise tier by using GitHub Actions and Terraform.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A license for Azure Spring Apps Enterprise tier. For more information, see [View Azure Spring Apps Enterprise tier Offer in Azure Marketplace](how-to-enterprise-marketplace-offer.md).
+- [The Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli).
+- [Git](https://git-scm.com/).
+- [jq](https://stedolan.github.io/jq/download/).
+- [!INCLUDE [install-enterprise-extension](includes/install-enterprise-extension.md)]
+
+## Set up a GitHub repository and authenticate
+
+The automation associated with the sample application requires a storage account for maintaining Terraform state. The following steps show you how to create a storage account for use with GitHub Actions and Terraform.
+
+1. Use the following command to create a new resource group to contain the Storage Account:
+
+ ```azurecli
+ az group create \
+ --name <storage-resource-group> \
+ --location <location>
+ ```
+
+1. Use the following command to create a Storage Account:
+
+ ```azurecli
+ az storage account create \
+ --resource-group <storage-resource-group> \
+ --name <storage-account-name> \
+ --location <location> \
+ --sku Standard_RAGRS \
+ --kind StorageV2
+ ```
+
+1. Use the following command to create a Storage Container within the Storage Account:
+
+ ```azurecli
+ az storage container create \
+ --resource-group <storage-resource-group> \
+ --name terraform-state-container \
+ --account-name <storage-account-name> \
+ --auth-mode login
+ ```
+
+1. Use the following commands to get an Azure credential. You need an Azure service principal credential to authorize the Azure login action.
+
+ ```azurecli
+ az login
+ az ad sp create-for-rbac \
+ --role contributor \
+ --scopes /subscriptions/<SUBSCRIPTION_ID> \
+ --sdk-auth
+ ```
+
+ The command should output a JSON object:
+
+ ```json
+ {
+ "clientId": "<GUID>",
+ "clientSecret": "<GUID>",
+ "subscriptionId": "<GUID>",
+ "tenantId": "<GUID>",
+ ...
+ }
+ ```
+
+1. This example uses the [fitness store](https://github.com/Azure-Samples/acme-fitness-store) sample on GitHub. Fork the sample, open the GitHub repository page, and then select the **Settings** tab. Open the **Secrets** menu, then select **Add a new secret**, as shown in the following screenshot.
+
+    :::image type="content" source="media/github-actions/actions1.png" alt-text="Screenshot showing GitHub Settings Add new secret." lightbox="media/github-actions/actions1.png":::
+
+1. Set the secret name to `AZURE_CREDENTIALS` and set its value to the JSON output of the `az ad sp create-for-rbac` command from the **Set up a GitHub repository and authenticate** section.
+
+    :::image type="content" source="media/github-actions/actions2.png" alt-text="Screenshot showing GitHub Settings Set secret data." lightbox="media/github-actions/actions2.png":::
+
+1. Add the following secrets to GitHub Actions:
+
+   - `TF_PROJECT_NAME`: Use a value of your choosing. This value will be the name of your Terraform project.
+   - `AZURE_LOCATION`: The Azure region in which your resources will be created.
+ - `OIDC_JWK_SET_URI`: Use the `JWK_SET_URI` defined in [Quickstart: Configure single sign-on for applications using Azure Spring Apps Enterprise tier](quickstart-configure-single-sign-on-enterprise.md).
+ - `OIDC_CLIENT_ID`: Use the `CLIENT_ID` defined in [Quickstart: Configure single sign-on for applications using Azure Spring Apps Enterprise tier](quickstart-configure-single-sign-on-enterprise.md).
+ - `OIDC_CLIENT_SECRET`: Use the `CLIENT_SECRET` defined in [Quickstart: Configure single sign-on for applications using Azure Spring Apps Enterprise tier](quickstart-configure-single-sign-on-enterprise.md).
+ - `OIDC_ISSUER_URI`: Use the `ISSUER_URI` defined in [Quickstart: Configure single sign-on for applications using Azure Spring Apps Enterprise tier](quickstart-configure-single-sign-on-enterprise.md).
+
+1. Add the secret `TF_BACKEND_CONFIG` to GitHub Actions with the following value:
+
+ ```text
+ resource_group_name = "<storage-resource-group>"
+ storage_account_name = "<storage-account-name>"
+ container_name = "terraform-state-container"
+ key = "dev.terraform.tfstate"
+ ```
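+
+If you'd rather script the secrets than use the GitHub web UI, the following is a sketch using the GitHub CLI (`gh`), assuming it's installed and authenticated against your fork; the file names are placeholders:
+
+```bash
+# credentials.json is assumed to hold the JSON output of 'az ad sp create-for-rbac'.
+gh secret set AZURE_CREDENTIALS < credentials.json
+gh secret set TF_PROJECT_NAME --body "<project-name>"
+gh secret set AZURE_LOCATION --body "<location>"
+# backend-config.txt is assumed to hold the TF_BACKEND_CONFIG value shown above.
+gh secret set TF_BACKEND_CONFIG < backend-config.txt
+```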
+
+## Automate with GitHub Actions
+
+Now you can run GitHub Actions in your repository. The [provision workflow](https://github.com/Azure-Samples/acme-fitness-store/blob/Azure/.github/workflows/provision.yml) provisions all resources necessary to run the example application. The following screenshot shows an example run:
++
+Each application has a [deploy workflow](https://github.com/Azure-Samples/acme-fitness-store/blob/Azure/.github/workflows/catalog.yml) that will redeploy the application when changes are made to that application. The following screenshot shows some example output from the catalog service:
++
+The [cleanup workflow](https://github.com/Azure-Samples/acme-fitness-store/blob/Azure/.github/workflows/cleanup.yml) can be manually run to delete all resources created by the `provision` workflow. The following screenshot shows the output:
++
+## Clean up resources
+
+If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When no longer needed, delete the resource group, which deletes the resources in the resource group. To delete the resource group by using Azure CLI, use the following commands:
+
+```azurecli
+echo "Enter the Resource Group name:" &&
+read resourceGroupName &&
+az group delete --name $resourceGroupName &&
+echo "Press [ENTER] to continue ..."
+```
+
+## Next steps
+
+Continue on to any of the following optional quickstarts:
+
+- [Configure single sign-on](quickstart-configure-single-sign-on-enterprise.md)
+- [Integrate Azure Database for PostgreSQL and Azure Cache for Redis](quickstart-integrate-azure-database-and-redis-enterprise.md)
+- [Load application secrets using Key Vault](quickstart-key-vault-enterprise.md)
+- [Monitor applications end-to-end](quickstart-monitor-end-to-end-enterprise.md)
+- [Set request rate limits](quickstart-set-request-rate-limits-enterprise.md)
spring-cloud Quickstart Configure Single Sign On Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-configure-single-sign-on-enterprise.md
+
+ Title: "Quickstart - Configure single sign-on for applications using Azure Spring Apps Enterprise tier"
+description: Describes single sign-on configuration for Azure Spring Apps Enterprise tier.
++++ Last updated : 05/31/2022+++
+# Quickstart: Configure single sign-on for applications using Azure Spring Apps Enterprise tier
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
+
+This quickstart shows you how to configure single sign-on for applications running on Azure Spring Apps Enterprise tier.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A license for Azure Spring Apps Enterprise tier. For more information, see [View Azure Spring Apps Enterprise tier Offer in Azure Marketplace](how-to-enterprise-marketplace-offer.md).
+- [The Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli).
+- [Git](https://git-scm.com/).
+- [jq](https://stedolan.github.io/jq/download/).
+- [!INCLUDE [install-enterprise-extension](includes/install-enterprise-extension.md)]
+- Complete the steps in [Build and deploy apps to Azure Spring Apps using the Enterprise tier](quickstart-deploy-apps-enterprise.md).
+
+## Prepare single sign-on credentials
+
+To configure single sign-on for the application, you'll need to prepare credentials. The following sections describe steps for an existing provider or provisioning an application registration with Azure Active Directory.
+
+### Use an existing provider
+
+Follow these steps to configure single sign-on using an existing Identity Provider. If you're provisioning an Azure Active Directory App Registration, skip ahead to the following section, [Create and configure an application registration with Azure Active Directory](#create-and-configure-an-application-registration-with-azure-active-directory).
+
+1. Configure your existing identity provider to allow redirects back to Spring Cloud Gateway and API Portal. Spring Cloud Gateway has a single URI to allow re-entry to the gateway. API Portal has two URIs for supporting the user interface and underlying API. Retrieve these URIs by using the following commands, then add them to your single sign-on provider's configuration.
+
+ ```azurecli
+ GATEWAY_URL=$(az spring gateway show \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> | jq -r '.properties.url')
+
+ PORTAL_URL=$(az spring api-portal show \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> | jq -r '.properties.url')
+
+ echo "https://${GATEWAY_URL}/login/oauth2/code/sso"
+ echo "https://${PORTAL_URL}/oauth2-redirect.html"
+ echo "https://${PORTAL_URL}/login/oauth2/code/sso"
+ ```
+
+1. Obtain the `Client ID` and `Client Secret` for your identity provider.
+
+1. Obtain the `Issuer URI` for your identity provider. You must configure the provider with an issuer URI, which is the URI that it asserts as its Issuer Identifier. For example, if the `issuer-uri` provided is "https://example.com", then an OpenID Provider Configuration Request will be made to "https://example.com/.well-known/openid-configuration". The result is expected to be an OpenID Provider Configuration Response.
+
+ > [!NOTE]
+ > You can only use authorization servers supporting OpenID Connect Discovery protocol.
+
+1. Obtain the `JWK URI` for your identity provider for use later. The `JWK URI` typically takes the form `${ISSUER_URI}/keys` or `${ISSUER_URI}/<version>/keys`. The Identity Service application will use the public JSON Web Keys (JWK) to verify JSON Web Tokens (JWT) issued by your single sign-on identity provider's authorization server.
+
+### Create and configure an application registration with Azure Active Directory
+
+To register the application with Azure Active Directory, follow these steps. If you're using an existing provider's credentials, skip ahead to the following section, [Deploy the Identity Service application](#deploy-the-identity-service-application).
+
+1. Use the following command to create an application registration with Azure Active Directory and save the output:
+
+ ```azurecli
+ az ad app create --display-name <app-registration-name> > ad.json
+ ```
+
+1. Use the following command to retrieve the application ID and collect the client secret:
+
+ ```azurecli
+ APPLICATION_ID=$(cat ad.json | jq -r '.appId')
+ az ad app credential reset --id ${APPLICATION_ID} --append > sso.json
+ ```
+
+1. Use the following command to assign a Service Principal to the application registration:
+
+ ```azurecli
+ az ad sp create --id ${APPLICATION_ID}
+ ```
+
+1. Use the following commands to retrieve the URLs for Spring Cloud Gateway and API Portal and add the necessary Reply URLs to the Active Directory App Registration:
+
+ ```azurecli
+ APPLICATION_ID=$(cat ad.json | jq -r '.appId')
+
+ GATEWAY_URL=$(az spring gateway show \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> | jq -r '.properties.url')
+
+ PORTAL_URL=$(az spring api-portal show \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> | jq -r '.properties.url')
+
+ az ad app update \
+ --id ${APPLICATION_ID} \
+ --reply-urls "https://${GATEWAY_URL}/login/oauth2/code/sso" "https://${PORTAL_URL}/oauth2-redirect.html" "https://${PORTAL_URL}/login/oauth2/code/sso"
+ ```
+
+1. Use the following command to retrieve the application's `Client ID`. Save the output to use later in this quickstart.
+
+ ```bash
+ cat sso.json | jq -r '.appId'
+ ```
+
+1. Use the following command to retrieve the application's `Client Secret`. Save the output to use later in this quickstart.
+
+ ```bash
+ cat sso.json | jq -r '.password'
+ ```
+
+1. Use the following command to retrieve the `Issuer URI`. Save the output to use later in this quickstart.
+
+ ```bash
+ TENANT_ID=$(cat sso.json | jq -r '.tenant')
+ echo "https://login.microsoftonline.com/${TENANT_ID}/v2.0"
+ ```
+
+1. Retrieve the `JWK URI` from the output of the following command. The Identity Service application will use the public JSON Web Keys (JWK) to verify JSON Web Tokens (JWT) issued by Active Directory.
+
+ ```bash
+ TENANT_ID=$(cat sso.json | jq -r '.tenant')
+ echo "https://login.microsoftonline.com/${TENANT_ID}/discovery/v2.0/keys"
+ ```
+
+## Deploy the Identity Service application
+
+To complete the single sign-on experience, use the following steps to deploy the Identity Service application. The Identity Service application provides a single route to aid in identifying the user. For these steps, be sure to navigate to the project folder before running any commands.
+
+1. Use the following command to create the `identity-service` application:
+
+ ```azurecli
+ az spring app create \
+ --resource-group <resource-group-name> \
+ --name identity-service \
+ --service <Azure-Spring-Apps-service-instance-name>
+ ```
+
+1. Use the following command to enable externalized configuration for the identity service by binding to Application Configuration Service:
+
+ ```azurecli
+ az spring application-configuration-service bind \
+ --resource-group <resource-group-name> \
+ --app identity-service \
+ --service <Azure-Spring-Apps-service-instance-name>
+ ```
+
+1. Use the following command to enable service discovery and registration for the identity service by binding to Service Registry:
+
+ ```azurecli
+ az spring service-registry bind \
+ --resource-group <resource-group-name> \
+ --app identity-service \
+ --service <Azure-Spring-Apps-service-instance-name>
+ ```
+
+1. Use the following command to deploy the identity service:
+
+ ```azurecli
+ az spring app deploy \
+ --resource-group <resource-group-name> \
+ --name identity-service \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --config-file-pattern identity/default \
+ --source-path apps/acme-identity \
+ --env "JWK_URI=<jwk-uri>"
+ ```
+
+1. Use the following command to route requests to the identity service:
+
+ ```azurecli
+ az spring gateway route-config create \
+ --resource-group <resource-group-name> \
+ --name identity-routes \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --app-name identity-service \
+ --routes-file azure/routes/identity-service.json
+ ```
+
+## Configure single sign-on for Spring Cloud Gateway
+
+You can configure Spring Cloud Gateway to authenticate requests via single sign-on by following these steps:
+
+1. Use the following commands to configure Spring Cloud Gateway to use single sign-on:
+
+ ```azurecli
+ GATEWAY_URL=$(az spring gateway show \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> | jq -r '.properties.url')
+
+ az spring gateway update \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --api-description "Fitness Store API" \
+ --api-title "Fitness Store" \
+ --api-version "v1.0" \
+ --server-url "https://${GATEWAY_URL}" \
+ --allowed-origins "*" \
+ --client-id <client-id> \
+ --client-secret <client-secret> \
+ --scope "openid,profile" \
+ --issuer-uri <issuer-uri>
+ ```
+
+1. Instruct the cart service application to use Spring Cloud Gateway for authentication. Use the following command to provide the necessary environment variables:
+
+ ```azurecli
+ az spring app update \
+ --resource-group <resource-group-name> \
+ --name cart-service \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --env "AUTH_URL=https://${GATEWAY_URL}" "CART_PORT=8080"
+ ```
+
+1. Instruct the order service application to use Spring Cloud Gateway for authentication. Use the following command to provide the necessary environment variables:
+
+ ```azurecli
+ az spring app update \
+ --resource-group <resource-group-name> \
+ --name order-service \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --env "AcmeServiceSettings__AuthUrl=https://${GATEWAY_URL}"
+ ```
+
+1. Use the following command to retrieve the URL for Spring Cloud Gateway:
+
+ ```bash
+ echo "https://${GATEWAY_URL}"
+ ```
+
+ You can open the output URL in a browser to explore the updated application. The Log In function will now work, allowing you to add items to the cart and place orders. After you sign in, the customer information button will display the signed-in username.
+
+## Configure single sign-on for API Portal
+
+You can configure API Portal to use single sign-on to require authentication before exploring APIs. Use the following commands to configure single sign-on for API Portal:
+
+```azurecli
+PORTAL_URL=$(az spring api-portal show \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> | jq -r '.properties.url')
+
+az spring api-portal update \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --client-id <client-id> \
+ --client-secret <client-secret> \
+ --scope "openid,profile,email" \
+ --issuer-uri <issuer-uri>
+```
+
+Use the following commands to retrieve the URL for API Portal:
+
+```azurecli
+PORTAL_URL=$(az spring api-portal show \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> | jq -r '.properties.url')
+
+echo "https://${PORTAL_URL}"
+```
+
+You can open the output URL in a browser to explore the application APIs. This time, you'll be directed to sign in before exploring APIs.
+++
+## Clean up resources
+
+If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When no longer needed, delete the resource group, which deletes the resources in the resource group. To delete the resource group by using Azure CLI, use the following commands:
+
+```azurecli
+echo "Enter the Resource Group name:" &&
+read resourceGroupName &&
+az group delete --name $resourceGroupName &&
+echo "Press [ENTER] to continue ..."
+```
+
+## Next steps
+
+Continue on to any of the following optional quickstarts:
+
+- [Integrate Azure Database for PostgreSQL and Azure Cache for Redis](quickstart-integrate-azure-database-and-redis-enterprise.md)
+- [Load application secrets using Key Vault](quickstart-key-vault-enterprise.md)
+- [Monitor applications end-to-end](quickstart-monitor-end-to-end-enterprise.md)
+- [Set request rate limits](quickstart-set-request-rate-limits-enterprise.md)
+- [Automate deployments](quickstart-automate-deployments-github-actions-enterprise.md)
spring-cloud Quickstart Deploy Apps Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-deploy-apps-enterprise.md
Title: "Quickstart - Build and deploy apps to Azure Spring Apps Enterprise tier" description: Describes app deployment to Azure Spring Apps Enterprise tier.--++ Previously updated : 02/09/2022- Last updated : 05/31/2022+ # Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise tier
This quickstart shows you how to build and deploy applications to Azure Spring A
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An already provisioned Azure Spring Apps Enterprise tier instance. For more information, see [Quickstart: Provision an Azure Spring Apps service using the Enterprise tier](quickstart-provision-service-instance-enterprise.md).-- [Apache Maven](https://maven.apache.org/download.cgi)
+- A license for Azure Spring Apps Enterprise tier. For more information, see [View Azure Spring Apps Enterprise tier Offer in Azure Marketplace](how-to-enterprise-marketplace-offer.md).
- [The Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli).
+- [Git](https://git-scm.com/).
+- [jq](https://stedolan.github.io/jq/download/).
- [!INCLUDE [install-enterprise-extension](includes/install-enterprise-extension.md)]
-## Create and configure apps
+## Download the sample app
-To create apps on Azure Spring Apps, follow these steps:
+Use the following commands to download the sample:
-1. To set the CLI defaults, use the following commands. Be sure to replace the placeholders with your own values.
+```bash
+git clone https://github.com/Azure-Samples/acme-fitness-store
+cd acme-fitness-store
+```
+
+## Provision a service instance
+
+Use the following steps to provision an Azure Spring Apps service instance.
+
+1. Use the following command to sign in to the Azure CLI and choose your active subscription:
```azurecli
- az account set --subscription=<subscription-id>
- az configure --defaults group=<resource-group-name> spring-cloud=<service-name>
+ az login
+ az account list --output table
+ az account set --subscription <subscription-ID>
```
-1. To create the two core applications for PetClinic, `api-gateway` and `customers-service`, use the following commands:
+1. Use the following command to accept the legal terms and privacy statements for the Enterprise tier. This step is necessary only if your subscription has never been used to create an Enterprise tier instance of Azure Spring Apps.
```azurecli
- az spring app create --name api-gateway --instance-count 1 --memory 2Gi --assign-endpoint
- az spring app create --name customers-service --instance-count 1 --memory 2Gi
+ az provider register --namespace Microsoft.SaaS
+ az term accept \
+ --publisher vmware-inc \
+ --product azure-spring-cloud-vmware-tanzu-2 \
+ --plan tanzu-asc-ent-mtr
```
-## Bind apps to Application Configuration Service for Tanzu and Tanzu Service Registry
+1. Select a location that supports Azure Spring Apps Enterprise tier. For more information, see the [Azure Spring Apps FAQ](faq.md).
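+
+    To list candidate regions before choosing, you can use the following command; confirm Enterprise tier availability in the FAQ:
+
+    ```azurecli
+    az account list-locations --output table
+    ```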
+
+1. Use the following command to create a resource group:
+
+ ```azurecli
+ az group create \
+ --name <resource-group-name> \
+ --location <location>
+ ```
+
+ For more information about resource groups, see [What is Azure Resource Manager?](../azure-resource-manager/management/overview.md).
+
+1. Prepare a name for your Azure Spring Apps service instance. The name must be between 4 and 32 characters long and can contain only lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
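+
+    If you want to sanity-check a candidate name in your shell first, the following is a hypothetical helper that encodes these rules:
+
+    ```bash
+    # Hypothetical check of the documented naming rules: 4-32 characters;
+    # lowercase letters, numbers, and hyphens only; starts with a letter;
+    # ends with a letter or number.
+    NAME="my-enterprise-demo"   # example candidate name
+    if [[ "$NAME" =~ ^[a-z][a-z0-9-]{2,30}[a-z0-9]$ ]]; then
+        echo "Name looks valid"
+    else
+        echo "Name violates the naming rules"
+    fi
+    ```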
+
+1. Use the following command to create an Azure Spring Apps service instance:
-### [Portal](#tab/azure-portal)
+ ```azurecli
+ az spring create \
+ --resource-group <resource-group-name> \
+ --name <Azure-Spring-Apps-service-instance-name> \
+ --sku enterprise \
+ --enable-application-configuration-service \
+ --enable-service-registry \
+ --enable-gateway \
+ --enable-api-portal
+ ```
-To bind apps to Application Configuration Service for VMware Tanzu®, follow these steps.
+1. Use the following command to create a Log Analytics Workspace to be used for your Azure Spring Apps service:
-1. In the Azure portal, select **Application Configuration Service**.
-1. Select **App binding**, then select **Bind app**.
-1. Choose one app in the dropdown and select **Apply** to bind the application to Application Configuration Service for Tanzu.
+ ```azurecli
+ az monitor log-analytics workspace create \
+ --resource-group <resource-group-name> \
+ --workspace-name <workspace-name> \
+ --location <location>
+ ```
- :::image type="content" source="media/enterprise/getting-started-enterprise/config-service-app-bind-dropdown.png" alt-text="Screenshot of Azure portal Azure Spring Apps with Application Configuration Service page and 'App binding' section with 'Bind app' dialog showing.":::
+1. Use the following commands to retrieve the Resource ID for your Log Analytics Workspace and Azure Spring Apps service instance:
-A list under **App name** shows the apps bound with Application Configuration Service for Tanzu, as shown in the following screenshot:
+ ```bash
+ LOG_ANALYTICS_RESOURCE_ID=$(az monitor log-analytics workspace show \
+ --resource-group <resource-group-name> \
+ --workspace-name <workspace-name> | jq -r '.id')
+ SPRING_CLOUD_RESOURCE_ID=$(az spring show \
+ --resource-group <resource-group-name> \
+ --name <Azure-Spring-Apps-service-instance-name> | jq -r '.id')
+ ```
-To bind apps to VMware Tanzu® Service Registry, follow these steps.
+1. Use the following command to configure diagnostic settings for the Azure Spring Apps Service:
-1. Select **Service Registry**.
-1. Select **App binding**, then select **Bind app**.
-1. Choose one app in the dropdown, and then select **Apply** to bind the application to Tanzu Service Registry.
+ ```azurecli
+ az monitor diagnostic-settings create \
+ --name "send-logs-and-metrics-to-log-analytics" \
+ --resource ${SPRING_CLOUD_RESOURCE_ID} \
+ --workspace ${LOG_ANALYTICS_RESOURCE_ID} \
+ --logs '[
+ {
+ "category": "ApplicationConsole",
+ "enabled": true,
+ "retentionPolicy": {
+ "enabled": false,
+ "days": 0
+ }
+ },
+ {
+ "category": "SystemLogs",
+ "enabled": true,
+ "retentionPolicy": {
+ "enabled": false,
+ "days": 0
+ }
+ },
+ {
+ "category": "IngressLogs",
+ "enabled": true,
+ "retentionPolicy": {
+ "enabled": false,
+ "days": 0
+ }
+ }
+ ]' \
+ --metrics '[
+ {
+ "category": "AllMetrics",
+ "enabled": true,
+ "retentionPolicy": {
+ "enabled": false,
+ "days": 0
+ }
+ }
+ ]'
+ ```
- :::image type="content" source="media/enterprise/getting-started-enterprise/service-reg-app-bind-dropdown.png" alt-text="Screenshot of Azure portal Azure Spring Apps with Service Registry page and 'Bind app' dialog showing.":::
+1. Use the following commands to create applications for `cart-service`, `order-service`, `payment-service`, `catalog-service`, and `frontend`:
-A list under **App name** shows the apps bound with Tanzu Service Registry, as shown in the following screenshot:
+ ```azurecli
+ az spring app create \
+ --resource-group <resource-group-name> \
+ --name cart-service \
+ --service <Azure-Spring-Apps-service-instance-name>
+
+ az spring app create \
+ --resource-group <resource-group-name> \
+ --name order-service \
+ --service <Azure-Spring-Apps-service-instance-name>
+
+ az spring app create \
+ --resource-group <resource-group-name> \
+ --name payment-service \
+ --service <Azure-Spring-Apps-service-instance-name>
+
+ az spring app create \
+ --resource-group <resource-group-name> \
+   --name catalog-service \
+ --service <Azure-Spring-Apps-service-instance-name>
+
+ az spring app create \
+ --resource-group <resource-group-name> \
+ --name frontend \
+ --service <Azure-Spring-Apps-service-instance-name>
+ ```
+## Externalize configuration with Application Configuration Service
-### [Azure CLI](#tab/azure-cli)
+Use the following steps to configure Application Configuration Service.
-To bind apps to Application Configuration Service for VMware Tanzu® and VMware Tanzu® Service Registry, use the following commands.
+1. Use the following command to create a configuration repository for Application Configuration Service:
+
+ ```azurecli
+ az spring application-configuration-service git repo add \
+ --resource-group <resource-group-name> \
+ --name acme-fitness-store-config \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --label main \
+ --patterns "catalog/default,catalog/key-vault,identity/default,identity/key-vault,payment/default" \
+ --uri "https://github.com/Azure-Samples/acme-fitness-store-config"
+ ```
+
+1. Use the following commands to bind applications to Application Configuration Service:
+
+ ```azurecli
+ az spring application-configuration-service bind \
+ --resource-group <resource-group-name> \
+ --app payment-service \
+ --service <Azure-Spring-Apps-service-instance-name>
+
+ az spring application-configuration-service bind \
+ --resource-group <resource-group-name> \
+ --app catalog-service \
+ --service <Azure-Spring-Apps-service-instance-name>
+ ```
+
+## Activate service registration and discovery
+
+To activate service registration and discovery, use the following commands to bind applications to Service Registry:
```azurecli
-az spring application-configuration-service bind --app api-gateway
-az spring application-configuration-service bind --app customers-service
-az spring service-registry bind --app api-gateway
-az spring service-registry bind --app customers-service
+az spring service-registry bind \
+ --resource-group <resource-group-name> \
+ --app payment-service \
+ --service <Azure-Spring-Apps-service-instance-name>
+
+az spring service-registry bind \
+ --resource-group <resource-group-name> \
+ --app catalog-service \
+ --service <Azure-Spring-Apps-service-instance-name>
``` -
+## Deploy polyglot applications with Tanzu Build Service
-## Build and deploy applications
+Use the following steps to build and deploy applications. For these steps, make sure that the terminal is in the project folder before running any commands.
-The following sections show how to build and deploy applications.
+1. Use the following command to create a custom builder in Tanzu Build Service:
-### Build the applications locally
+ ```azurecli
+ az spring build-service builder create \
+ --resource-group <resource-group-name> \
+ --name quickstart-builder \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --builder-file azure/builder.json
+ ```
-To build locally, use the following steps:
+1. Use the following command to build and deploy the payment service:
-1. Clone the sample app repository to your Azure Cloud account, change the directory, and build the project using the following commands:
+ ```azurecli
+ az spring app deploy \
+ --resource-group <resource-group-name> \
+ --name payment-service \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --config-file-pattern payment/default \
+ --source-path apps/acme-payment
+ ```
- ```bash
- git clone -b enterprise https://github.com/azure-samples/spring-petclinic-microservices
- cd spring-petclinic-microservices
- mvn clean package -DskipTests
+1. Use the following command to build and deploy the catalog service:
+
+ ```azurecli
+ az spring app deploy \
+ --resource-group <resource-group-name> \
+ --name catalog-service \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --config-file-pattern catalog/default \
+ --source-path apps/acme-catalog
```
- Compiling the project can take several minutes. Once compilation is complete, you'll have individual JAR files for each service in its respective folder.
+1. Use the following command to build and deploy the order service:
+
+ ```azurecli
+ az spring app deploy \
+ --resource-group <resource-group-name> \
+ --name order-service \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --builder quickstart-builder \
+ --source-path apps/acme-order
+ ```
-1. Deploy the JAR files built in the previous step using the following commands:
+1. Use the following command to build and deploy the cart service:
```azurecli az spring app deploy \
- --name api-gateway \
- --artifact-path spring-petclinic-api-gateway/target/spring-petclinic-api-gateway-2.3.6.jar \
- --config-file-patterns api-gateway
+ --resource-group <resource-group-name> \
+ --name cart-service \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --builder quickstart-builder \
+ --env "CART_PORT=8080" \
+ --source-path apps/acme-cart
+ ```
+
+1. Use the following command to build and deploy the frontend application:
+
+ ```azurecli
az spring app deploy \
- --name customers-service \
- --artifact-path spring-petclinic-customers-service/target/spring-petclinic-customers-service-2.3.6.jar \
- --config-file-patterns customers-service
+ --resource-group <resource-group-name> \
+ --name frontend \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --source-path apps/acme-shopping
+ ```
+
+> [!TIP]
+> To troubleshoot deployments, you can use the following command to stream logs in real time while the app is running: `az spring app logs --name <app name> --follow`.
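+
+The following sketch shows the same command with the resource group and service instance parameters used throughout this quickstart, streaming logs for the cart service:
+
+```azurecli
+az spring app logs \
+    --resource-group <resource-group-name> \
+    --name cart-service \
+    --service <Azure-Spring-Apps-service-instance-name> \
+    --follow
+```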
+
+## Route requests to apps with Spring Cloud Gateway
+
+Use the following steps to configure Spring Cloud Gateway and set up routes to applications.
+
+1. Use the following command to assign an endpoint to Spring Cloud Gateway:
+
+ ```azurecli
+ az spring gateway update \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --assign-endpoint true
```
-1. Query the application status after deployment by using the following command:
+1. Use the following commands to configure Spring Cloud Gateway API information:
```azurecli
- az spring app list --output table
+ GATEWAY_URL=$(az spring gateway show \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> | jq -r '.properties.url')
+
+ az spring gateway update \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --api-description "Fitness Store API" \
+ --api-title "Fitness Store" \
+ --api-version "v1.0" \
+ --server-url "https://${GATEWAY_URL}" \
+ --allowed-origins "*"
```
- This command produces output similar to the following example:
+1. Use the following command to create routes for the cart service:
- ```output
- Name Location ResourceGroup Public Url Production Deployment Provisioning State CPU Memory Running Instance Registered Instance Persistent Storage Bind Service Registry Bind Application Configuration Service
- -- - - - -- -- -- -- -- -- -
- api-gateway eastus <resource group> https://<service_name>-api-gateway.azuremicroservices.io default Succeeded 1 2Gi 1/1 1/1 - True True
- customers-service eastus <resource group> default Succeeded 1 2Gi 1/1 1/1 - True True
+ ```azurecli
+ az spring gateway route-config create \
+ --resource-group <resource-group-name> \
+ --name cart-routes \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --app-name cart-service \
+ --routes-file azure/routes/cart-service.json
```
-### Verify the applications
+1. Use the following command to create routes for the order service:
+
+ ```azurecli
+ az spring gateway route-config create \
+ --resource-group <resource-group-name> \
+ --name order-routes \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --app-name order-service \
+ --routes-file azure/routes/order-service.json
+ ```
-Access the `api gateway` and `customers service` applications from the browser using the `Public Url` shown above. The Public Url has the format `https://<service_name>-api-gateway.azuremicroservices.io`.
+1. Use the following command to create routes for the catalog service:
-![Access petclinic customers service](./media/enterprise/getting-started-enterprise/access-customers-service.png)
+ ```azurecli
+ az spring gateway route-config create \
+ --resource-group <resource-group-name> \
+ --name catalog-routes \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --app-name catalog-service \
+ --routes-file azure/routes/catalog-service.json
+ ```
+
+1. Use the following command to create routes for the frontend:
+
+ ```azurecli
+ az spring gateway route-config create \
+ --resource-group <resource-group-name> \
+ --name frontend-routes \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --app-name frontend \
+ --routes-file azure/routes/frontend.json
+ ```
+
+1. Use the following commands to retrieve the URL for Spring Cloud Gateway:
+
+ ```azurecli
+ GATEWAY_URL=$(az spring gateway show \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> | jq -r '.properties.url')
+
+ echo "https://${GATEWAY_URL}"
+ ```
+
+ You can open the output URL in a browser to explore the deployed application.
+
+## Browse and try APIs with API Portal
+
+Use the following steps to configure API Portal.
+
+1. Use the following command to assign an endpoint to API Portal:
+
+ ```azurecli
+ az spring api-portal update \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --assign-endpoint true
+ ```
+
+1. Use the following commands to retrieve the URL for API Portal:
+
+ ```azurecli
+ PORTAL_URL=$(az spring api-portal show \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> | jq -r '.properties.url')
+
+ echo "https://${PORTAL_URL}"
+ ```
+
+ You can open the output URL in a browser to explore the application APIs.
++ ## Clean up resources
echo "Press [ENTER] to continue ..."
## Next steps
-> [!div class="nextstepaction"]
-> [Quickstart: Set up a Log Analytics workspace](quickstart-setup-log-analytics.md)
+Now that you've successfully built and deployed your app, continue on to any of the following optional quickstarts:
+
+- [Configure single sign-on](quickstart-configure-single-sign-on-enterprise.md)
+- [Integrate Azure Database for PostgreSQL and Azure Cache for Redis](quickstart-integrate-azure-database-and-redis-enterprise.md)
+- [Load application secrets using Key Vault](quickstart-key-vault-enterprise.md)
+- [Monitor applications end-to-end](quickstart-monitor-end-to-end-enterprise.md)
+- [Set request rate limits](quickstart-set-request-rate-limits-enterprise.md)
+- [Automate deployments](quickstart-automate-deployments-github-actions-enterprise.md)
spring-cloud Quickstart Deploy Infrastructure Vnet Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-deploy-infrastructure-vnet-azure-cli.md
Previously updated : 11/12/2021 Last updated : 05/31/2022 # Quickstart: Provision Azure Spring Apps using Azure CLI
Last updated 11/12/2021
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ❌ Basic tier ✔️ Standard tier ✔️ Enterprise tier
This quickstart describes how to use Azure CLI to deploy an Azure Spring Apps cluster into an existing virtual network. Azure Spring Apps makes it easy to deploy Spring applications to Azure without any code changes. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
+The Enterprise tier deployment plan includes the following Tanzu components:
+
+* Build Service
+* Application Configuration Service
+* Service Registry
+* Spring Cloud Gateway
+* API Portal
+ ## Prerequisites * An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. * Two dedicated subnets for the Azure Spring Apps cluster, one for the service runtime and another for the Spring applications. For subnet and virtual network requirements, see the [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). * An existing Log Analytics workspace for Azure Spring Apps diagnostics settings and a workspace-based Application Insights resource. For more information, see [Analyze logs and metrics with diagnostics settings](diagnostic-services.md) and [Application Insights Java In-Process Agent in Azure Spring Apps](how-to-application-insights.md).
-* Three internal Classless Inter-Domain Routing (CIDR) ranges (at least */16* each) that you've identified for use by the Azure Spring Apps cluster. These CIDR ranges will not be directly routable and will be used only internally by the Azure Spring Apps cluster. Clusters may not use *169.254.0.0/16*, *172.30.0.0/16*, *172.31.0.0/16*, or *192.0.2.0/24* for the internal Spring app CIDR ranges, or any IP ranges included within the cluster virtual network address range.
+* Three internal Classless Inter-Domain Routing (CIDR) ranges (at least */16* each) that you've identified for use by the Azure Spring Apps cluster. These CIDR ranges won't be directly routable and will be used only internally by the Azure Spring Apps cluster. Clusters may not use *169.254.0.0/16*, *172.30.0.0/16*, *172.31.0.0/16*, or *192.0.2.0/24* for the internal Spring app CIDR ranges, or any IP ranges included within the cluster virtual network address range.
* Service permission granted to the virtual network. The Azure Spring Apps Resource Provider requires Owner permission to your virtual network in order to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance. For instructions and more information, see the [Grant service permission to the virtual network](how-to-deploy-in-azure-virtual-network.md#grant-service-permission-to-the-virtual-network) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). * If you're using Azure Firewall or a Network Virtual Appliance (NVA), you'll also need to satisfy the following prerequisites: * Network and fully qualified domain name (FQDN) rules. For more information, see [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements). * A unique User Defined Route (UDR) applied to each of the service runtime and Spring application subnets. For more information about UDRs, see [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md). The UDR should be configured with a route for *0.0.0.0/0* with a destination of your NVA before deploying the Azure Spring Apps cluster. For more information, see the [Bring your own route table](how-to-deploy-in-azure-virtual-network.md#bring-your-own-route-table) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). * [Azure CLI](/cli/azure/install-azure-cli)
+* If you're deploying Azure Spring Apps Enterprise tier for the first time in the target subscription, use the following commands to register the provider and accept the legal terms and privacy statements for the Enterprise tier.
+
+ ```azurecli
+ az provider register --namespace Microsoft.SaaS
+ az term accept \
+ --publisher vmware-inc \
+ --product azure-spring-cloud-vmware-tanzu-2 \
+ --plan tanzu-asc-ent-mtr
+ ```
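+
+    To confirm the registration completed (an optional check), you can query the provider's registration state, which should eventually report `Registered`:
+
+    ```azurecli
+    az provider show --namespace Microsoft.SaaS --query registrationState --output tsv
+    ```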
## Review the Azure CLI deployment script The deployment script used in this quickstart is from the [Azure Spring Apps reference architecture](reference-architecture.md).
-```azurecli
-#!/bin/bash
-
-echo "Enter Azure Subscription ID: "
-read subscription
-subscription=$subscription
-
-echo "Enter Azure region for resource deployment: "
-read region
-location=$region
-
-echo "Enter Azure Spring Apps Resource Group Name: "
-read azurespringappsrg
-azurespringapps_resource_group_name=$azurespringappsrg
-
-echo "Enter Azure Spring Apps VNet Resource Group Name: "
-read azurespringappsvnetrg
-azurespringapps_vnet_resource_group_name=$azurespringappsvnetrg
-
-echo "Enter Azure Spring Apps Spoke VNet : "
-read azurespringappsappspokevnet
-azurespringappsappspokevnet=$azurespringappsappspokevnet
-
-echo "Enter Azure Spring Apps App SubNet : "
-read azurespringappsappsubnet
-azurespringapps_app_subnet_name='/subscriptions/'$subscription'/resourcegroups/'$azurespringapps_vnet_resource_group_name'/providers/Microsoft.Network/virtualNetworks/'$azurespringappsappspokevnet'/subnets/'$azurespringappsappsubnet
-
-echo "Enter Azure Spring Apps Service SubNet : "
-read azurespringappsservicesubnet
-azurespringapps_service_subnet_name='/subscriptions/'$subscription'/resourcegroups/'$azurespringapps_vnet_resource_group_name'/providers/Microsoft.Network/virtualNetworks/'$azurespringappsappspokevnet'/subnets/'$azurespringappsservicesubnet
-
-echo "Enter Azure Log Analytics Workspace Resource Group Name: "
-read loganalyticsrg
-loganalyticsrg=$loganalyticsrg
-
-echo "Enter Log Analytics Workspace Resource ID: "
-read workspace
-workspaceID='/subscriptions/'$subscription'/resourcegroups/'$loganalyticsrg'/providers/microsoft.operationalinsights/workspaces/'$workspace
-
-echo "Enter Reserved CIDR Ranges for Azure Spring Apps: "
-read reservedcidrrange
-reservedcidrrange=$reservedcidrrange
-
-echo "Enter key=value pair used for tagging Azure Resources (space separated for multiple tags): "
-read tag
-tags=$tag
-
-randomstring=$(LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | fold -w 13 | head -n 1)
-azurespringapps_service='spring-'$randomstring #Name of unique Azure Spring Apps service instance
-azurespringapps_appinsights=$azurespringapps_service
-azurespringapps_resourceid='/subscriptions/'$subscription'/resourceGroups/'$azurespringapps_resource_group_name'/providers/Microsoft.AppPlatform/Spring/'$azurespringapps_service
-
-# Create Application Insights
-az monitor app-insights component create \
- --app ${azurespringapps_service} \
- --location ${location} \
- --kind web \
- -g ${azurespringappsrg} \
- --application-type web \
- --workspace ${workspaceID}
-
-# Create Azure Spring Apps Instance
-az spring create \
- -n ${azurespringapps_service} \
- -g ${azurespringappsrg} \
- -l ${location} \
- --enable-java-agent true \
- --app-insights ${azurespringapps_service} \
- --sku Standard \
- --app-subnet ${azurespringapps_app_subnet_name} \
- --service-runtime-subnet ${azurespringapps_service_subnet_name} \
- --reserved-cidr-range ${reservedcidrrange} \
- --tags ${tags}
-
-# Update diagnostic setting for Azure Spring Apps instance
-az monitor diagnostic-settings create \
- --name monitoring \
- --resource ${azurespringapps_resourceid} \
- --logs '[{"category": "ApplicationConsole","enabled": true}]' \
- --workspace ${workspaceID}
-```
+### [Standard tier](#tab/azure-spring-apps-standard)
++
+### [Enterprise tier](#tab/azure-spring-apps-enterprise)
+++ ## Deploy the cluster
To deploy the Azure Spring Apps cluster using the Azure CLI script, follow these
az group create --name <your-resource-group-name> --location <location-name> ```
-1. Save the [deploySpringCloud.sh](https://raw.githubusercontent.com/Azure/azure-spring-cloud-reference-architecture/main/CLI/brownfield-deployment/deploySpringCloud.sh) Bash script locally, then execute it from the Bash prompt.
+1. Save the script for Azure Spring Apps [Standard tier](https://raw.githubusercontent.com/Azure/azure-spring-cloud-reference-architecture/main/CLI/brownfield-deployment/azuredeploySpringStandard.sh) or [Enterprise tier](https://raw.githubusercontent.com/Azure/azure-spring-cloud-reference-architecture/main/CLI/brownfield-deployment/azuredeploySpringEnterprise.sh) locally, then run it from the Bash prompt.
+
+ **Standard tier:**
+
+ ```azurecli
+ ./azuredeploySpringStandard.sh
+ ```
+
+ **Enterprise tier:**
```azurecli
- ./deploySpringCloud.sh
+ ./azuredeploySpringEnterprise.sh
``` 1. Enter the following values when prompted by the script:
To deploy the Azure Spring Apps cluster using the Azure CLI script, follow these
* The name of the resource group that you created earlier. * The name of the virtual network resource group where you'll deploy your resources. * The name of the spoke virtual network (for example, *vnet-spoke*).
- * The name of the subnet to be used by the Azure Spring Apps service (for example, *snet-app*).
- * The name of the subnet to be used by the Spring runtime service (for example, *snet-runtime*).
+ * The name of the subnet to be used by the Azure Spring Apps Application Service (for example, *snet-app*).
+ * The name of the subnet to be used by the Azure Spring Apps Runtime Service (for example, *snet-runtime*).
* The name of the resource group for the Azure Log Analytics workspace to be used for storing diagnostic logs. * The name of the Azure Log Analytics workspace (for example, *la-cb5sqq6574o2a*). * The CIDR ranges from your virtual network to be used by Azure Spring Apps (for example, *XX.X.X.X/16,XX.X.X.X/16,XX.X.X.X/16*).
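
After the script completes, you can verify the new service instance from the CLI. The following is a minimal check, assuming the instance name that the script generates and echoes to the console:

```azurecli
az spring show \
    --name <your-azure-spring-apps-instance-name> \
    --resource-group <your-resource-group-name> \
    --query properties.provisioningState
```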
spring-cloud Quickstart Deploy Infrastructure Vnet Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-deploy-infrastructure-vnet-bicep.md
Previously updated : 11/12/2021 Last updated : 05/31/2022 # Quickstart: Provision Azure Spring Apps using Bicep
Last updated 11/12/2021
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ❌ Basic tier ✔️ Standard tier ✔️ Enterprise tier
This quickstart describes how to use a Bicep template to deploy an Azure Spring Apps cluster into an existing virtual network. Azure Spring Apps makes it easy to deploy Spring applications to Azure without any code changes. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
+The Enterprise tier deployment plan includes the following Tanzu components:
+
+* Build Service
+* Application Configuration Service
+* Service Registry
+* Spring Cloud Gateway
+* API Portal
+ ## Prerequisites * An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. * Two dedicated subnets for the Azure Spring Apps cluster, one for the service runtime and another for the Spring applications. For subnet and virtual network requirements, see the [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). * An existing Log Analytics workspace for Azure Spring Apps diagnostics settings. For more information, see [Analyze logs and metrics with diagnostics settings](diagnostic-services.md).
-* Three internal Classless Inter-Domain Routing (CIDR) ranges (at least */16* each) that you've identified for use by the Azure Spring Apps cluster. These CIDR ranges will not be directly routable and will be used only internally by the Azure Spring Apps cluster. Clusters may not use *169.254.0.0/16*, *172.30.0.0/16*, *172.31.0.0/16*, or *192.0.2.0/24* for the internal Spring app CIDR ranges, or any IP ranges included within the cluster virtual network address range.
+* Three internal Classless Inter-Domain Routing (CIDR) ranges (at least */16* each) that you've identified for use by the Azure Spring Apps cluster. These CIDR ranges won't be directly routable and will be used only internally by the Azure Spring Apps cluster. Clusters may not use *169.254.0.0/16*, *172.30.0.0/16*, *172.31.0.0/16*, or *192.0.2.0/24* for the internal Spring app CIDR ranges, or any IP ranges included within the cluster virtual network address range.
* Service permission granted to the virtual network. The Azure Spring Apps Resource Provider requires Owner permission to your virtual network in order to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance. For instructions and more information, see the [Grant service permission to the virtual network](how-to-deploy-in-azure-virtual-network.md#grant-service-permission-to-the-virtual-network) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). * If you're using Azure Firewall or a Network Virtual Appliance (NVA), you'll also need to satisfy the following prerequisites: * Network and fully qualified domain name (FQDN) rules. For more information, see [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements). * A unique User Defined Route (UDR) applied to each of the service runtime and Spring application subnets. For more information about UDRs, see [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md). The UDR should be configured with a route for *0.0.0.0/0* with a destination of your NVA before deploying the Azure Spring Apps cluster. For more information, see the [Bring your own route table](how-to-deploy-in-azure-virtual-network.md#bring-your-own-route-table) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). * [Azure CLI](/cli/azure/install-azure-cli)
+* If you're deploying Azure Spring Apps Enterprise tier for the first time in the target subscription, use the following commands to register the provider and accept the legal terms and privacy statements for the Enterprise tier.
+
+ ```azurecli
+ az provider register --namespace Microsoft.SaaS
+ az term accept \
+ --publisher vmware-inc \
+ --product azure-spring-cloud-vmware-tanzu-2 \
+ --plan tanzu-asc-ent-mtr
+ ```
## Deploy using Bicep
-To deploy the cluster, follow these steps:
-
-1. Create an *azuredeploy.bicep* file with the following contents:
-
- ```Bicep
- @description('The instance name of the Azure Spring Apps resource')
- param springInstanceName string
-
- @description('The name of the Application Insights instance for Azure Spring Apps')
- param appInsightsName string
-
- @description('The resource ID of the existing Log Analytics workspace. This will be used for both diagnostics logs and Application Insights')
- param laWorkspaceResourceId string
-
- @description('The resourceID of the Azure Spring Apps App Subnet')
- param springAppSubnetID string
-
- @description('The resourceID of the Azure Spring Apps Runtime Subnet')
- param springRuntimeSubnetID string
-
- @description('Comma-separated list of IP address ranges in CIDR format. The IP ranges are reserved to host underlying Azure Spring Apps infrastructure, which should be 3 at least /16 unused IP ranges, must not overlap with any Subnet IP ranges')
- param springServiceCidrs string = '10.0.0.0/16,10.2.0.0/16,10.3.0.1/16'
-
- @description('The tags that will be associated to the Resources')
- param tags object = {
- environment: 'lab'
- }
-
- var springSkuName = 'S0'
- var springSkuTier = 'Standard'
- var location = resourceGroup().location
-
- resource appInsights 'Microsoft.Insights/components@2020-02-02-preview' = {
- name: appInsightsName
- location: location
- kind: 'web'
- tags: tags
- properties: {
- Application_Type: 'web'
- Flow_Type: 'Bluefield'
- Request_Source: 'rest'
- WorkspaceResourceId: laWorkspaceResourceId
- }
- }
-
- resource springInstance 'Microsoft.AppPlatform/Spring@2020-07-01' = {
- name: springInstanceName
- location: location
- tags: tags
- sku: {
- name: springSkuName
- tier: springSkuTier
- }
- properties: {
- networkProfile: {
- serviceCidr: springServiceCidrs
- serviceRuntimeSubnetId: springRuntimeSubnetID
- appSubnetId: springAppSubnetID
- }
- }
- }
-
- resource springMonitoringSettings 'Microsoft.AppPlatform/Spring/monitoringSettings@2020-07-01' = {
- name: '${springInstance.name}/default'
- properties: {
- traceEnabled: true
- appInsightsInstrumentationKey: appInsights.properties.InstrumentationKey
- }
- }
-
- resource springDiagnostics 'microsoft.insights/diagnosticSettings@2017-05-01-preview' = {
- name: 'monitoring'
- scope: springInstance
- properties: {
- workspaceId: laWorkspaceResourceId
- logs: [
- {
- category: 'ApplicationConsole'
- enabled: true
- retentionPolicy: {
- days: 30
- enabled: false
- }
- }
- ]
- }
- }
- ```
+To deploy the cluster, use the following steps.
-1. Open a Bash window and run the following Azure CLI command, replacing the *\<value>* placeholders with the following values:
+First, create an *azuredeploy.bicep* file with the following contents:
- * **resource-group:** The resource group name for deploying the Azure Spring Apps instance.
- * **springCloudInstanceName:** The name of the Azure Spring Apps resource.
- * **appInsightsName:** The name of the Application Insights instance for Azure Spring Apps.
- * **laWorkspaceResourceId:** The resource ID of the existing Log Analytics workspace (for example, */ subscriptions/\<your subscription>/resourcegroups/\<your log analytics resource group>/providers/ Microsoft.OperationalInsights/workspaces/\<your log analytics workspace name>*.)
- * **springCloudAppSubnetID:** The resourceID of the Azure Spring Apps App Subnet.
- * **springCloudRuntimeSubnetID:** The resourceID of the Azure Spring Apps Runtime Subnet.
- * **springCloudServiceCidrs:** A comma-separated list of IP address ranges (3 in total) in CIDR format. The IP ranges are reserved to host underlying Azure Spring Apps infrastructure. These 3 ranges should be at least */16* unused IP ranges, and must not overlap with any routable subnet IP ranges used within the network.
+### [Standard tier](#tab/azure-spring-apps-standard)
- ```azurecli
- az deployment group create \
- --resource-group <value> \
- --name initial \
- --template-file azuredeploy.bicep \
- --parameters \
- springCloudInstanceName=<value> \
- appInsightsName=<value> \
- laWorkspaceResourceId=<value> \
- springCloudAppSubnetID=<value> \
- springCloudRuntimeSubnetID=<value> \
- springCloudServiceCidrs=<value>
- ```
+
+### [Enterprise tier](#tab/azure-spring-apps-enterprise)
+++
- This command uses the Bicep template to create an Azure Spring Apps instance in an existing virtual network. The command also creates a workspace-based Application Insights instance in an existing Azure Monitor Log Analytics Workspace.
+Next, open a Bash window and run the following Azure CLI command, replacing the *\<value>* placeholders with the following values:
+
+* **resource-group:** The resource group name for deploying the Azure Spring Apps instance.
+* **springCloudInstanceName:** The name of the Azure Spring Apps resource.
+* **appInsightsName:** The name of the Application Insights instance for Azure Spring Apps.
+* **laWorkspaceResourceId:** The resource ID of the existing Log Analytics workspace (for example, */subscriptions/\<your subscription>/resourcegroups/\<your Log Analytics resource group>/providers/Microsoft.OperationalInsights/workspaces/\<your Log Analytics workspace name>*.)
+* **springCloudAppSubnetID:** The resource ID of the Azure Spring Apps Application Subnet.
+* **springCloudRuntimeSubnetID:** The resource ID of the Azure Spring Apps Runtime Subnet.
+* **springCloudServiceCidrs:** A comma-separated list of IP address ranges (three in total) in CIDR format. The IP ranges are reserved to host underlying Azure Spring Apps infrastructure. These three ranges should be at least */16* unused IP ranges, and must not overlap with any routable subnet IP ranges used within the network.
+
+ ```azurecli
+ az deployment group create \
+ --resource-group <value> \
+ --name initial \
+ --template-file azuredeploy.bicep \
+ --parameters \
+ springCloudInstanceName=<value> \
+ appInsightsName=<value> \
+ laWorkspaceResourceId=<value> \
+ springCloudAppSubnetID=<value> \
+ springCloudRuntimeSubnetID=<value> \
+ springCloudServiceCidrs=<value>
+ ```
+
+ This command uses the Bicep template to create an Azure Spring Apps instance in an existing virtual network. The command also creates a workspace-based Application Insights instance in an existing Azure Monitor Log Analytics Workspace.
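+
+To check on the deployment afterward (an optional verification, reusing the deployment name `initial` from the command above), query its provisioning state:
+
+```azurecli
+az deployment group show \
+    --resource-group <value> \
+    --name initial \
+    --query properties.provisioningState
+```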
## Review deployed resources
spring-cloud Quickstart Deploy Infrastructure Vnet Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-deploy-infrastructure-vnet-terraform.md
Previously updated : 11/12/2021 Last updated : 05/31/2022 # Quickstart: Provision Azure Spring Apps using Terraform
Last updated 11/12/2021
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ❌ Basic tier ✔️ Standard tier ✔️ Enterprise tier
This quickstart describes how to use Terraform to deploy an Azure Spring Apps cluster into an existing virtual network. Azure Spring Apps makes it easy to deploy Spring applications to Azure without any code changes. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
+The Enterprise tier deployment plan includes the following Tanzu components:
+
+* Build Service
+* Application Configuration Service
+* Service Registry
+* Spring Cloud Gateway
+* API Portal
+
+The API Portal component will be included in the Terraform deployment when it becomes available through the AzureRM Terraform provider.
+
+For more customization including custom domain support, see the [Azure Spring Apps Terraform provider](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/spring_cloud_service) documentation.
+ ## Prerequisites * An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. * [Hashicorp Terraform](https://www.terraform.io/downloads.html) * Two dedicated subnets for the Azure Spring Apps cluster, one for the service runtime and another for the Spring applications. For subnet and virtual network requirements, see the [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). * An existing Log Analytics workspace for Azure Spring Apps diagnostics settings and a workspace-based Application Insights resource. For more information, see [Analyze logs and metrics with diagnostics settings](diagnostic-services.md) and [Application Insights Java In-Process Agent in Azure Spring Apps](how-to-application-insights.md).
-* Three internal Classless Inter-Domain Routing (CIDR) ranges (at least */16* each) that you've identified for use by the Azure Spring Apps cluster. These CIDR ranges will not be directly routable and will be used only internally by the Azure Spring Apps cluster. Clusters may not use *169.254.0.0/16*, *172.30.0.0/16*, *172.31.0.0/16*, or *192.0.2.0/24* for the internal Spring app CIDR ranges, or any IP ranges included within the cluster virtual network address range.
+* Three internal Classless Inter-Domain Routing (CIDR) ranges (at least */16* each) that you've identified for use by the Azure Spring Apps cluster. These CIDR ranges won't be directly routable and will be used only internally by the Azure Spring Apps cluster. Clusters may not use *169.254.0.0/16*, *172.30.0.0/16*, *172.31.0.0/16*, or *192.0.2.0/24* for the internal Azure Spring Apps CIDR ranges, or any IP ranges included within the cluster virtual network address range.
* Service permission granted to the virtual network. The Azure Spring Apps Resource Provider requires Owner permission to your virtual network in order to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance. For instructions and more information, see the [Grant service permission to the virtual network](how-to-deploy-in-azure-virtual-network.md#grant-service-permission-to-the-virtual-network) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). * If you're using Azure Firewall or a Network Virtual Appliance (NVA), you'll also need to satisfy the following prerequisites: * Network and fully qualified domain name (FQDN) rules. For more information, see [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements). * A unique User Defined Route (UDR) applied to each of the service runtime and Spring application subnets. For more information about UDRs, see [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md). The UDR should be configured with a route for *0.0.0.0/0* with a destination of your NVA before deploying the Azure Spring Apps cluster. For more information, see the [Bring your own route table](how-to-deploy-in-azure-virtual-network.md#bring-your-own-route-table) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md).
+* If you're deploying Azure Spring Apps Enterprise tier for the first time in the target subscription, use the following commands to register the provider and accept the legal terms and privacy statements for the Enterprise tier.
-## Review the configuration file
+ ```azurecli
+ az provider register --namespace Microsoft.SaaS
+ az term accept \
+ --publisher vmware-inc \
+ --product azure-spring-cloud-vmware-tanzu-2 \
+ --plan tanzu-asc-ent-mtr
+ ```
+
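+    To confirm the terms were accepted (an optional check; this assumes `az term show`, the companion command to `az term accept`), you can run:
+
+    ```azurecli
+    az term show \
+        --publisher vmware-inc \
+        --product azure-spring-cloud-vmware-tanzu-2 \
+        --plan tanzu-asc-ent-mtr \
+        --query accepted
+    ```
+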
+## Review the Terraform plan
The configuration file used in this quickstart is from the [Azure Spring Apps reference architecture](reference-architecture.md).
-```hcl
-provider "azurerm" {
- features {}
-}
-
-resource "azurerm_resource_group" "sc_corp_rg" {
- name = var.resource_group_name
- location = var.location
-}
-
-resource "azurerm_application_insights" "sc_app_insights" {
- name = var.app_insights_name
- location = var.location
- resource_group_name = var.resource_group_name
- application_type = "web"
- depends_on = [azurerm_resource_group.sc_corp_rg]
-}
-
-resource "azurerm_spring_cloud_service" "sc" {
- name = var.sc_service_name
- resource_group_name = var.resource_group_name
- location = var.location
-
- network {
- app_subnet_id = "/subscriptions/${var.subscription}/resourceGroups/${var.azurespringappsvnetrg}/providers/Microsoft.Network/virtualNetworks/${var.vnet_spoke_name}/subnets/${var.app_subnet_id}"
- service_runtime_subnet_id = "/subscriptions/${var.subscription}/resourceGroups/${var.azurespringappsvnetrg}/providers/Microsoft.Network/virtualNetworks/${var.vnet_spoke_name}/subnets/${var.service_runtime_subnet_id}"
- cidr_ranges = var.sc_cidr
- }
-
- timeouts {
- create = "60m"
- delete = "2h"
- }
-
- depends_on = [azurerm_resource_group.sc_corp_rg]
- tags = var.tags
-
-}
-
-resource "azurerm_monitor_diagnostic_setting" "sc_diag" {
- name = "monitoring"
- target_resource_id = azurerm_spring_cloud_service.sc.id
- log_analytics_workspace_id = "/subscriptions/${var.subscription}/resourceGroups/${var.azurespringappsvnetrg}/providers/Microsoft.OperationalInsights/workspaces/${var.sc_law_id}"
-
- log {
- category = "ApplicationConsole"
- enabled = true
-
- retention_policy {
- enabled = false
- }
- }
-
- metric {
- category = "AllMetrics"
-
- retention_policy {
- enabled = false
- }
- }
-}
-```
+### [Standard tier](#tab/azure-spring-apps-standard)
+
-## Apply the configuration
+### [Enterprise tier](#tab/azure-spring-apps-enterprise)
-To apply the configuration, follow these steps:
-1. Save the [variables.tf](https://raw.githubusercontent.com/Azure/azure-spring-cloud-reference-architecture/main/terraform/brownfield-deployment/variable.tf) file locally, then open it in an editor.
++
+## Apply the Terraform plan
+
+To apply the Terraform plan, follow these steps:
+
+1. Save the *variables.tf* file for [Standard tier](https://raw.githubusercontent.com/Azure/azure-spring-cloud-reference-architecture/main/terraform/brownfield-deployment/Standard/variable.tf) or [Enterprise tier](https://raw.githubusercontent.com/Azure/azure-spring-cloud-reference-architecture/main/terraform/brownfield-deployment/Enterprise/variable.tf) locally, then open it in an editor.
1. Edit the file to add the following values:
To apply the configuration, follow these steps:
* A deployment location from the regions where Azure Spring Apps is available, as shown in [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud&regions=all). You'll need the short form of the location name. To get this value, use the following command to generate a list of Azure locations, then look up the **Name** value for the region you selected.
- ```azurecli
- az account list-locations --output table
- ```
+ ```azurecli
+ az account list-locations --output table
+ ```
+
+1. Edit the file to add the following new deployment information:
* The name of the resource group you'll deploy to.
- * A name of your choice for the Spring app deployment.
- * The name of the virtual network resource group where you'll deploy your resources.
- * The name of the spoke virtual network (for example, *vnet-spoke*).
- * The name of the subnet to be used by the Azure Spring Apps service (for example, *snet-app*).
- * The name of the subnet to be used by the Spring runtime service (for example, *snet-runtime*).
+   * A name of your choice for the Azure Spring Apps deployment.
+   * A name of your choice for the Application Insights resource.
+   * Three CIDR ranges (at least */16* each) to host the Azure Spring Apps backend infrastructure. The CIDR ranges must not overlap with any existing CIDR ranges in the target subnet.
+   * The key/value pairs to be applied as tags on all resources that support tags. For more information, see [Use tags to organize your Azure resources and management hierarchy](../azure-resource-manager/management/tag-resources.md).
+
+1. Edit the file to add the following existing infrastructure information:
+
+ * The name of the resource group where the existing virtual network resides.
+   * The name of the existing spoke virtual network.
+ * The name of the existing subnet to be used by the Azure Spring Apps Application Service.
+ * The name of the existing subnet to be used by the Azure Spring Apps Runtime Service.
* The name of the Azure Log Analytics workspace.
- * The CIDR ranges from your virtual network to be used by Azure Spring Apps (for example, *XX.X.X.X/16,XX.X.X.X/16,XX.X.X.X/16*).
- * The key/value pairs to be applied as tags on all resources that support tags. For more information, see [Use tags to organize your Azure resources and management hierarchy](../azure-resource-manager/management/tag-resources.md).
1. Run the following command to initialize the Terraform modules:
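
    ```azurecli
    # Initialize the working directory and download the required Terraform providers
    terraform init
    ```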
spring-cloud Quickstart Deploy Infrastructure Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-deploy-infrastructure-vnet.md
Previously updated : 11/12/2021 Last updated : 05/31/2022 # Quickstart: Provision Azure Spring Apps using an ARM template
Last updated 11/12/2021
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ❌ Basic tier ✔️ Standard tier ✔️ Enterprise tier
This quickstart describes how to use an Azure Resource Manager template (ARM template) to deploy an Azure Spring Apps cluster into an existing virtual network. Azure Spring Apps makes it easy to deploy Spring applications to Azure without any code changes. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
+The Enterprise tier deployment plan includes the following Tanzu components:
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+* Build Service
+* Application Configuration Service
+* Service Registry
+* Spring Cloud Gateway
+* API Portal
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg?sanitize=true)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-spring-cloud-reference-architecture%2Fmain%2FARM%2Fbrownfield-deployment%2fazuredeploy.json)
## Prerequisites * An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. * Two dedicated subnets for the Azure Spring Apps cluster, one for the service runtime and another for the Spring applications. For subnet and virtual network requirements, see the [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). * An existing Log Analytics workspace for Azure Spring Apps diagnostics settings and a workspace-based Application Insights resource. For more information, see [Analyze logs and metrics with diagnostics settings](diagnostic-services.md) and [Application Insights Java In-Process Agent in Azure Spring Apps](how-to-application-insights.md).
-* Three internal Classless Inter-Domain Routing (CIDR) ranges (at least */16* each) that you've identified for use by the Azure Spring Apps cluster. These CIDR ranges will not be directly routable and will be used only internally by the Azure Spring Apps cluster. Clusters may not use *169.254.0.0/16*, *172.30.0.0/16*, *172.31.0.0/16*, or *192.0.2.0/24* for the internal Spring app CIDR ranges, or any IP ranges included within the cluster virtual network address range.
+* Three internal Classless Inter-Domain Routing (CIDR) ranges (at least */16* each) that you've identified for use by the Azure Spring Apps cluster. These CIDR ranges won't be directly routable and will be used only internally by the Azure Spring Apps cluster. Clusters may not use *169.254.0.0/16*, *172.30.0.0/16*, *172.31.0.0/16*, or *192.0.2.0/24* for the internal Azure Spring Apps CIDR ranges, or any IP ranges included within the cluster virtual network address range.
* Service permission granted to the virtual network. The Azure Spring Apps Resource Provider requires Owner permission to your virtual network in order to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance. For instructions and more information, see the [Grant service permission to the virtual network](how-to-deploy-in-azure-virtual-network.md#grant-service-permission-to-the-virtual-network) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md). * If you're using Azure Firewall or a Network Virtual Appliance (NVA), you'll also need to satisfy the following prerequisites: * Network and fully qualified domain name (FQDN) rules. For more information, see [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements). * A unique User Defined Route (UDR) applied to each of the service runtime and Spring application subnets. For more information about UDRs, see [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md). The UDR should be configured with a route for *0.0.0.0/0* with a destination of your NVA before deploying the Azure Spring Apps cluster. For more information, see the [Bring your own route table](how-to-deploy-in-azure-virtual-network.md#bring-your-own-route-table) section of [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md).
+* If you're deploying Azure Spring Apps Enterprise tier for the first time in the target subscription, use the following commands to register the provider and accept the legal terms and privacy statements for the Enterprise tier.
+
+ ```azurecli
+ az provider register --namespace Microsoft.SaaS
+ az term accept \
+ --publisher vmware-inc \
+ --product azure-spring-cloud-vmware-tanzu-2 \
+ --plan tanzu-asc-ent-mtr
+ ```
## Review the template
-The template used in this quickstart is from the [Azure Spring Apps reference architecture](reference-architecture.md).
+The templates used in this quickstart are from the [Azure Spring Apps Reference Architecture](reference-architecture.md).
+### [Standard tier](#tab/azure-spring-apps-standard)
-Two Azure resources are defined in the template:
+
+### [Enterprise tier](#tab/azure-spring-apps-enterprise)
+++
-* [Microsoft.AppPlatform/Spring](/azure/templates/microsoft.appplatform/spring): Create an Azure Spring Apps instance.
-* [Microsoft.Insights/components](/azure/templates/microsoft.insights/components): Create an Application Insights workspace.
+Two Azure resources are defined in the template:
-For Azure CLI, Terraform, and Bicep deployments, see the [Azure Spring Apps Reference Architecture](https://github.com/Azure/azure-spring-cloud-reference-architecture) repository on GitHub.
+* [Microsoft.AppPlatform/Spring](/azure/templates/microsoft.appplatform/spring) creates an Azure Spring Apps instance.
+* [Microsoft.Insights/components](/azure/templates/microsoft.insights/components) creates an Application Insights workspace.
## Deploy the template
-To deploy the template, follow these steps:
+To deploy the template, use the following steps.
+
+First, select the following image to sign in to Azure and open a template. The template creates an Azure Spring Apps instance in an existing Virtual Network and a workspace-based Application Insights instance in an existing Azure Monitor Log Analytics Workspace.
+
+### [Standard tier](#tab/azure-spring-apps-standard)
-1. Select the following image to sign in to Azure and open a template. The template creates an Azure Spring Apps instance into an existing Virtual Network and a workspace-based Application Insights instance into an existing Azure Monitor Log Analytics Workspace.
- [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg?sanitize=true)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-spring-cloud-reference-architecture%2Fmain%2FARM%2Fbrownfield-deployment%2fazuredeploy.json)
+### [Enterprise tier](#tab/azure-spring-apps-enterprise)
+++
-2. Enter values for the following fields:
+Next, enter values for the following fields:
- * **Resource Group:** select **Create new**, enter a unique name for the **resource group**, and then select **OK**.
- * **springCloudInstanceName:** Enter the name of the Azure Spring Apps resource.
- * **appInsightsName:** Enter the name of the Application Insights instance for Azure Spring Apps.
- * **laWorkspaceResourceId:** Enter the resource ID of the existing Log Analytics workspace (for example, */subscriptions/\<your subscription>/resourcegroups/\<your log analytics resource group>/providers/Microsoft.OperationalInsights/workspaces/\<your log analytics workspace name>*.)
- * **springCloudAppSubnetID:** Enter the resourceID of the Azure Spring Apps App Subnet.
- * **springCloudRuntimeSubnetID:** Enter the resourceID of the Azure Spring Apps Runtime Subnet.
- * **springCloudServiceCidrs:** Enter a comma-separated list of IP address ranges (3 in total) in CIDR format. The IP ranges are reserved to host underlying Azure Spring Apps infrastructure. These 3 ranges should be at least */16* unused IP ranges, and must not overlap with any routable subnet IP ranges used within the network.
- * **tags:** Enter any custom tags.
+* **Resource Group:** Select **Create new**, enter a unique name for the **resource group**, and then select **OK**.
+* **springCloudInstanceName:** Enter the name of the Azure Spring Apps resource.
+* **appInsightsName:** Enter the name of the Application Insights instance for Azure Spring Apps.
+* **laWorkspaceResourceId:** Enter the resource ID of the existing Log Analytics workspace (for example, */subscriptions/\<your subscription>/resourcegroups/\<your Log Analytics resource group>/providers/Microsoft.OperationalInsights/workspaces/\<your Log Analytics workspace name>*.)
+* **springCloudAppSubnetID:** Enter the resource ID of the Azure Spring Apps Application Subnet.
+* **springCloudRuntimeSubnetID:** Enter the resource ID of the Azure Spring Apps Runtime Subnet.
+* **springCloudServiceCidrs:** Enter a comma-separated list of IP address ranges (three in total) in CIDR format. The IP ranges are reserved to host underlying Azure Spring Apps infrastructure. These three ranges should be at least */16* unused IP ranges, and must not overlap with any routable subnet IP ranges used within the network.
+* **tags:** Enter any custom tags.
-3. Select **Review + Create** and then **Create**.
+Finally, select **Review + Create** and then **Create**.
## Review deployed resources
You can either use the Azure portal to check the deployed resources, or use Azur
If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When no longer needed, delete the resource group, which deletes the resources in the resource group. To delete the resource group by using Azure CLI or Azure PowerShell, use the following commands:
-### [CLI](#tab/azure-cli)
+### [Azure CLI](#tab/azure-cli)
```azurecli echo "Enter the Resource Group name:" &&
spring-cloud Quickstart Integrate Azure Database And Redis Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-integrate-azure-database-and-redis-enterprise.md
+
+ Title: "Quickstart - Integrate with Azure Database for PostgreSQL and Azure Cache for Redis"
+
+description: Explains how to provision and prepare an Azure Database for PostgreSQL and an Azure Cache for Redis to be used with apps running in Azure Spring Apps Enterprise tier.
++++ Last updated : 05/31/2022+++
+# Quickstart: Integrate with Azure Database for PostgreSQL and Azure Cache for Redis
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
+
+This quickstart shows you how to provision and prepare an Azure Database for PostgreSQL and an Azure Cache for Redis to be used with apps running in Azure Spring Apps Enterprise tier.
+
+This article uses these services for demonstration purposes. You can connect your application to any backing service of your choice by using instructions similar to the ones in the [Create Service Connectors](#create-service-connectors) section later in this article.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A license for Azure Spring Apps Enterprise tier. For more information, see [View Azure Spring Apps Enterprise tier Offer in Azure Marketplace](how-to-enterprise-marketplace-offer.md).
+- [The Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli).
+- [Git](https://git-scm.com/).
+- [jq](https://stedolan.github.io/jq/download/).
+- [!INCLUDE [install-enterprise-extension](includes/install-enterprise-extension.md)]
+- Complete the steps in [Build and deploy apps to Azure Spring Apps using the Enterprise tier](quickstart-deploy-apps-enterprise.md).
+
+## Provision services
+
+To add persistence to the application, create an Azure Cache for Redis and an Azure Database for PostgreSQL Flexible Server.
+
+### [Azure CLI](#tab/azure-cli)
+
+The following steps describe how to provision an Azure Cache for Redis instance and an Azure Database for PostgreSQL Flexible Server by using the Azure CLI.
+
+1. Use the following command to create an instance of Azure Cache for Redis:
+
+ ```azurecli
+ az redis create \
+ --resource-group <resource-group-name> \
+ --name <redis-cache-name> \
+ --location ${REGION} \
+ --sku Basic \
+ --vm-size c0
+ ```
+
+ > [!NOTE]
+ > Redis Cache creation takes approximately 20 minutes.
+
+1. Use the following command to create an Azure Database for PostgreSQL Flexible Server instance:
+
+ ```azurecli
+ az postgres flexible-server create \
+ --resource-group <resource-group-name> \
+ --name <postgres-server-name> \
+ --location <location> \
+ --admin-user <postgres-username> \
+ --admin-password <postgres-password> \
+ --yes
+ ```
+
+1. Use the following command to allow connections from other Azure Services to the newly created Flexible Server:
+
+ ```azurecli
+ az postgres flexible-server firewall-rule create \
+ --rule-name allAzureIPs \
+ --name <postgres-server-name> \
+ --resource-group <resource-group-name> \
+ --start-ip-address 0.0.0.0 \
+ --end-ip-address 0.0.0.0
+ ```
+
+1. Use the following command to enable the `uuid-ossp` extension for the newly created Flexible Server:
+
+ ```azurecli
+ az postgres flexible-server parameter set \
+ --resource-group <resource-group-name> \
+ --name azure.extensions \
+ --value uuid-ossp \
+    --server-name <postgres-server-name>
+ ```
+
+1. Use the following command to create a database for the Order Service application:
+
+ ```azurecli
+ az postgres flexible-server db create \
+ --resource-group <resource-group-name> \
+ --server-name <postgres-server-name> \
+ --database-name acmefit_order
+ ```
+
+1. Use the following command to create a database for the Catalog Service application:
+
+ ```azurecli
+ az postgres flexible-server db create \
+ --resource-group <resource-group-name> \
+ --server-name <postgres-server-name> \
+ --database-name acmefit_catalog
+ ```
+
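+To double-check the provisioning before moving on (an optional verification), you can list the new databases and confirm the cache is ready:
+
+```azurecli
+az postgres flexible-server db list \
+    --resource-group <resource-group-name> \
+    --server-name <postgres-server-name> \
+    --output table
+
+az redis show \
+    --resource-group <resource-group-name> \
+    --name <redis-cache-name> \
+    --query provisioningState
+```
+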
+### [ARM template](#tab/arm-template)
+
+The following instructions describe how to provision an Azure Cache for Redis and an Azure Database for PostgreSQL Flexible Server by using an Azure Resource Manager template (ARM template).
++
+You can find the template used in this quickstart in the [fitness store sample GitHub repository](https://github.com/Azure-Samples/acme-fitness-store/blob/Azure/azure/templates/azuredeploy.json).
+
+To deploy this template, follow these steps:
+
+1. Select the following image to sign in to Azure and open a template. The template creates an Azure Cache for Redis and an Azure Database for PostgreSQL Flexible Server.
+
+ :::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Button to deploy the ARM template to Azure." border="false" link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Facme-fitness-store%2FAzure%2Fazure%2Ftemplates%2Fazuredeploy.json":::
+
+1. Enter values for the following fields:
+
+ - **Resource Group:** Select **Create new**, enter a unique name for the **resource group**, and then select **OK**.
+ - **cacheName:** Enter the name for the Azure Cache for Redis Server.
+ - **dbServerName:** Enter the name for the Azure Database for PostgreSQL Flexible Server.
+ - **administratorLogin:** Enter the admin username for the Azure Database for PostgreSQL Flexible Server.
+ - **administratorLoginPassword:** Enter the admin password for the Azure Database for PostgreSQL Flexible Server.
+ - **tags:** Enter any custom tags.
+
+1. Select **Review + Create** and then **Create**.
+++
+## Create Service Connectors
+
+The following steps show how to bind applications running in Azure Spring Apps Enterprise tier to other Azure services by using Service Connectors.
+
+1. Use the following command to create a service connector to Azure Database for PostgreSQL for the Order Service application:
+
+ ```azurecli
+ az spring connection create postgres-flexible \
+ --resource-group <resource-group-name> \
+ --target-resource-group <target-resource-group> \
+ --connection order_service_db \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --app order-service \
+ --deployment default \
+ --server <postgres-server-name> \
+ --database acmefit_order \
+ --secret name=<postgres-username> secret=<postgres-password> \
+ --client-type dotnet
+ ```
+
+1. Use the following command to create a service connector to Azure Database for PostgreSQL for the Catalog Service application:
+
+ ```azurecli
+ az spring connection create postgres-flexible \
+ --resource-group <resource-group-name> \
+ --target-resource-group <target-resource-group> \
+ --connection catalog_service_db \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --app catalog-service \
+ --deployment default \
+ --server <postgres-server-name> \
+ --database acmefit_catalog \
+ --secret name=<postgres-username> secret=<postgres-password> \
+ --client-type springboot
+ ```
+
+1. Use the following command to create a service connector to Azure Cache for Redis for the Cart Service application:
+
+ ```azurecli
+ az spring connection create redis \
+ --resource-group <resource-group-name> \
+ --target-resource-group <target-resource-group> \
+ --connection cart_service_cache \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --app cart-service \
+ --deployment default \
+ --server <redis-cache-name> \
+ --database 0 \
+ --client-type java
+ ```
+
+1. Use the following command to reload the Catalog Service application to load the new connection properties:
+
+ ```azurecli
+    az spring app restart \
+ --resource-group <resource-group-name> \
+ --name catalog-service \
+ --service <Azure-Spring-Apps-service-instance-name>
+ ```
+
+1. Use the following commands to retrieve the database connection information and update the Order Service application:
+
+ ```azurecli
+ POSTGRES_CONNECTION_STR=$(az spring connection show \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --deployment default \
+ --connection order_service_db \
+ --app order-service | jq '.configurations[0].value' -r)
+
+ az spring app update \
+ --resource-group <resource-group-name> \
+ --name order-service \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --env "DatabaseProvider=Postgres" "ConnectionStrings__OrderContext=${POSTGRES_CONNECTION_STR}"
+ ```
+
+1. Use the following commands to retrieve Redis connection information and update the Cart Service application:
+
+ ```azurecli
+ REDIS_CONN_STR=$(az spring connection show \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --deployment default \
+ --app cart-service \
+ --connection cart_service_cache | jq -r '.configurations[0].value')
+
+ az spring app update \
+ --resource-group <resource-group-name> \
+ --name cart-service \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --env "CART_PORT=8080" "REDIS_CONNECTIONSTRING=${REDIS_CONN_STR}"
+ ```
+
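+To confirm the connections are healthy (an optional check; this assumes the Service Connector CLI's `validate` command), you can validate each connection. For example, for the Order Service connection:
+
+```azurecli
+az spring connection validate \
+    --resource-group <resource-group-name> \
+    --service <Azure-Spring-Apps-service-instance-name> \
+    --deployment default \
+    --app order-service \
+    --connection order_service_db
+```
+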
+## Access the application
+
+Retrieve the URL for Spring Cloud Gateway, then use the output of the following command to explore the updated application:
+
+```azurecli
+GATEWAY_URL=$(az spring gateway show \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> | jq -r '.properties.url')
+
+echo "https://${GATEWAY_URL}"
+```
+
+## Clean up resources
+
+If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When no longer needed, delete the resource group, which deletes the resources in the resource group. To delete the resource group by using Azure CLI, use the following commands:
+
+```azurecli
+echo "Enter the Resource Group name:" &&
+read resourceGroupName &&
+az group delete --name $resourceGroupName &&
+echo "Press [ENTER] to continue ..."
+```
+
+## Next steps
+
+Continue on to any of the following optional quickstarts:
+
+- [Configure single sign-on](quickstart-configure-single-sign-on-enterprise.md)
+- [Load application secrets using Key Vault](quickstart-key-vault-enterprise.md)
+- [Monitor applications end-to-end](quickstart-monitor-end-to-end-enterprise.md)
+- [Set request rate limits](quickstart-set-request-rate-limits-enterprise.md)
+- [Automate deployments](quickstart-automate-deployments-github-actions-enterprise.md)
spring-cloud Quickstart Integrate Azure Database Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-integrate-azure-database-mysql.md
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard tier ❌ Enterprise tier
Pet Clinic, as deployed in the default configuration [Quickstart: Build and deploy apps to Azure Spring Apps](quickstart-deploy-apps.md), uses an in-memory database (HSQLDB) that is populated with data at startup. This quickstart explains how to provision and prepare an Azure Database for MySQL instance and then configure Pet Clinic on Azure Spring Apps to use it as a persistent database with only one command.
spring-cloud Quickstart Key Vault Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-key-vault-enterprise.md
+
+ Title: "Quickstart - Load application secrets using Key Vault"
+
+description: Explains how to use Azure Key Vault to securely load secrets for apps running in Azure Spring Apps Enterprise tier.
++++ Last updated : 05/31/2022+++
+# Quickstart: Load application secrets using Key Vault
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
+
+This quickstart shows you how to securely load secrets using Azure Key Vault for apps running in Azure Spring Apps Enterprise tier.
+
+Every application has properties that connect it to its environment and supporting services. These services include resources like databases, logging and monitoring tools, messaging platforms, and so on. Each resource requires a way to locate and access it, often in the form of URLs and credentials. This information is often protected by law and must be kept secret to protect customer data. In Azure Spring Apps, you can configure applications to load these secrets directly into memory from Key Vault by using managed identities and Azure role-based access control.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A license for Azure Spring Apps Enterprise tier. For more information, see [View Azure Spring Apps Enterprise tier Offer in Azure Marketplace](how-to-enterprise-marketplace-offer.md).
+- [The Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli).
+- [Git](https://git-scm.com/).
+- [jq](https://stedolan.github.io/jq/download/).
+- [!INCLUDE [install-enterprise-extension](includes/install-enterprise-extension.md)]
+- Complete the steps in the following quickstarts:
+ - [Build and deploy apps to Azure Spring Apps using the Enterprise tier](quickstart-deploy-apps-enterprise.md).
+ - [Integrate with Azure Database for PostgreSQL and Azure Cache for Redis](quickstart-integrate-azure-database-and-redis-enterprise.md)
+
+## Provision Key Vault and store secrets
+
+The following instructions describe how to create a Key Vault and securely save application secrets.
+
+1. Use the following command to create a Key Vault to store application secrets:
+
+ ```azurecli
+ az keyvault create \
+ --resource-group <resource-group-name> \
+ --name <key-vault-name>
+ ```
+
+1. Use the following command to store the full database server name in Key Vault:
+
+ ```azurecli
+ az keyvault secret set \
+ --vault-name <key-vault-name> \
+ --name "POSTGRES-SERVER-NAME" \
+ --value "<postgres-server-name>.postgres.database.azure.com"
+ ```
+
+1. Use the following command to store the database name in Key Vault for the Catalog Service application:
+
+ ```azurecli
+ az keyvault secret set \
+ --vault-name <key-vault-name> \
+ --name "CATALOG-DATABASE-NAME" \
+ --value "acmefit_catalog"
+ ```
+
+1. Use the following commands to store the database login credentials in Key Vault:
+
+ ```azurecli
+ az keyvault secret set \
+ --vault-name <key-vault-name> \
+ --name "POSTGRES-LOGIN-NAME" \
+ --value "<postgres-username>"
+
+ az keyvault secret set \
+ --vault-name <key-vault-name> \
+ --name "POSTGRES-LOGIN-PASSWORD" \
+ --value "<postgres-password>"
+ ```
+
+1. Use the following command to store the database connection string in Key Vault for the Order Service application:
+
+ ```azurecli
+ az keyvault secret set \
+ --vault-name <key-vault-name> \
+ --name "ConnectionStrings--OrderContext" \
+    --value "Server=<postgres-server-name>;Database=acmefit_order;Port=5432;Ssl Mode=Require;User Id=<postgres-username>;Password=<postgres-password>;"
+ ```
+
+1. Use the following commands to retrieve Redis connection properties and store them in Key Vault:
+
+ ```azurecli
+ REDIS_HOST=$(az redis show \
+ --resource-group <resource-group-name> \
+ --name <redis-cache-name> | jq -r '.hostName')
+
+ REDIS_PORT=$(az redis show \
+ --resource-group <resource-group-name> \
+ --name <redis-cache-name> | jq -r '.sslPort')
+
+ REDIS_PRIMARY_KEY=$(az redis list-keys \
+ --resource-group <resource-group-name> \
+ --name <redis-cache-name> | jq -r '.primaryKey')
+
+ az keyvault secret set \
+ --vault-name <key-vault-name> \
+ --name "CART-REDIS-CONNECTION-STRING" \
+ --value "rediss://:${REDIS_PRIMARY_KEY}@${REDIS_HOST}:${REDIS_PORT}/0"
+ ```
+
+1. If you've configured [single sign-on](quickstart-configure-single-sign-on-enterprise.md), use the following command to store the JSON Web Key (JWK) URI in Key Vault:
+
+ ```azurecli
+ az keyvault secret set \
+ --vault-name <key-vault-name> \
+ --name "SSO-PROVIDER-JWK-URI" \
+ --value <jwk-uri>
+ ```
+
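+To confirm the secrets were stored (an optional check), you can list the secret names in the vault:
+
+```azurecli
+az keyvault secret list \
+    --vault-name <key-vault-name> \
+    --query "[].name" \
+    --output table
+```
+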
+## Grant applications access to secrets in Key Vault
+
+The following instructions describe how to grant access to Key Vault secrets to applications deployed to Azure Spring Apps Enterprise tier.
+
+1. Use the following command to enable a System Assigned Identity for the Cart Service application:
+
+ ```azurecli
+ az spring app identity assign \
+ --resource-group <resource-group-name> \
+ --name cart-service \
+ --service <Azure-Spring-Apps-service-instance-name>
+ ```
+
+1. Use the following commands to set an access policy of `get list` on Key Vault for the Cart Service application:
+
+ ```azurecli
+ CART_SERVICE_APP_IDENTITY=$(az spring app show \
+ --resource-group <resource-group-name> \
+ --name cart-service \
+ --service <Azure-Spring-Apps-service-instance-name> | jq -r '.identity.principalId')
+
+ az keyvault set-policy \
+ --name <key-vault-name> \
+ --object-id ${CART_SERVICE_APP_IDENTITY} \
+ --secret-permissions get list
+ ```
+
+1. Use the following command to enable a System Assigned Identity for the Order Service application:
+
+ ```azurecli
+ az spring app identity assign \
+ --resource-group <resource-group-name> \
+ --name order-service \
+ --service <Azure-Spring-Apps-service-instance-name>
+ ```
+
+1. Use the following commands to set an access policy of `get list` on Key Vault for the Order Service application:
+
+ ```azurecli
+ ORDER_SERVICE_APP_IDENTITY=$(az spring app show \
+ --resource-group <resource-group-name> \
+ --name order-service \
+ --service <Azure-Spring-Apps-service-instance-name> | jq -r '.identity.principalId')
+
+ az keyvault set-policy \
+ --name <key-vault-name> \
+ --object-id ${ORDER_SERVICE_APP_IDENTITY} \
+ --secret-permissions get list
+ ```
+
+1. Use the following command to enable a System Assigned Identity for the Catalog Service application:
+
+ ```azurecli
+ az spring app identity assign \
+ --resource-group <resource-group-name> \
+ --name catalog-service \
+ --service <Azure-Spring-Apps-service-instance-name>
+ ```
+
+1. Use the following commands to set an access policy of `get list` on Key Vault for the Catalog Service application:
+
+ ```azurecli
+ CATALOG_SERVICE_APP_IDENTITY=$(az spring app show \
+ --resource-group <resource-group-name> \
+ --name catalog-service \
+ --service <Azure-Spring-Apps-service-instance-name> | jq -r '.identity.principalId')
+
+ az keyvault set-policy \
+ --name <key-vault-name> \
+ --object-id ${CATALOG_SERVICE_APP_IDENTITY} \
+ --secret-permissions get list
+ ```
+
+1. If you've configured [single sign-on](quickstart-configure-single-sign-on-enterprise.md), use the following command to enable a System Assigned Identity for the Identity Service application:
+
+ ```azurecli
+ az spring app identity assign \
+ --resource-group <resource-group-name> \
+ --name identity-service \
+ --service <Azure-Spring-Apps-service-instance-name>
+ ```
+
+1. Use the following commands to set an access policy of `get list` on Key Vault for the Identity Service application:
+
+ ```azurecli
+ IDENTITY_SERVICE_APP_IDENTITY=$(az spring app show \
+ --resource-group <resource-group-name> \
+ --name identity-service \
+ --service <Azure-Spring-Apps-service-instance-name> | jq -r '.identity.principalId')
+
+ az keyvault set-policy \
+ --name <key-vault-name> \
+ --object-id ${IDENTITY_SERVICE_APP_IDENTITY} \
+ --secret-permissions get list
+ ```
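+
+You can optionally confirm that the access policies took effect by inspecting the vault. The following sketch prints each policy's object ID together with its secret permissions:
+
+```azurecli
+# Show which identities can read secrets from the vault.
+az keyvault show \
+ --name <key-vault-name> \
+ --query "properties.accessPolicies[].{objectId:objectId, secretPermissions:permissions.secrets}"
+```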
+
+## Update applications to load Key Vault secrets
+
+After granting access to read secrets from Key Vault, use the following steps to update the applications to use the new secret values in their configurations.
+
+1. Use the following command to retrieve the URI for Key Vault to be used in updating applications:
+
+ ```azurecli
+ KEYVAULT_URI=$(az keyvault show --name <key-vault-name> | jq -r '.properties.vaultUri')
+ ```
+
+1. Use the following command to retrieve the URL for Spring Cloud Gateway to be used in updating applications:
+
+ ```azurecli
+ GATEWAY_URL=$(az spring gateway show \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> | jq -r '.properties.url')
+ ```
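+
+ As a quick sanity check before updating the applications, echo both values to confirm they were retrieved:
+
+ ```azurecli
+ echo "Key Vault URI: ${KEYVAULT_URI}"
+ echo "Gateway URL: ${GATEWAY_URL}"
+ ```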
+
+1. Use the following command to remove the Service Connector binding the Order Service application and the Azure Database for PostgreSQL Flexible Server:
+
+ ```azurecli
+ az spring connection delete \
+ --resource-group <resource-group-name> \
+ --app order-service \
+ --connection order_service_db \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --deployment default \
+ --yes
+ ```
+
+1. Use the following command to update the Order Service environment with the URI to access Key Vault:
+
+ ```azurecli
+ az spring app update \
+ --resource-group <resource-group-name> \
+ --name order-service \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --env "ConnectionStrings__KeyVaultUri=${KEYVAULT_URI}" "AcmeServiceSettings__AuthUrl=https://${GATEWAY_URL}" "DatabaseProvider=Postgres"
+ ```
+
+1. Use the following command to remove the Service Connector binding the Catalog Service application and the Azure Database for PostgreSQL Flexible Server:
+
+ ```azurecli
+ az spring connection delete \
+ --resource-group <resource-group-name> \
+ --app catalog-service \
+ --connection catalog_service_db \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --deployment default \
+ --yes
+ ```
+
+1. Use the following command to update the Catalog Service environment and configuration pattern to access Key Vault:
+
+ ```azurecli
+ az spring app update \
+ --resource-group <resource-group-name> \
+ --name catalog-service \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --config-file-pattern catalog/default,catalog/key-vault \
+ --env "SPRING_CLOUD_AZURE_KEYVAULT_SECRET_PROPERTY_SOURCES_0_ENDPOINT=${KEYVAULT_URI}" "SPRING_CLOUD_AZURE_KEYVAULT_SECRET_PROPERTY_SOURCES_0_NAME='acme-fitness-store-vault'" "SPRING_PROFILES_ACTIVE=default,key-vault"
+ ```
+
+1. Use the following command to remove the Service Connector binding the Cart Service application and the Azure Cache for Redis:
+
+ ```azurecli
+ az spring connection delete \
+ --resource-group <resource-group-name> \
+ --app cart-service \
+ --connection cart_service_cache \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --deployment default \
+ --yes
+ ```
+
+1. Use the following command to update the Cart Service environment to access Key Vault:
+
+ ```azurecli
+ az spring app update \
+ --resource-group <resource-group-name> \
+ --name cart-service \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --env "CART_PORT=8080" "KEYVAULT_URI=${KEYVAULT_URI}" "AUTH_URL=https://${GATEWAY_URL}"
+ ```
+
+1. Use the following command to update the Identity Service environment and configuration pattern to access Key Vault:
+
+ ```azurecli
+ az spring app update \
+ --resource-group <resource-group-name> \
+ --name identity-service \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --config-file-pattern identity/default,identity/key-vault \
+ --env "SPRING_CLOUD_AZURE_KEYVAULT_SECRET_PROPERTY_SOURCES_0_ENDPOINT=${KEYVAULT_URI}" "SPRING_CLOUD_AZURE_KEYVAULT_SECRET_PROPERTY_SOURCES_0_NAME='acme-fitness-store-vault'" "SPRING_PROFILES_ACTIVE=default,key-vault"
+ ```
+
+1. Use the following commands to retrieve the URL for Spring Cloud Gateway:
+
+ ```azurecli
+ GATEWAY_URL=$(az spring gateway show \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> | jq -r '.properties.url')
+
+ echo "https://${GATEWAY_URL}"
+ ```
+
+ You can open the output URL in a browser to explore the updated application.
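+
+ If you prefer the command line, you can check reachability with curl instead; a `200` (or a redirect code) indicates the gateway is serving the application:
+
+ ```azurecli
+ # Print only the HTTP status code returned by the gateway.
+ curl -s -o /dev/null -w "%{http_code}\n" "https://${GATEWAY_URL}"
+ ```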
+
+## Clean up resources
+
+If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When no longer needed, delete the resource group, which deletes the resources in the resource group. To delete the resource group by using Azure CLI, use the following commands:
+
+```azurecli
+echo "Enter the Resource Group name:" &&
+read resourceGroupName &&
+az group delete --name $resourceGroupName &&
+echo "Press [ENTER] to continue ..."
+```
+
+## Next steps
+
+Continue on to any of the following optional quickstarts:
+
+- [Configure single sign-on](quickstart-configure-single-sign-on-enterprise.md)
+- [Integrate Azure Database for PostgreSQL and Azure Cache for Redis](quickstart-integrate-azure-database-and-redis-enterprise.md)
+- [Monitor applications end-to-end](quickstart-monitor-end-to-end-enterprise.md)
+- [Set request rate limits](quickstart-set-request-rate-limits-enterprise.md)
+- [Automate deployments](quickstart-automate-deployments-github-actions-enterprise.md)
spring-cloud Quickstart Logs Metrics Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-logs-metrics-tracing.md
zone_pivot_groups: programming-languages-spring-cloud
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard tier ❌ Enterprise tier
::: zone pivot="programming-language-csharp" With the built-in monitoring capability in Azure Spring Apps, you can debug and monitor complex issues. Azure Spring Apps integrates Steeltoe [distributed tracing](https://docs.steeltoe.io/api/v3/tracing/) with Azure's [Application Insights](../azure-monitor/app/app-insights-overview.md). This integration provides powerful logs, metrics, and distributed tracing capability from the Azure portal.
spring-cloud Quickstart Monitor End To End Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-monitor-end-to-end-enterprise.md
+
+ Title: "Quickstart - Monitor applications end-to-end"
+
+description: Explains how to monitor apps running Azure Spring Apps Enterprise tier by using Application Insights and Log Analytics.
++++ Last updated : 05/31/2022+++
+# Quickstart: Monitor applications end-to-end
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
+
+This quickstart shows you how to monitor apps running on Azure Spring Apps Enterprise tier by using Application Insights and Log Analytics.
+
+> [!NOTE]
+> You can monitor your Spring workloads end-to-end by using any tool and platform of your choice, including App Insights, Log Analytics, New Relic, Dynatrace, AppDynamics, Elastic, or Splunk. For more information, see [Working with other monitoring tools](#working-with-other-monitoring-tools) later in this article.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A license for Azure Spring Apps Enterprise tier. For more information, see [View Azure Spring Apps Enterprise tier Offer in Azure Marketplace](how-to-enterprise-marketplace-offer.md).
+- [The Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli).
+- [Git](https://git-scm.com/).
+- [jq](https://stedolan.github.io/jq/download/)
+- [!INCLUDE [install-enterprise-extension](includes/install-enterprise-extension.md)]
+- Resources to monitor, such as the ones created in the following quickstarts:
+ - [Build and deploy apps to Azure Spring Apps using the Enterprise tier](quickstart-deploy-apps-enterprise.md)
+ - [Integrate with Azure Database for PostgreSQL and Azure Cache for Redis](quickstart-integrate-azure-database-and-redis-enterprise.md)
+ - [Load application secrets using Key Vault](quickstart-key-vault-enterprise.md)
+
+## Update applications
+
+You must manually provide the Application Insights connection string to the Order Service (ASP.NET Core) and Cart Service (Python) applications. The following instructions describe how to provide this connection string and increase the sampling rate for Application Insights.
+
+> [!NOTE]
+> Currently only the buildpacks for Java and NodeJS applications support Application Insights instrumentation.
+
+1. Use the following commands to retrieve the Application Insights connection string and set it in Key Vault:
+
+ ```azurecli
+ CONNECTION_STRING=$(az monitor app-insights component show \
+ --resource-group <resource-group-name> \
+ --app <app-insights-name> | jq -r '.connectionString')
+
+ az keyvault secret set \
+ --vault-name <key-vault-name> \
+ --name "ApplicationInsights--ConnectionString" \
+ --value "${CONNECTION_STRING}"
+ ```
+
+ > [!NOTE]
+ > By default, the Application Insights service instance has the same name as the Azure Spring Apps service instance.
+
+1. Use the following command to update the sampling rate for the Application Insights binding to increase the amount of data available:
+
+ ```azurecli
+ az spring build-service builder buildpack-binding set \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --builder-name default \
+ --name default \
+ --type ApplicationInsights \
+ --properties sampling-rate=100 connection_string="${CONNECTION_STRING}"
+ ```
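+
+ Optionally, verify the binding's updated properties with the corresponding `show` command:
+
+ ```azurecli
+ az spring build-service builder buildpack-binding show \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --builder-name default \
+ --name default
+ ```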
+
+1. Use the following commands to restart applications to reload configuration:
+
+ ```azurecli
+ az spring app restart \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --name cart-service
+
+ az spring app restart \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --name order-service
+
+ az spring app restart \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --name catalog-service
+
+ az spring app restart \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --name frontend
+
+ az spring app restart \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --name identity-service
+ ```
+
+ For the Java and NodeJS applications, restarting allows the new sampling rate to take effect. For the other applications, restarting allows them to access the newly added connection string from Key Vault.
+
+## View logs
+
+There are two ways to see logs on Azure Spring Apps: log streaming of real-time logs per app instance, or **Log Analytics** for aggregated logs with advanced query capability.
+
+### Use log streaming
+
+Generate traffic by moving through the application, viewing the catalog, and placing orders. Use the following commands to generate traffic continuously until canceled:
+
+```azurecli
+GATEWAY_URL=$(az spring gateway show \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> | jq -r '.properties.url')
+
+cd traffic-generator
+GATEWAY_URL=https://${GATEWAY_URL} ./gradlew gatlingRun-com.vmware.acme.simulation.GuestSimulation
+```
+
+Use the following command to get the latest 100 lines of application console logs from the Catalog Service application:
+
+```azurecli
+az spring app logs \
+ --resource-group <resource-group-name> \
+ --name catalog-service \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --lines 100
+```
+
+By adding the `--follow` option, you can get real-time log streaming from an app. Use the following command to try log streaming for the Catalog Service application:
+
+```azurecli
+az spring app logs \
+ --resource-group <resource-group-name> \
+ --name catalog-service \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --follow
+```
+
+> [!TIP]
+> You can use `az spring app logs --help` to explore more parameters and log streaming capabilities.
+
+### Use Log Analytics
+
+Navigate to the Azure portal and open the Log Analytics instance that you created. You can find the Log Analytics instance in the same resource group where you created the Azure Spring Apps service instance.
+
+On the Log Analytics page, select the **Logs** pane and run any of the following sample queries for Azure Spring Apps.
+
+Type and run the following Kusto query to see application logs:
+
+```kusto
+AppPlatformLogsforSpring
+| where TimeGenerated > ago(24h)
+| limit 500
+| sort by TimeGenerated
+| project TimeGenerated, AppName, Log
+```
+
+This query produces results similar to the ones shown in the following screenshot:
++
+Type and run the following Kusto query to see `catalog-service` application logs:
+
+```kusto
+AppPlatformLogsforSpring
+| where AppName has "catalog-service"
+| limit 500
+| sort by TimeGenerated
+| project TimeGenerated, AppName, Log
+```
+
+This query produces results similar to the ones shown in the following screenshot:
++
+Type and run the following Kusto query to see errors and exceptions thrown by each app:
+
+```kusto
+AppPlatformLogsforSpring
+| where Log contains "error" or Log contains "exception"
+| extend FullAppName = strcat(ServiceName, "/", AppName)
+| summarize count_per_app = count() by FullAppName, ServiceName, AppName, _ResourceId
+| sort by count_per_app desc
+| render piechart
+```
+
+This query produces results similar to the ones shown in the following screenshot:
++
+Type and run the following Kusto query to see all the inbound calls into Azure Spring Apps:
+
+```kusto
+AppPlatformIngressLogs
+| project TimeGenerated, RemoteAddr, Host, Request, Status, BodyBytesSent, RequestTime, ReqId, RequestHeaders
+| sort by TimeGenerated
+```
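+
+You can also run these queries from the command line instead of the portal. The following sketch summarizes inbound calls by status code; it assumes you substitute your Log Analytics workspace GUID, which you can find on the workspace **Overview** page:
+
+```azurecli
+az monitor log-analytics query \
+ --workspace <log-analytics-workspace-guid> \
+ --analytics-query "AppPlatformIngressLogs | summarize request_count = count() by Status | sort by request_count desc" \
+ --output table
+```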
+
+Type and run the following Kusto query to see all the logs from Spring Cloud Gateway managed by Azure Spring Apps:
+
+```kusto
+AppPlatformSystemLogs
+| where LogType contains "SpringCloudGateway"
+| project TimeGenerated,Log
+```
+
+This query produces results similar to the ones shown in the following screenshot:
++
+Type and run the following Kusto query to see all the logs from the Service Registry managed by Azure Spring Apps:
+
+```kusto
+AppPlatformSystemLogs
+| where LogType contains "ServiceRegistry"
+| project TimeGenerated, Log
+```
+
+This query produces results similar to the ones shown in the following screenshot:
++
+## Use tracing
+
+In the Azure portal, open the Application Insights instance created by Azure Spring Apps and start monitoring Spring Boot applications. You can find the Application Insights instance in the same resource group where you created an Azure Spring Apps service instance.
+
+Navigate to the **Application map** pane, which will be similar to the following screenshot:
++
+Navigate to the **Performance** pane, which will be similar to the following screenshot:
++
+Navigate to the **Performance/Dependencies** pane. Here you can see the performance number for dependencies, particularly SQL calls, similar to what's shown in the following screenshot:
++
+Navigate to the **Performance/Roles** pane. Here you can see the performance metrics for individual instances or roles, similar to what's shown in the following screenshot:
++
+Select a SQL call to see the end-to-end transaction in context, similar to what's shown in the following screenshot:
++
+Navigate to the **Failures/Exceptions** pane. Here you can see a collection of exceptions, similar to what's shown in the following screenshot:
++
+## View metrics
+
+Navigate to the **Metrics** pane. Here you can see metrics contributed by Spring Boot apps, Spring Cloud modules, and dependencies. The chart in the following screenshot shows **http_server_requests** and **Heap Memory Used**:
++
+Spring Boot registers a large number of core metrics: JVM, CPU, Tomcat, Logback, and so on.
+The Spring Boot auto-configuration enables the instrumentation of requests handled by Spring MVC.
+The REST controllers `ProductController` and `PaymentController` have been instrumented by the `@Timed` Micrometer annotation at the class level.
+
+The `acme-catalog` application has the custom metric `store.products` enabled through the `@Timed` annotation.
+
+The `acme-payment` application has the custom metric `store.payment` enabled through the `@Timed` annotation.
+
+You can see these custom metrics in the **Metrics** pane, as shown in the following screenshot.
++
+Navigate to the **Live Metrics** pane. Here you can see live metrics with latencies of less than one second, as shown in the following screenshot:
++
+## Working with other monitoring tools
+
+Azure Spring Apps Enterprise tier also supports exporting metrics to other tools, including the following:
+
+- AppDynamics
+- ApacheSkyWalking
+- Dynatrace
+- ElasticAPM
+- NewRelic
+
+You can add more bindings to a builder in Tanzu Build Service by using the following command:
+
+```azurecli
+az spring build-service builder buildpack-binding create \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --builder-name <builder-name> \
+ --name <binding-name> \
+ --type <ApplicationInsights|AppDynamics|ApacheSkyWalking|Dynatrace|ElasticAPM|NewRelic> \
+ --properties <connection-properties> \
+ --secrets <secret-properties>
+```
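+
+To see which bindings already exist on a builder before adding more, you can list them. This example assumes the default builder:
+
+```azurecli
+az spring build-service builder buildpack-binding list \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --builder-name default
+```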
+
+## Clean up resources
+
+If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When no longer needed, delete the resource group, which deletes the resources in the resource group. To delete the resource group by using Azure CLI, use the following commands:
+
+```azurecli
+echo "Enter the Resource Group name:" &&
+read resourceGroupName &&
+az group delete --name $resourceGroupName &&
+echo "Press [ENTER] to continue ..."
+```
+
+## Next steps
+
+Continue on to any of the following optional quickstarts:
+
+- [Configure single sign-on](quickstart-configure-single-sign-on-enterprise.md)
+- [Integrate Azure Database for PostgreSQL and Azure Cache for Redis](quickstart-integrate-azure-database-and-redis-enterprise.md)
+- [Load application secrets using Key Vault](quickstart-key-vault-enterprise.md)
+- [Set request rate limits](quickstart-set-request-rate-limits-enterprise.md)
+- [Automate deployments](quickstart-automate-deployments-github-actions-enterprise.md)
spring-cloud Quickstart Provision Service Instance Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-provision-service-instance-enterprise.md
- Title: "Quickstart - Provision an Azure Spring Apps service instance using the Enterprise tier"
-description: Describes the creation of an Azure Spring Apps service instance for app deployment using the Enterprise tier.
---- Previously updated : 02/09/2022---
-# Quickstart: Provision an Azure Spring Apps service instance using the Enterprise tier
-
-> [!NOTE]
-> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-
-**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
-
-This quickstart shows you how to create an Azure Spring Apps service instance using the Enterprise tier.
-
-## Prerequisites
--- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- A license for Azure Spring Apps Enterprise Tier. For more information, see [View Azure Spring Apps Enterprise Tier Offer in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).-- [The Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli).-- [!INCLUDE [install-enterprise-extension](includes/install-enterprise-extension.md)]-
-## Provision a service instance
-
-Use the following steps to provision an Azure Spring Apps service instance:
-
-### [Portal](#tab/azure-portal)
-
-1. Open the [Azure portal](https://ms.portal.azure.com/).
-
-1. In the top search box, search for *Azure Spring Apps*.
-
-1. Select **Azure Spring Apps** from the **Services** results.
-
-1. On the **Azure Spring Apps** page, select **Create**.
-
-1. On the Azure Spring Apps **Create** page, select **Change** next to the **Pricing** option, then select the **Enterprise** tier.
-
- :::image type="content" source="media/enterprise/getting-started-enterprise/choose-enterprise-tier.png" alt-text="Screenshot of Azure portal Azure Spring Apps creation page with Basics section and 'Choose your pricing tier' pane showing." lightbox="media/enterprise/getting-started-enterprise/choose-enterprise-tier.png":::
-
- Select the **Terms** checkbox to agree to the legal terms and privacy statements of the Enterprise tier offering in the Azure Marketplace.
-
-1. To configure VMware Tanzu components, select **Next: VMware Tanzu settings**.
-
- > [!NOTE]
- > All Tanzu components are enabled by default. Be sure to carefully consider which Tanzu components you want to use or enable during the provisioning phase. After provisioning the Azure Spring Apps instance, you can't enable or disable Tanzu components.
-
- :::image type="content" source="media/enterprise/getting-started-enterprise/create-instance-tanzu-settings-public-preview.png" alt-text="Screenshot of Azure portal Azure Spring Apps creation page with V M ware Tanzu Settings section showing." lightbox="media/enterprise/getting-started-enterprise/create-instance-tanzu-settings-public-preview.png":::
-
-1. Select the **Application Insights** section, then select **Enable Application Insights**. You can also enable Application Insights after you provision the Azure Spring Apps instance.
-
- - Choose an existing Application Insights instance or create a new Application Insights instance.
- - Give a **Sampling Rate** with in the range of 0-100, or use the default value 10.
-
- > [!NOTE]
- > You'll pay for the usage of Application Insights when integrated with Azure Spring Apps. For more information about Application Insights pricing, see [Application Insights billing](../azure-monitor/logs/cost-logs.md#application-insights-billing).
-
- :::image type="content" source="media/enterprise/getting-started-enterprise/application-insights.png" alt-text="Screenshot of Azure portal Azure Spring Apps creation page with Application Insights section showing." lightbox="media/enterprise/getting-started-enterprise/application-insights.png":::
-
-1. Select **Review and create**. After validation completes successfully, select **Create** to start provisioning the service instance.
-
-It takes about 5 minutes to finish the resource provisioning.
-
-### [Azure CLI](#tab/azure-cli)
-
-1. Update Azure CLI with the Azure Spring Apps extension by using the following command:
-
- ```azurecli
- az extension update --name spring
- ```
-
-1. Sign in to the Azure CLI and choose your active subscription by using the following command:
-
- ```azurecli
- az login
- az account list --output table
- az account set --subscription <subscription-ID>
- ```
-
-1. Use the following command to accept the legal terms and privacy statements for the Enterprise tier. This step is necessary only if your subscription has never been used to create an Enterprise tier instance of Azure Spring Apps.
-
- ```azurecli
- az provider register --namespace Microsoft.SaaS
- az term accept --publisher vmware-inc --product azure-spring-cloud-vmware-tanzu-2 --plan tanzu-asc-ent-mtr
- ```
-
-1. Prepare a name for your Azure Spring Apps service instance. The name must be between 4 and 32 characters long and can contain only lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
-
-1. Create a resource group and an Azure Spring Apps service instance using the following the command:
-
- ```azurecli
- az group create --name <resource-group-name>
- az spring create \
- --resource-group <resource-group-name> \
- --name <service-instance-name> \
- --sku enterprise
- ```
-
- For more information about resource groups, see [What is Azure Resource Manager?](../azure-resource-manager/management/overview.md).
-
-1. Set your default resource group name and Spring Cloud service name using the following command:
-
- ```azurecli
- az config set defaults.group=<resource-group-name> defaults.spring-cloud=<service-instance-name>
- ```
---
-## Clean up resources
-
-If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When no longer needed, delete the resource group, which deletes the resources in the resource group. To delete the resource group by using Azure CLI, use the following commands:
-
-```azurecli
-echo "Enter the Resource Group name:" &&
-read resourceGroupName &&
-az group delete --name $resourceGroupName &&
-echo "Press [ENTER] to continue ..."
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Quickstart: Set up Application Configuration Service for Tanzu](quickstart-setup-application-configuration-service-enterprise.md)
spring-cloud Quickstart Sample App Acme Fitness Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-sample-app-acme-fitness-store-introduction.md
+
+ Title: Introduction to the Fitness Store sample app
+
+description: Describes the sample app used in this series of quickstarts for deployment to Azure Spring Apps Enterprise tier.
++++ Last updated : 05/31/2022+++
+# Introduction to the Fitness Store sample app
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
+
+This quickstart describes the [fitness store](https://github.com/Azure-Samples/acme-fitness-store) sample application, which shows you how to deploy polyglot applications to Azure Spring Apps Enterprise tier. You'll see how these applications are built and deployed by using Enterprise tier capabilities, including Tanzu Build Service, Service Discovery, externalized configuration with Application Configuration Service, application routing with Spring Cloud Gateway, logs, metrics, and distributed tracing.
+
+The following diagram shows a common application architecture:
++
+This architecture shows an application composed of smaller applications with a gateway, multiple databases, security services, monitoring, and automation.
+
+This quickstart applies this architecture to a Fitness Store application. This application is composed of the following services split up by domain:
+
+- Three Java Spring Boot applications:
+ - **Catalog Service** contains an API for fetching available products.
+ - **Payment Service** validates and processes payments for users' orders.
+ - **Identity Service** provides a reference to the authenticated user.
+
+- One Python application:
+ - **Cart Service** manages users' items that have been selected for purchase.
+
+- One ASP.NET Core application:
+ - **Order Service** places orders to buy products that are in the users' carts.
+
+- One NodeJS and static HTML application:
+ - **Frontend** is the shopping application that depends on the other services.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Quickstart: Build and deploy apps to Azure Spring Apps Enterprise tier](quickstart-deploy-apps-enterprise.md)
spring-cloud Quickstart Set Request Rate Limits Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-set-request-rate-limits-enterprise.md
+
+ Title: "Quickstart - Set request rate limits"
+
+description: Explains how to set request rate limits by using Spring Cloud Gateway on Azure Spring Apps Enterprise tier.
++++ Last updated : 05/31/2022+++
+# Quickstart: Set request rate limits
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
+
+This quickstart shows you how to set request rate limits by using Spring Cloud Gateway on Azure Spring Apps Enterprise tier.
+
+Rate limiting enables you to avoid problems that arise with spikes in traffic. When you set request rate limits, your application can reject excessive requests. This configuration helps you minimize throttling errors and more accurately predict throughput.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A license for Azure Spring Apps Enterprise tier. For more information, see [View Azure Spring Apps Enterprise tier Offer in Azure Marketplace](how-to-enterprise-marketplace-offer.md).
+- [The Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli).
+- [Git](https://git-scm.com/).
+- [jq](https://stedolan.github.io/jq/download/)
+- [!INCLUDE [install-enterprise-extension](includes/install-enterprise-extension.md)]
+- Complete the steps in [Build and deploy apps to Azure Spring Apps using the Enterprise tier](quickstart-deploy-apps-enterprise.md).
+
+## Set request rate limits
+
+Spring Cloud Gateway includes the route filters of the open-source version, along with several additional route filters. One of these is the [RateLimit: Limiting user requests filter](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/1.1/scg-k8s/GUID-route-filters.html#ratelimit-limiting-user-requests-filter), which limits the number of requests allowed per route during a time window.
+
+When defining a route, you can add the RateLimit filter by including it in the list of filters for the route. The filter accepts four options:
+
+- The number of requests accepted during the window.
+- The duration of the window. This value is in milliseconds by default, but you can specify a suffix of *s*, *m*, or *h* to indicate that the value is in seconds, minutes, or hours.
+- (Optional) A user partition key. You can also apply rate limiting per user. That is, different users can have their own throughput allowed based on an identifier found in the request. Indicate whether the key is in a JWT claim or HTTP header with `claim` or `header` syntax.
+- (Optional) You can rate limit by IP addresses, but not in combination with rate limiting per user.
+
+The following example would limit all users to two requests every five seconds to the `/products` route:
+
+```json
+{
+ "predicates": [
+ "Path=/products",
+ "Method=GET"
+ ],
+ "filters": [
+ "StripPrefix=0",
+ "RateLimit=2,5s"
+ ]
+}
+```
+
+If you want to expose a route for different sets of users, each one identified by its own `client_id` HTTP header, use the following route definition:
+
+```json
+{
+ "predicates": [
+ "Path=/products",
+ "Method=GET"
+ ],
+ "filters": [
+ "StripPrefix=0",
+ "RateLimit=2,5s,{header:client_id}"
+ ]
+}
+```
+
+When the limit is exceeded, responses fail with a `429 Too Many Requests` status.
+
+Use the following command to apply the `RateLimit` filter to the `/products` route:
+
+```azurecli
+az spring gateway route-config update \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --name catalog-routes \
+ --app-name catalog-service \
+ --routes-file azure/routes/catalog-service_rate-limit.json
+```
+
+Use the following commands to retrieve the URL for the `/products` route in Spring Cloud Gateway:
+
+```azurecli
+GATEWAY_URL=$(az spring gateway show \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-service-instance-name> | jq -r '.properties.url')
+
+echo "https://${GATEWAY_URL}/products"
+```
+
+Make several requests to the URL for `/products` within a five-second period to see requests fail with a `429 Too Many Requests` status, for example by using the loop shown below.
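+
+The following shell loop is one way to generate those requests; it assumes the `GATEWAY_URL` variable set in the previous step. After the limit is reached, the printed status codes change from `200` to `429`:
+
+```azurecli
+# Send five requests in quick succession and print only the status codes.
+for i in 1 2 3 4 5; do
+ curl -s -o /dev/null -w "%{http_code}\n" "https://${GATEWAY_URL}/products"
+done
+```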
+
+## Clean up resources
+
+If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When no longer needed, delete the resource group, which deletes the resources in the resource group. To delete the resource group by using Azure CLI, use the following commands:
+
+```azurecli
+echo "Enter the Resource Group name:" &&
+read resourceGroupName &&
+az group delete --name $resourceGroupName &&
+echo "Press [ENTER] to continue ..."
+```
+
+## Next steps
+
+Continue on to any of the following optional quickstarts:
+
+- [Configure single sign-on](quickstart-configure-single-sign-on-enterprise.md)
+- [Integrate Azure Database for PostgreSQL and Azure Cache for Redis](quickstart-integrate-azure-database-and-redis-enterprise.md)
+- [Load application secrets using Key Vault](quickstart-key-vault-enterprise.md)
+- [Monitor applications end-to-end](quickstart-monitor-end-to-end-enterprise.md)
+- [Automate deployments](quickstart-automate-deployments-github-actions-enterprise.md)
spring-cloud Quickstart Setup Application Configuration Service Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-setup-application-configuration-service-enterprise.md
- Title: "Quickstart - Set up Application Configuration Service for Tanzu for Azure Spring Apps Enterprise tier"
-description: Describes how to set up Application Configuration Service for Tanzu for Azure Spring Apps Enterprise tier.
---- Previously updated : 02/09/2022---
-# Quickstart: Set up Application Configuration Service for Tanzu
-
-> [!NOTE]
-> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-
-**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
-
-This quickstart shows you how to set up Application Configuration Service for VMware Tanzu® for use with Azure Spring Apps Enterprise tier.
-
-> [!NOTE]
-> To use Application Configuration Service for Tanzu, you must enable it when you provision your Azure Spring Apps service instance. You cannot enable it after provisioning at this time.
-
-## Prerequisites
--- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- A license for Azure Spring Apps Enterprise Tier. For more information, see [View Azure Spring Apps Enterprise Tier offering from Azure Marketplace](./how-to-enterprise-marketplace-offer.md).-- [Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli).-- [Apache Maven](https://maven.apache.org/download.cgi)-- [!INCLUDE [install-enterprise-extension](includes/install-enterprise-extension.md)]-
-## Use Application Configuration Service for Tanzu
-
-To use Application Configuration Service for Tanzu, follow these steps.
-
-### [Portal](#tab/azure-portal)
-
-1. Select **Application Configuration Service**.
-1. Select **Overview** to view the running state and resources allocated to Application Configuration Service for Tanzu.
-
- :::image type="content" source="media/enterprise/getting-started-enterprise/config-service-overview.png" alt-text="Screenshot of Azure portal Azure Spring Apps with Application Configuration Service page and Overview section showing.":::
-
-1. Select **Settings** and add a new entry in the **Repositories** section with the following information:
-
- - Name: `default`
- - Patterns: `api-gateway,customers-service`
- - URI: `https://github.com/Azure-Samples/spring-petclinic-microservices-config`
- - Label: `master`
-
-1. Select **Validate** to validate access to the target URI. After validation completes successfully, select **Apply** to update the configuration settings.
-
- :::image type="content" source="media/enterprise/getting-started-enterprise/config-service-settings.png" alt-text="Screenshot of Azure portal Azure Spring Apps with Application Configuration Service page and Settings section showing.":::
-
-### [Azure CLI](#tab/azure-cli)
-
-To set the default repository, use the following command:
-
-```azurecli
-az spring application-configuration-service git repo add \
- --name default \
- --patterns api-gateway,customers-service \
- --uri https://github.com/Azure-Samples/spring-petclinic-microservices-config.git \
- --label master
-```
---
-## Clean up resources
-
-If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When no longer needed, delete the resource group, which deletes the resources in the resource group. To delete the resource group by using Azure CLI, use the following commands:
-
-```azurecli
-echo "Enter the Resource Group name:" &&
-read resourceGroupName &&
-az group delete --name $resourceGroupName &&
-echo "Press [ENTER] to continue ..."
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise tier](quickstart-deploy-apps-enterprise.md)
spring-cloud Quickstart Setup Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-setup-log-analytics.md
ms.devlang: azurecli
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard tier ❌ Enterprise tier
This quickstart explains how to set up a Log Analytics workspace in Azure Spring Apps for application development.
spring-cloud Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/reference-architecture.md
Previously updated : 02/16/2021 Last updated : 05/31/2022 Title: Azure Spring Apps reference architecture
description: This reference architecture is a foundation using a typical enterpr
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Standard tier ✔️ Enterprise tier
This reference architecture is a foundation using a typical enterprise hub and spoke design for the use of Azure Spring Apps. In the design, Azure Spring Apps is deployed in a single spoke that's dependent on shared services hosted in the hub. The architecture is built with components to achieve the tenets in the [Microsoft Azure Well-Architected Framework][16].
+Azure Spring Apps is available in two tiers: Standard tier and Enterprise tier.
+
+Azure Spring Apps Standard tier is composed of the Spring Cloud Config Server, the Spring Cloud Service Registry, and the kpack build service.
+
+Azure Spring Apps Enterprise tier is composed of the VMware Tanzu® Build Service™, Application Configuration Service for VMware Tanzu®, VMware Tanzu® Service Registry, Spring Cloud Gateway for VMware Tanzu®, and API portal for VMware Tanzu®.
+ For an implementation of this architecture, see the [Azure Spring Apps Reference Architecture][10] repository on GitHub. Deployment options for this architecture include Azure Resource Manager (ARM), Terraform, Azure CLI, and Bicep. The artifacts in this repository provide a foundation that you can customize for your environment. You can group resources such as Azure Firewall or Application Gateway into different resource groups or subscriptions. This grouping helps keep different functions separate, such as IT infrastructure, security, business application teams, and so on.
The following list describes the Azure services in this reference architecture:
* [Azure Spring Apps][1]: a managed service that's designed and optimized specifically for Java-based Spring Boot applications and .NET-based [Steeltoe][9] applications.
-The following diagram represents a well-architected hub and spoke design that addresses the above requirements:
+The following diagrams represent a well-architected hub and spoke design that addresses the above requirements:
+
+### [Standard tier](#tab/azure-spring-standard)
-![Reference architecture diagram for private applications](./media/spring-cloud-reference-architecture/architecture-private.png)
+
+### [Enterprise tier](#tab/azure-spring-enterprise)
+++ ## Public applications
-The following list describes the infrastructure requirements for public applications. These requirements are typical in highly regulated environments. These requirements are a superset of those in the preceding section. Additional items are indicated with italics.
+The following list describes the infrastructure requirements for public applications. These requirements are typical in highly regulated environments.
* A subnet must only have one instance of Azure Spring Apps. * Adherence to at least one Security Benchmark should be enforced. * Application host Domain Name Service (DNS) records should be stored in Azure Private DNS.
-* _Azure DDoS Protection standard should be enabled._
+* Azure DDoS Protection standard should be enabled.
* Azure service dependencies should communicate through Service Endpoints or Private Link. * Data at rest should be encrypted. * Data in transit should be encrypted. * DevOps deployment pipelines can be used (for example, Azure DevOps) and require network connectivity to Azure Spring Apps. * Egress traffic should travel through a central Network Virtual Appliance (NVA) (for example, Azure Firewall).
-* _Ingress traffic should be managed by at least Application Gateway or Azure Front Door._
-* _Internet routable addresses should be stored in Azure Public DNS._
+* Ingress traffic should be managed by at least Application Gateway or Azure Front Door.
+* Internet routable addresses should be stored in Azure Public DNS.
* Microsoft's Zero Trust security approach requires secrets, certificates, and credentials to be stored in a secure vault. The recommended service is Azure Key Vault. * Name resolution of hosts on-premises and in the Cloud should be bidirectional. * No direct egress to the public Internet except for control plane traffic.
The following list shows the components that make up the design:
The following list describes the Azure services in this reference architecture:
-* _[Azure Application Firewall][7]: a feature of Azure Application Gateway that provides centralized protection of applications from common exploits and vulnerabilities._
+* [Azure Application Firewall][7]: a feature of Azure Application Gateway that provides centralized protection of applications from common exploits and vulnerabilities.
-* _[Azure Application Gateway][6]: a load balancer responsible for application traffic with Transport Layer Security (TLS) offload operating at layer 7._
+* [Azure Application Gateway][6]: a load balancer responsible for application traffic with Transport Layer Security (TLS) offload operating at layer 7.
* [Azure Key Vault][2]: a hardware-backed credential management service that has tight integration with Microsoft identity services and compute resources.
The following list describes the Azure services in this reference architecture:
* [Azure Spring Apps][1]: a managed service that's designed and optimized specifically for Java-based Spring Boot applications and .NET-based [Steeltoe][9] applications.
-The following diagram represents a well-architected hub and spoke design that addresses the above requirements. Note that only the hub-virtual-network communicates with the internet:
+The following diagrams represent a well-architected hub and spoke design that addresses the above requirements. Only the hub-virtual-network communicates with the internet:
-![Reference architecture diagram for public applications](./media/spring-cloud-reference-architecture/architecture-public.png)
+### [Standard tier](#tab/azure-spring-standard)
++
+### [Enterprise tier](#tab/azure-spring-enterprise)
+++ ## Azure Spring Apps on-premises connectivity
The primary aspect of governance that this architecture addresses is segregation
The following list shows the control that addresses datacenter security in this reference:
-| CSA CCM Control ID | CSA CCM Control Domain |
-| :-- | :-|
-| DCS-08 | Datacenter Security Unauthorized Persons Entry |
+| CSA CCM Control ID | CSA CCM Control Domain |
+|:-|:--|
+| DCS-08 | Datacenter Security Unauthorized Persons Entry |
#### Network
The network implementation is further secured by defining controls from the MAFB
The following list shows the CIS controls that address network security in this reference:
-| CIS Control ID | CIS Control Description |
-| :- | :- |
-| 6.2 | Ensure that SSH access is restricted from the internet. |
-| 6.3 | Ensure no SQL Databases allow ingress 0.0.0.0/0 (ANY IP). |
-| 6.5 | Ensure that Network Watcher is 'Enabled'. |
-| 6.6 | Ensure that ingress using UDP is restricted from the internet. |
+| CIS Control ID | CIS Control Description |
+|:|:|
+| 6.2 | Ensure that SSH access is restricted from the internet. |
+| 6.3 | Ensure no SQL Databases allow ingress 0.0.0.0/0 (ANY IP). |
+| 6.5 | Ensure that Network Watcher is 'Enabled'. |
+| 6.6 | Ensure that ingress using UDP is restricted from the internet. |
-Azure Spring Apps requires management traffic to egress from Azure when deployed in a secured environment. To accomplish this, you must allow the network and application rules listed in [Customer responsibilities for running Azure Spring Apps in VNET](./vnet-customer-responsibilities.md).
+Azure Spring Apps requires management traffic to egress from Azure when deployed in a secured environment. You must allow the network and application rules listed in [Customer responsibilities for running Azure Spring Apps in VNET](./vnet-customer-responsibilities.md).
#### Application security
-This design principal covers the fundamental components of identity, data protection, key management, and application configuration. By design, an application deployed in Azure Spring Apps runs with least privilege required to function. The set of authorization controls is directly related to data protection when using the service. Key management strengthens this layered application security approach.
+This design principle covers the fundamental components of identity, data protection, key management, and application configuration. By design, an application deployed in Azure Spring Apps runs with least privilege required to function. The set of authorization controls is directly related to data protection when using the service. Key management strengthens this layered application security approach.
The following list shows the CCM controls that address key management in this reference:
-| CSA CCM Control ID | CSA CCM Control Domain |
-| :-- | : |
-| EKM-01 | Encryption and Key Management Entitlement |
-| EKM-02 | Encryption and Key Management Key Generation |
-| EKM-03 | Encryption and Key Management Sensitive Data Protection |
-| EKM-04 | Encryption and Key Management Storage and Access |
+| CSA CCM Control ID | CSA CCM Control Domain |
+|:-|:--|
+| EKM-01 | Encryption and Key Management Entitlement |
+| EKM-02 | Encryption and Key Management Key Generation |
+| EKM-03 | Encryption and Key Management Sensitive Data Protection |
+| EKM-04 | Encryption and Key Management Storage and Access |
From the CCM, EKM-02, and EKM-03 recommend policies and procedures to manage keys and to use encryption protocols to protect sensitive data. EKM-01 recommends that all cryptographic keys have identifiable owners so that they can be managed. EKM-04 recommends the use of standard algorithms. The following list shows the CIS controls that address key management in this reference:
-| CIS Control ID | CIS Control Description |
-| :- | :- |
-| 8.1 | Ensure that the expiration date is set on all keys. |
-| 8.2 | Ensure that the expiration date is set on all secrets. |
-| 8.4 | Ensure the key vault is recoverable. |
+| CIS Control ID | CIS Control Description |
+|:|:-|
+| 8.1 | Ensure that the expiration date is set on all keys. |
+| 8.2 | Ensure that the expiration date is set on all secrets. |
+| 8.4 | Ensure the key vault is recoverable. |
The CIS controls 8.1 and 8.2 recommend that expiration dates are set for credentials to ensure that rotation is enforced. CIS control 8.4 ensures that the contents of the key vault can be restored to maintain business continuity.
synapse-analytics Apache Spark Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-overview.md
Title: What is Apache Spark
+ Title: Apache Spark in Azure Synapse Analytics overview
description: This article provides an introduction to Apache Spark in Azure Synapse Analytics and the different scenarios in which you can use Spark.
Previously updated : 02/15/2022 Last updated : 05/23/2022 + # Apache Spark in Azure Synapse Analytics
-Apache Spark is a parallel processing framework that supports in-memory processing to boost the performance of big-data analytic applications. Apache Spark in Azure Synapse Analytics is one of Microsoft's implementations of Apache Spark in the cloud. Azure Synapse makes it easy to create and configure a serverless Apache Spark pool in Azure. Spark pools in Azure Synapse are compatible with Azure Storage and Azure Data Lake Generation 2 Storage. So you can use Spark pools to process your data stored in Azure.
+Apache Spark is a parallel processing framework that supports in-memory processing to boost the performance of big data analytic applications. Apache Spark in Azure Synapse Analytics is one of Microsoft's implementations of Apache Spark in the cloud. Azure Synapse makes it easy to create and configure a serverless Apache Spark pool in Azure. Spark pools in Azure Synapse are compatible with Azure Storage and Azure Data Lake Generation 2 Storage. So you can use Spark pools to process your data stored in Azure.
-![Spark: a unified framework](./media/apache-spark-overview/spark-overview.png)
+![Diagram shows Spark SQL, Spark MLib, and GraphX linked to the Spark core engine, above a YARN layer over storage services.](./media/apache-spark-overview/spark-overview.png)
## What is Apache Spark Apache Spark provides primitives for in-memory cluster computing. A Spark job can load and cache data into memory and query it repeatedly. In-memory computing is much faster than disk-based applications. Spark also integrates with multiple programming languages to let you manipulate distributed data sets like local collections. There's no need to structure everything as map and reduce operations.
-![Traditional MapReduce vs. Spark](./media/apache-spark-overview/map-reduce-vs-spark.png)
+![Diagram shows Traditional MapReduce, with disk-based apps and Spark, with cache-based operations.](./media/apache-spark-overview/map-reduce-vs-spark.png)
Spark pools in Azure Synapse offer a fully managed Spark service. The benefits of creating a Spark pool in Azure Synapse Analytics are listed here. | Feature | Description | | | |
-| Speed and efficiency |Spark instances start in approximately 2 minutes for fewer than 60 nodes and approximately 5 minutes for more than 60 nodes. The instance shuts down, by default, 5 minutes after the last job executed unless it is kept alive by a notebook connection. |
+| Speed and efficiency |Spark instances start in approximately 2 minutes for fewer than 60 nodes and approximately 5 minutes for more than 60 nodes. The instance shuts down, by default, 5 minutes after the last job runs unless it's kept alive by a notebook connection. |
| Ease of creation |You can create a new Spark pool in Azure Synapse in minutes using the Azure portal, Azure PowerShell, or the Synapse Analytics .NET SDK. See [Get started with Spark pools in Azure Synapse Analytics](../quickstart-create-apache-spark-pool-studio.md). |
-| Ease of use |Synapse Analytics includes a custom notebook derived from [Nteract](https://nteract.io/). You can use these notebooks for interactive data processing and visualization.|
+| Ease of use |Synapse Analytics includes a custom notebook derived from [nteract](https://nteract.io/). You can use these notebooks for interactive data processing and visualization.|
| REST APIs |Spark in Azure Synapse Analytics includes [Apache Livy](https://github.com/cloudera/hue/tree/master/apps/spark/java#welcome-to-livy-the-rest-spark-server), a REST API-based Spark job server to remotely submit and monitor jobs. |
-| Support for Azure Data Lake Storage Generation 2| Spark pools in Azure Synapse can use Azure Data Lake Storage Generation 2 as well as BLOB storage. For more information on Data Lake Storage, see [Overview of Azure Data Lake Storage](../../data-lake-store/data-lake-store-overview.md). |
+| Support for Azure Data Lake Storage Generation 2| Spark pools in Azure Synapse can use Azure Data Lake Storage Generation 2 and BLOB storage. For more information on Data Lake Storage, see [Overview of Azure Data Lake Storage](../../data-lake-store/data-lake-store-overview.md). |
| Integration with third-party IDEs | Azure Synapse provides an IDE plugin for [JetBrains' IntelliJ IDEA](https://www.jetbrains.com/idea/) that is useful to create and submit applications to a Spark pool. |
-| Pre-loaded Anaconda libraries |Spark pools in Azure Synapse come with Anaconda libraries pre-installed. [Anaconda](https://docs.continuum.io/anaconda/) provides close to 200 libraries for machine learning, data analysis, visualization, etc. |
+| Preloaded Anaconda libraries |Spark pools in Azure Synapse come with Anaconda libraries preinstalled. [Anaconda](https://docs.continuum.io/anaconda/) provides close to 200 libraries for machine learning, data analysis, visualization, and other technologies. |
| Scalability | Apache Spark in Azure Synapse pools can have Auto-Scale enabled, so that pools scale by adding or removing nodes as needed. Also, Spark pools can be shut down with no loss of data since all the data is stored in Azure Storage or Data Lake Storage. |
-Spark pools in Azure Synapse include the following components that are available on the pools by default.
+Spark pools in Azure Synapse include the following components that are available on the pools by default:
- [Spark Core](https://spark.apache.org/docs/2.4.5/). Includes Spark Core, Spark SQL, GraphX, and MLlib.
- [Anaconda](https://docs.continuum.io/anaconda/)
- [Apache Livy](https://github.com/cloudera/hue/tree/master/apps/spark/java#welcome-to-livy-the-rest-spark-server)
-- [Nteract notebook](https://nteract.io/)
+- [nteract notebook](https://nteract.io/)
## Spark pool architecture
-It is easy to understand the components of Spark by understanding how Spark runs on Azure Synapse Analytics.
+Spark applications run as independent sets of processes on a pool, coordinated by the `SparkContext` object in your main program, called the *driver program*.
-Spark applications run as independent sets of processes on a pool, coordinated by the SparkContext object in your main program (called the driver program).
+The `SparkContext` can connect to the cluster manager, which allocates resources across applications. The cluster manager is [Apache Hadoop YARN](https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html). Once connected, Spark acquires executors on nodes in the pool, which are processes that run computations and store data for your application. Next, it sends your application code, defined by JAR or Python files passed to `SparkContext`, to the executors. Finally, `SparkContext` sends tasks to the executors to run.
-The SparkContext can connect to the cluster manager, which allocates resources across applications. The cluster manager is [Apache Hadoop YARN](https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html). Once connected, Spark acquires executors on nodes in the pool, which are processes that run computations and store data for your application. Next, it sends your application code (defined by JAR or Python files passed to SparkContext) to the executors. Finally, SparkContext sends tasks to the executors to run.
+The `SparkContext` runs the user's main function and executes the various parallel operations on the nodes. Then, the `SparkContext` collects the results of the operations. The nodes read and write data from and to the file system. The nodes also cache transformed data in-memory as Resilient Distributed Datasets (RDDs).
-The SparkContext runs the user's main function and executes the various parallel operations on the nodes. Then, the SparkContext collects the results of the operations. The nodes read and write data from and to the file system. The nodes also cache transformed data in-memory as Resilient Distributed Datasets (RDDs).
-
-The SparkContext connects to the Spark pool and is responsible for converting an application to a directed acyclic graph (DAG). The graph consists of individual tasks that get executed within an executor process on the nodes. Each application gets its own executor processes, which stay up for the duration of the whole application and run tasks in multiple threads.
+The `SparkContext` connects to the Spark pool and is responsible for converting an application to a directed acyclic graph (DAG). The graph consists of individual tasks that run within an executor process on the nodes. Each application gets its own executor processes, which stay up during the whole application and run tasks in multiple threads.
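
A minimal sketch of a driver program makes that flow concrete: in a Synapse notebook the `SparkContext` is already available as `sc`, `parallelize` partitions the data across executors, `map` runs as tasks in executor threads, and `reduce` returns the aggregated result to the driver.

```python
# Minimal sketch of a driver program. In a Synapse notebook, `sc`
# (the SparkContext) is created for you.
rdd = sc.parallelize(range(1_000_000), 8)   # an RDD split into 8 partitions across executors

# map runs as tasks inside executor processes; reduce brings the
# aggregated result back to the driver.
total = rdd.map(lambda x: x * x).reduce(lambda a, b: a + b)
print(total)
```
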
## Apache Spark in Azure Synapse Analytics use cases

Spark pools in Azure Synapse Analytics enable the following key scenarios:
-### Data Engineering/Data Preparation
+- Data Engineering/Data Preparation
-Apache Spark includes many language features to support preparation and processing of large volumes of data so that it can be made more valuable and then consumed by other services within Azure Synapse Analytics. This is enabled through multiple languages (C#, Scala, PySpark, Spark SQL) and supplied libraries for processing and connectivity.
+ Apache Spark includes language features to support preparation and processing of large volumes of data so that it can be made more valuable and then consumed by other services within Azure Synapse Analytics. This approach is enabled through multiple languages, including C#, Scala, PySpark, and Spark SQL, and supplied libraries for processing and connectivity.
-### Machine Learning
+- Machine Learning
-Apache Spark comes with [MLlib](https://spark.apache.org/mllib/), a machine learning library built on top of Spark that you can use from a Spark pool in Azure Synapse Analytics. Spark pools in Azure Synapse Analytics also include Anaconda, a Python distribution with a variety of packages for data science including machine learning. When combined with built-in support for notebooks, you have an environment for creating machine learning applications.
+ Apache Spark comes with [MLlib](https://spark.apache.org/mllib/), a machine learning library built on top of Spark that you can use from a Spark pool in Azure Synapse Analytics. Spark pools in Azure Synapse Analytics also include Anaconda, a Python distribution with various packages for data science including machine learning. When combined with built-in support for notebooks, you have an environment for creating machine learning applications, as the sketch after this list shows.
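
A minimal sketch of that combination, assuming a hypothetical storage path and column names (`f1`, `f2`, `label`):

```python
# Minimal sketch: train a logistic regression model with the pyspark.ml API
# from a Synapse notebook. Path and column names are hypothetical.
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler

df = spark.read.parquet("abfss://data@contosolake.dfs.core.windows.net/labeled/")
assembled = VectorAssembler(inputCols=["f1", "f2"], outputCol="features").transform(df)
model = LogisticRegression(featuresCol="features", labelCol="label").fit(assembled)
print(model.coefficients)
```
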
## Where do I start
Use the following articles to learn more about Apache Spark in Azure Synapse Analytics:

- [Apache Spark official documentation](https://spark.apache.org/docs/2.4.5/)

> [!NOTE]
-> Some of the official Apache Spark documentation relies on using the spark console, this is not available on Azure Synapse Spark, use the notebook or IntelliJ experiences instead
+> Some of the official Apache Spark documentation relies on using the Spark console, which is not available on Azure Synapse Spark. Use the notebook or IntelliJ experiences instead.
## Next steps
-In this overview, you get a basic understanding of Apache Spark in Azure Synapse Analytics. Advance to the next article to learn how to create a Spark pool in Azure Synapse Analytics:
+This overview provided a basic understanding of Apache Spark in Azure Synapse Analytics. Advance to the next article to learn how to create a Spark pool in Azure Synapse Analytics:
- [Create a Spark pool in Azure Synapse](../quickstart-create-apache-spark-pool-portal.md)
virtual-desktop Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/authentication.md
Azure Virtual Desktop currently doesn't support [external identities](../active-
To access Azure Virtual Desktop resources, you must first authenticate to the service by signing in to an Azure AD account. Authentication happens when subscribing to a workspace to retrieve your resources or every time you connect to apps or desktops. You can use [third-party identity providers](../active-directory/devices/azureadjoin-plan.md#federated-environment) as long as they federate with Azure AD.
-### Multifactor authentication
+### Multi-factor authentication
-Follow the instructions in [Set up multifactor authentication in Azure Virtual Desktop](set-up-mfa.md) to learn how to enable multifactor authentication (MFA) for your deployment. That article will also tell you how to configure how often your users are prompted to enter their credentials. When deploying Azure AD-joined VMs, follow the configuration guide in [Enabling MFA for Azure AD-joined VMs](deploy-azure-ad-joined-vm.md#enabling-mfa-for-azure-ad-joined-vms).
+Follow the instructions in [Enforce Azure Active Directory Multi-Factor Authentication for Azure Virtual Desktop using Conditional Access](set-up-mfa.md) to learn how to enforce Azure AD Multi-Factor Authentication for your deployment. That article will also tell you how to configure how often your users are prompted to enter their credentials. When deploying Azure AD-joined VMs, note the extra steps for [Azure AD-joined session host VMs](set-up-mfa.md#azure-ad-joined-session-host-vms).
### Smart card authentication
virtual-desktop Deploy Azure Ad Joined Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/deploy-azure-ad-joined-vm.md
To enable access from Windows devices not joined to Azure AD, add **targetisaadj
To access Azure AD-joined VMs using the web, Android, macOS and iOS clients, you must add **targetisaadjoined:i:1** as a [custom RDP property](customize-rdp-properties.md) to the host pool. These connections are restricted to entering user name and password credentials when signing in to the session host.
-### Enabling MFA for Azure AD joined VMs
+### Enforcing Azure AD Multi-Factor Authentication for Azure AD-joined session VMs
-You can enable [multifactor authentication](set-up-mfa.md) for Azure AD-joined VMs by setting a Conditional Access policy on the Azure Virtual Desktop app. For connections to succeed, you must [disable the legacy per-user multifactor authentication](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md#mfa-sign-in-method-required). If you don't want to restrict signing in to strong authentication methods like Windows Hello for Business, you'll also need to [exclude the Azure Windows VM Sign-In app](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md#mfa-sign-in-method-required) from your Conditional Access policy.
+You can use Azure AD Multi-Factor Authentication with Azure AD-joined VMs. Follow the steps to [Enforce Azure Active Directory Multi-Factor Authentication for Azure Virtual Desktop using Conditional Access](set-up-mfa.md) and note the extra steps for [Azure AD-joined session host VMs](set-up-mfa.md#azure-ad-joined-session-host-vms).
## User profiles
virtual-desktop Set Up Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-mfa.md
Title: Azure multifactor authentication for Azure Virtual Desktop - Azure
-description: How to set up Azure multifactor authentication to make Azure Virtual Desktop more secure.
+ Title: Enforce Azure Active Directory Multi-Factor Authentication for Azure Virtual Desktop using Conditional Access - Azure
+description: How to enforce Azure Active Directory Multi-Factor Authentication for Azure Virtual Desktop using Conditional Access to help make it more secure.
Previously updated : 12/10/2020 Last updated : 05/27/2022
-# Enable Azure multifactor authentication for Azure Virtual Desktop
+# Enforce Azure Active Directory Multi-Factor Authentication for Azure Virtual Desktop using Conditional Access
->[!IMPORTANT]
+> [!IMPORTANT]
> If you're visiting this page from the Azure Virtual Desktop (classic) documentation, make sure to [return to the Azure Virtual Desktop (classic) documentation](./virtual-desktop-fall-2019/tenant-setup-azure-active-directory.md) once you're finished.
-The Windows client for Azure Virtual Desktop is an excellent option for integrating Azure Virtual Desktop with your local machine. However, when you configure your Azure Virtual Desktop account into the Windows Client, there are certain measures you'll need to take to keep yourself and your users safe.
+Users can sign in to Azure Virtual Desktop from anywhere using different devices and clients. However, there are certain measures you should take to help keep yourself and your users safe. Using Azure Active Directory (AD) Multi-Factor Authentication with Azure Virtual Desktop prompts users for another form of identification during sign-in, in addition to their username and password. You can enforce Azure AD Multi-Factor Authentication for Azure Virtual Desktop using Conditional Access, and you can configure whether it applies to the web client, mobile apps and desktop clients, or both.
-When you first sign in, the client asks for your username, password, and Azure multifactor authentication. After that, the next time you sign in, the client will remember your token from your Azure Active Directory (AD) Enterprise Application. When you select **Remember me** on the prompt for credentials for the session host, your users can sign in after restarting the client without needing to reenter their credentials.
+How often a user is prompted to reauthenticate depends on [Azure AD session lifetime configuration settings](../active-directory/authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md#azure-ad-session-lifetime-configuration-settings). For example, if their Windows client device is registered with Azure AD, it will receive a [Primary Refresh Token](../active-directory/devices/concept-primary-refresh-token.md) (PRT) to use single sign-on (SSO) across applications. Once issued, a PRT is valid for 14 days and is continuously renewed as long as the user actively uses the device.
-While remembering credentials is convenient, it can also make deployments on Enterprise scenarios or personal devices less secure. To protect your users, you can make sure the client keeps asking for Azure multifactor authentication credentials more frequently. This article will show you how to configure the Conditional Access policy for Azure Virtual Desktop to enable this setting.
+While remembering credentials is convenient, it can also make deployments for Enterprise scenarios using personal devices less secure. To protect your users, you can make sure the client keeps asking for Azure AD Multi-Factor Authentication credentials more frequently. You can use Conditional Access to configure this behavior.
+
+Learn how to enforce Azure AD Multi-Factor Authentication for Azure Virtual Desktop and optionally configure sign-in frequency below.
## Prerequisites

Here's what you'll need to get started:

-- Assign users a license that includes Azure Active Directory Premium P1 or P2.
-- An Azure Active Directory group with your users assigned as group members.
-- Enable Azure multifactor authentication for all your users. For more information about how to do that, see [How to require two-step verification for a user](../active-directory/authentication/howto-mfa-userstates.md#view-the-status-for-a-user).
-
-> [!NOTE]
-> The following setting also applies to the [Azure Virtual Desktop web client](https://rdweb.wvd.microsoft.com/arm/webclient/index.html).
+- Assign users a license that includes [Azure Active Directory Premium P1 or P2](../active-directory/authentication/concept-mfa-licensing.md).
+- An [Azure Active Directory group](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md) with your Azure Virtual Desktop users assigned as group members.
+- Enable Azure AD Multi-Factor Authentication for your users. For more information about how to do that, see [Enable Azure AD Multi-Factor Authentication](../active-directory/authentication/tutorial-enable-azure-mfa.md).
## Create a Conditional Access policy
-Here's how to create a Conditional Access policy that requires multifactor authentication when connecting to Azure Virtual Desktop:
-
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
-2. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-3. Select **New policy**.
-4. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
-5. Under **Assignments**, select **Users and groups**.
-6. Under **Include**, select **Select users and groups** > **Users and groups** > Choose the group you created in the [prerequisites](#prerequisites) stage.
-7. Select **Done**.
-8. Under **Cloud apps or actions** > **Include**, select **Select apps**.
-9. Select one of the following apps based on which version of Azure Virtual Desktop you're using.
+Here's how to create a Conditional Access policy that requires multi-factor authentication when connecting to Azure Virtual Desktop:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator, security administrator, or Conditional Access administrator.
+1. In the search bar, type *Azure Active Directory* and select the matching service entry.
+1. Browse to **Security** > **Conditional Access**.
+1. Select **New policy** > **Create new policy**.
+1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
+1. Under **Assignments**, select **Users or workload identities**.
+1. Under the **Include** tab, select **Select users and groups** and tick **Users and groups**. On the right, search for and choose the group that contains your Azure Virtual Desktop users as group members.
+1. Select **Select**.
+1. Under **Assignments**, select **Cloud apps or actions**.
+1. Under the **Include** tab, select **Select apps**.
+1. On the right, select one of the following apps based on which version of Azure Virtual Desktop you're using.
- - If you're using Azure Virtual Desktop (classic), choose these apps:
-
- - **Windows Virtual Desktop** (App ID 5a0aa725-4958-4b0c-80a9-34562e23f3b7)
- - **Windows Virtual Desktop Client** (App ID fa4345a4-a730-4230-84a8-7d9651b86739), which will let you set policies on the web client
+ - If you're using Azure Virtual Desktop (based on Azure Resource Manager), choose this app:
- After that, skip ahead to step 11.
+ - **Azure Virtual Desktop** (app ID 9cdead84-a844-4324-93f2-b2e6bb768d07)
- - If you're using Azure Virtual Desktop, choose this app instead:
+ > [!TIP]
+ > The app name was previously *Windows Virtual Desktop*. If you registered the *Microsoft.DesktopVirtualization* resource provider before the display name changed, the application will be named **Windows Virtual Desktop** with the same app ID as above.
+
+ After that, go to step 10.
+
+ - If you're using Azure Virtual Desktop (classic), choose these apps:
- - **Azure Virtual Desktop** (App ID 9cdead84-a844-4324-93f2-b2e6bb768d07)
+ - **Windows Virtual Desktop** (app ID 5a0aa725-4958-4b0c-80a9-34562e23f3b7)
+ - **Windows Virtual Desktop Client** (app ID fa4345a4-a730-4230-84a8-7d9651b86739), which will let you set policies on the web client
- After that, go to step 10.
+ > [!TIP]
+ > If you're using Azure Virtual Desktop (classic) and the Conditional Access policy blocks all access except the Azure Virtual Desktop app IDs, you can fix this by also adding the **Azure Virtual Desktop** app (app ID 9cdead84-a844-4324-93f2-b2e6bb768d07) to the policy. Not adding this app ID will block feed discovery of Azure Virtual Desktop (classic) resources.
- >[!IMPORTANT]
- > Don't select the app called Azure Virtual Desktop Azure Resource Manager Provider (50e95039-b200-4007-bc97-8d5790743a63). This app is only used for retrieving the user feed and shouldn't have multifactor authentication.
- >
- > If you're using Azure Virtual Desktop (classic), if the Conditional Access policy blocks all access and only excludes Azure Virtual Desktop app IDs, you can fix this by adding the app ID 9cdead84-a844-4324-93f2-b2e6bb768d07 to the policy. Not adding this app ID will block feed discovery of Azure Virtual Desktop (classic) resources.
+ After that, skip ahead to step 11.
-10. Once you've selected your app, choose **Select**, and then select **Done**.
+ > [!IMPORTANT]
+ > Don't select the app called Azure Virtual Desktop Azure Resource Manager Provider (app ID 50e95039-b200-4007-bc97-8d5790743a63). This app is only used for retrieving the user feed and shouldn't have multi-factor authentication.
- > [!div class="mx-imgBorder"]
- > ![A screenshot of the Cloud apps or actions page. The Azure Virtual Desktop and Azure Virtual Desktop Client apps are highlighted in red.](media/cloud-apps-enterprise.png)
+1. Once you've selected your app, select **Select**.
- >[!NOTE]
- >To find the App ID of the app you want to select, go to **Enterprise Applications** and select **Microsoft Applications** from the application type drop-down menu.
-
-11. Go to **Conditions** > **Client apps**. In **Configure**, select **Yes**, and then select where to apply the policy:
+ > [!div class="mx-imgBorder"]
+ > ![A screenshot of the Conditional Access Cloud apps or actions page. The Azure Virtual Desktop app is shown.](media/cloud-apps-enterprise.png)
+1. Under **Assignments**, select **Conditions** > **Client apps**. On the right, for **Configure**, select **Yes**, and then select the client apps this policy will apply to:
+
+ - Select both check boxes if you want to apply the policy to all clients.
- Select **Browser** if you want the policy to apply to the web client. - Select **Mobile apps and desktop clients** if you want to apply the policy to other clients.
- - Select both check boxes if you want to apply the policy to all clients.
+ - Deselect values for legacy authentication clients.
> [!div class="mx-imgBorder"]
- > ![A screenshot of the Client apps page. The user has selected the mobile apps and desktop clients check box.](media/select-apply.png)
+ > ![A screenshot of the Conditional Access Client apps page. The user has selected the mobile apps and desktop clients, and browser check boxes.](media/conditional-access-client-apps.png)
+
+1. Once you've selected the client apps this policy will apply to, select **Done**.
+1. Under **Assignments**, select **Access controls** > **Grant**, select **Grant access**, **Require multi-factor authentication**, and then select **Select**.
+1. At the bottom of the page, set **Enable policy** to **On** and select **Create**.
+
+> [!NOTE]
+> When you use the web client to sign in to Azure Virtual Desktop through your browser, the log will list the client app ID as a85cf173-4192-42f8-81fa-777a763e6e2c (Azure Virtual Desktop client). This is because the client app is internally linked to the server app ID where the conditional access policy was set.
+
+> [!TIP]
+> Some users may see a prompt titled *Stay signed in to all your apps* if the Windows device they're using isn't already registered with Azure AD. If they deselect **Allow my organization to manage my device** and select **No, sign in to this app only**, the prompt may reappear frequently.
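
If you'd rather script the policy than use the portal, the following Microsoft Graph PowerShell sketch creates a comparable policy. It's a minimal sketch rather than the article's procedure: it assumes the Microsoft.Graph module is installed, the group object ID placeholder is replaced with your own, and the policy starts in report-only mode so you can validate it before enforcing it.

```azurepowershell
# Minimal sketch: a Conditional Access policy requiring MFA for the
# Azure Virtual Desktop app. <avd-users-group-object-id> is a placeholder.
Connect-MgGraph -Scopes 'Policy.ReadWrite.ConditionalAccess'

$policy = @{
    DisplayName = 'Require MFA for Azure Virtual Desktop'
    State       = 'enabledForReportingButNotEnforced'   # switch to 'enabled' after validation
    Conditions  = @{
        Users          = @{ IncludeGroups = @('<avd-users-group-object-id>') }
        Applications   = @{ IncludeApplications = @('9cdead84-a844-4324-93f2-b2e6bb768d07') }
        ClientAppTypes = @('browser', 'mobileAppsAndDesktopClients')
    }
    GrantControls = @{
        Operator        = 'OR'
        BuiltInControls = @('mfa')
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```
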
+
+## Configure sign-in frequency
+
+To optionally configure the time period before a user is asked to sign in again:
+
+1. Open the policy you created previously.
+1. Under **Assignments**, select **Access controls** > **Session**. On the right, select **Sign-in frequency**. Set the value for the time period before a user is asked to sign in again, and then select **Select**. For example, setting the value to **1** and the unit to **Hours** will require multi-factor authentication if a connection is launched over an hour after the last one.
+1. At the bottom of the page, under **Enable policy** select **Save**.
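
If you script your policies with Microsoft Graph PowerShell, the same setting is a session control on the policy body. A sketch matching the one-hour example above, continuing the `$policy` hashtable from the earlier sketch:

```azurepowershell
# Sketch: add a one-hour sign-in frequency session control to the
# $policy body before calling New-MgIdentityConditionalAccessPolicy.
$policy.SessionControls = @{
    SignInFrequency = @{
        IsEnabled = $true
        Type      = 'hours'   # or 'days'
        Value     = 1
    }
}
```
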
-12. Under **Access controls** > **Grant**, select **Grant access**, **Require multi-factor authentication**, and then **Select**.
-13. Under **Access controls** > **Session**, select **Sign-in frequency**, set the value to the time you want between prompts, and then select **Select**. For example, setting the value to **1** and the unit to **Hours**, will require multifactor authentication if a connection is launched an hour after the last one.
-14. Confirm your settings and set **Enable policy** to **On**.
-15. Select **Create** to enable your policy.
+## Azure AD joined session host VMs
->[!NOTE]
->When you use the web client to sign in to Azure Virtual Desktop through your browser, the log will list the client app ID as a85cf173-4192-42f8-81fa-777a763e6e2c (Azure Virtual Desktop client). This is because the client app is internally linked to the server app ID where the conditional access policy was set.
+For connections to succeed, you must [disable the legacy per-user multi-factor authentication sign-in method](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md#mfa-sign-in-method-required). If you don't want to restrict signing in to strong authentication methods like Windows Hello for Business, you'll also need to [exclude the Azure Windows VM Sign-In app](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md#mfa-sign-in-method-required) from your Conditional Access policy.
## Next steps
virtual-desktop Troubleshoot Azure Ad Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-azure-ad-connections.md
If you come across an error saying **Your account is configured to prevent you f
If you can't sign in and keep receiving an error message that says your credentials are incorrect, first make sure you're using the right credentials. If you keep seeing error messages, ask yourself the following questions: -- Does your Conditional Access policy exclude multifactor authentication requirements for the Azure Windows VM sign-in cloud application?
+- Does your Conditional Access policy exclude multi-factor authentication requirements for the Azure Windows VM sign-in cloud application?
- Have you assigned the **Virtual Machine User Login** role-based access control (RBAC) permission to the VM or resource group for each user?
-If you answered "no" to either of these questions, follow the instructions in [Enable multifactor authentication](deploy-azure-ad-joined-vm.md#enabling-mfa-for-azure-ad-joined-vms) to reconfigure your multifactor authentication.
+If you answered "no" to either of these questions, follow the instructions in [Enforce Azure Active Directory Multi-Factor Authentication for Azure Virtual Desktop using Conditional Access](set-up-mfa.md#azure-ad-joined-session-host-vms) to reconfigure your multi-factor authentication.
> [!WARNING]
-> VM sign-ins don't support per-user enabled or enforced Azure AD multifactor authentication. If you try to sign in with multifactor authentication on a VM, you won't be able to sign in and will receive an error message.
+> VM sign-ins don't support per-user enabled or enforced Azure AD Multi-Factor Authentication. If you try to sign in with multi-factor authentication on a VM, you won't be able to sign in and will receive an error message.
-If you can access your Azure AD sign-in logs through Log Analytics, you can see if you've enabled multifactor authentication and which Conditional Access policy is triggering the event. The events shown are non-interactive user login events for the VM, which means the IP address will appear to come from the external IP address that your VM accesses Azure AD from.
+If you can access your Azure AD sign-in logs through Log Analytics, you can see if you've enabled multi-factor authentication and which Conditional Access policy is triggering the event. The events shown are non-interactive user login events for the VM, which means the IP address will appear to come from the external IP address that your VM accesses Azure AD from.
You can access your sign-in logs by running the following Kusto query:
If you come across an error saying **The logon attempt failed** on the Windows S
- You are on a device that is Azure AD-joined or hybrid Azure AD-joined to the same Azure AD tenant as the session host OR - You are on a device running Windows 10 2004 or later that is Azure AD registered to the same Azure AD tenant as the session host - The [PKU2U protocol is enabled](/windows/security/threat-protection/security-policy-settings/network-security-allow-pku2u-authentication-requests-to-this-computer-to-use-online-identities) on both the local PC and the session host-- [Per-user multifactor authentication is disabled](deploy-azure-ad-joined-vm.md#enabling-mfa-for-azure-ad-joined-vms) for the user account as it's not supported for Azure AD-joined VMs.
+- [Per-user multi-factor authentication is disabled](set-up-mfa.md#azure-ad-joined-session-host-vms) for the user account as it's not supported for Azure AD-joined VMs.
### The sign-in method you're trying to use isn't allowed
-If you come across an error saying **The sign-in method you're trying to use isn't allowed. Try a different sign-in method or contact your system administrator**, you have Conditional Access policies restricting access. Follow the instructions in [Enable multifactor authentication](deploy-azure-ad-joined-vm.md#enabling-mfa-for-azure-ad-joined-vms) to enable multifactor authentication for your Azure AD-joined VMs.
+If you come across an error saying **The sign-in method you're trying to use isn't allowed. Try a different sign-in method or contact your system administrator**, you have Conditional Access policies restricting access. Follow the instructions in [Enforce Azure Active Directory Multi-Factor Authentication for Azure Virtual Desktop using Conditional Access](set-up-mfa.md#azure-ad-joined-session-host-vms) to enforce Azure Active Directory Multi-Factor Authentication for your Azure AD-joined VMs.
## Web client
If you come across an error saying **Oops, we couldn't connect to NAME. Sign in
### We couldn't connect to the remote PC because of a security error
-If you come across an error saying **Oops, we couldn't connect to NAME. We couldn't connect to the remote PC because of a security error. If this keeps happening, ask your admin or tech support for help.**, you have Conditional Access policies restricting access. Follow the instructions in [Enable multifactor authentication](deploy-azure-ad-joined-vm.md#enabling-mfa-for-azure-ad-joined-vms) to enable multifactor authentication for your Azure AD-joined VMs.
+If you come across an error saying **Oops, we couldn't connect to NAME. We couldn't connect to the remote PC because of a security error. If this keeps happening, ask your admin or tech support for help.**, you have Conditional Access policies restricting access. Follow the instructions in [Enforce Azure Active Directory Multi-Factor Authentication for Azure Virtual Desktop using Conditional Access](set-up-mfa.md#azure-ad-joined-session-host-vms) to enforce Azure Active Directory Multi-Factor Authentication for your Azure AD-joined VMs.
## Android client
virtual-machines Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/quick-create-cli.md
az group delete --name myResourceGroup
## Next steps
-In this quickstart, you deployed a simple virtual machine, open a network port for web traffic, and installed a basic web server. To learn more about Azure virtual machines, continue to the tutorial for Linux VMs.
+In this quickstart, you deployed a simple virtual machine, opened a network port for web traffic, and installed a basic web server. To learn more about Azure virtual machines, continue to the tutorial for Linux VMs.
> [!div class="nextstepaction"]
virtual-machines Run Command Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/run-command-managed.md
> This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-The Run Command feature uses the virtual machine (VM) agent to scripts within an Azure Linux VM. You can use these scripts for general machine or application management. They can help you quickly diagnose and remediate VM access and network issues and get the VM back to a good state.
+The Run Command feature uses the virtual machine (VM) agent to run scripts within an Azure Linux VM. You can use these scripts for general machine or application management. They can help you quickly diagnose and remediate VM access and network issues and get the VM back to a good state.
-The *updated* managed Run Command uses the same VM agent channel to execute scripts and provides the following enhancements over the [original action orientated Run Command](run-command.md):
+The *updated* managed Run Command uses the same VM agent channel to execute scripts and provides the following enhancements over the [original action oriented Run Command](run-command.md):
- Support for updated Run Command through ARM deployment template
- Parallel execution of multiple scripts
- Sequential execution of scripts
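
For example, a managed Run Command is deployed as its own resource on the VM. A minimal sketch with `az vm run-command create` (the script content is arbitrary):

```azurecli-interactive
az vm run-command create --name "myRunCommand" --vm-name "myVM" --resource-group "myRG" --script "echo Hello World!"
```
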
az vm run-command list --vm-name "myVM" --resource-group "myRG"
This command will retrieve current execution progress, including latest output, start/end time, exit code, and terminal state of the execution. ```azurecli-interactive
-az vm run-command show --name "myRunCommand" --vm-name "myVM" --resource-group "myRG" ΓÇôexpand
+az vm run-command show --name "myRunCommand" --vm-name "myVM" --resource-group "myRG" --expand instanceView
``` ### Delete RunCommand resource from the VM
az vm run-command delete --name "myRunCommand" --vm-name "myVM" --resource-group
### Execute a script with the VM This command will deliver the script to the VM, execute it, and return the captured output.
-```powershell-interactive
-Set-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -Name "RunCommandName" ΓÇô Script "echo Hello World!"
+```azurepowershell-interactive
+Set-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -Location "EastUS" -RunCommandName "RunCommandName" -SourceScript "echo Hello World!"
``` ### List all deployed RunCommand resources on a VM This command will return a full list of previously deployed Run Commands along with their properties.
-```powershell-interactive
-Get-AzVMRunCommand AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM"
+```azurepowershell-interactive
+Get-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM"
``` ### Get execution status and results This command will retrieve current execution progress, including latest output, start/end time, exit code, and terminal state of the execution.
-```powershell-interactive
-Get-AzVMRunCommand AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -Name "RunCommandName" -Status
+```azurepowershell-interactive
+Get-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -RunCommandName "RunCommandName" -Expand instanceView
``` ### Delete RunCommand resource from the VM Remove the RunCommand resource previously deployed on the VM. If the script execution is still in progress, execution will be terminated.
-```powershell-interactive
-Remove-AzVMRunCommand AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -Name "RunCommandName"
+```azurepowershell-interactive
+Remove-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -RunCommandName "RunCommandName"
```

## REST API

To deploy a new Run Command, execute a PUT on the VM directly and specify a unique name for the Run Command instance.
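
A sketch of the request shape (the subscription ID is a placeholder, and the api-version shown is an assumption; check the Run Commands REST reference for the current one):

```
PUT https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM/runCommands/myRunCommand?api-version=2022-03-01

{
  "location": "eastus",
  "properties": {
    "source": {
      "script": "echo Hello World!"
    }
  }
}
```
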
virtual-machines Shared Image Galleries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/shared-image-galleries.md
To list all the Azure Compute Gallery resources across subscriptions that you ha
1. Select all the subscriptions under which you'd like to list all the resources. 1. Look for resources of the **Azure Compute Gallery** type.
+### [Azure CLI](#tab/azure-cli)
+To list all the Azure Compute Gallery resources across subscriptions that you have permissions to, use the following command in Azure CLI:

```azurecli
az account list -otsv --query "[].id" | xargs -n 1 az sig list --subscription
```
+### [Azure PowerShell](#tab/azure-powershell)
+
+To list all the Azure Compute Gallery resources across subscriptions that you have permissions to, use the following command in Azure PowerShell:
+
+```azurepowershell
+$params = @{
+ # Remember the current subscription context
+ Begin = { $currentContext = Get-AzContext }
+ # Switch to each subscription in the pipeline and list its galleries
+ Process = { $null = Set-AzContext -SubscriptionObject $_; Get-AzGallery }
+ # Restore the original subscription context
+ End = { $null = Set-AzContext -Context $currentContext }
+}
+
+Get-AzSubscription | ForEach-Object @params
+```
+++ For more information, see [List, update, and delete image resources](update-image-resources.md).

### Can I move my existing image to an Azure Compute Gallery?
Source region is the region in which your image version will be created, and tar
### How do I specify the source region while creating the image version?
-While creating an image version, you can use the **--location** tag in CLI and the **-Location** tag in PowerShell to specify the source region. Please ensure the managed image that you are using as the base image to create the image version is in the same location as the location in which you intend to create the image version. Also, make sure that you pass the source region location as one of the target regions when you create an image version.
+While creating an image version, you can use the **--location** argument in CLI and the **-Location** parameter in PowerShell to specify the source region. Ensure that the managed image you're using as the base image is in the same region where you intend to create the image version. Also, make sure that you pass the source region location as one of the target regions when you create an image version.
### How do I specify the number of image version replicas to be created in each region?
There are two ways you can specify the number of image version replicas to be cr
1. The regional replica count which specifies the number of replicas you want to create per region. 2. The common replica count which is the default per region count in case regional replica count is not specified.
-To specify the regional replica count, pass the location along with the number of replicas you want to create in that region: "South Central US=2".
+### [Azure CLI](#tab/azure-cli)
-If regional replica count is not specified with each location, then the default number of replicas will be the common replica count that you specified.
+To specify the regional replica count, pass the location along with the number of replicas you want to create in that region: "South Central US=2".
-To specify the common replica count in CLI, use the **--replica-count** argument in the `az sig image-version create` command.
+If regional replica count is not specified with each location, then the default number of replicas will be the common replica count that you specified.
+
+To specify the common replica count in Azure CLI, use the **--replica-count** argument in the `az sig image-version create` command.
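
For example, a sketch with placeholder resource names that creates two replicas in South Central US and uses the common count of one replica in East US:

```azurecli
az sig image-version create \
   --resource-group myGalleryRG \
   --gallery-name myGallery \
   --gallery-image-definition myImageDefinition \
   --gallery-image-version 1.0.0 \
   --replica-count 1 \
   --target-regions "southcentralus=2" "eastus" \
   --managed-image "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.Compute/images/myImage"
```
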
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+To specify the regional replica count, pass the location along with the number of replicas you want to create in that region, `@{Name = 'South Central US';ReplicaCount = 2}`, to the **-TargetRegion** parameter in the `New-AzGalleryImageVersion` command.
+
+If regional replica count is not specified with each location, then the default number of replicas will be the common replica count that you specified.
+
+To specify the common replica count in Azure PowerShell, use the **-ReplicaCount** parameter in the `New-AzGalleryImageVersion` command.
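
For example, a sketch with placeholder resource names (the source managed image ID is an assumption):

```azurepowershell
# Sketch: two replicas in South Central US, the common count of 1 elsewhere.
$targetRegions = @(
    @{Name = 'South Central US'; ReplicaCount = 2},
    @{Name = 'East US'}
)

New-AzGalleryImageVersion `
   -ResourceGroupName 'myGalleryRG' `
   -GalleryName 'myGallery' `
   -GalleryImageDefinitionName 'myImageDefinition' `
   -Name '1.0.0' `
   -Location 'South Central US' `
   -ReplicaCount 1 `
   -TargetRegion $targetRegions `
   -SourceImageId '/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.Compute/images/myImage'
```
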
++ ### Can I create the gallery in a different location than the one for the image definition and image version?
virtual-machines Nsg Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/nsg-quickstart-powershell.md
$httprule = New-AzNetworkSecurityRuleConfig `
-Access "Allow" ` -Protocol "Tcp" ` -Direction "Inbound" `
- -Priority "100" `
+ -Priority 100 `
-SourceAddressPrefix "Internet" ` -SourcePortRange * ` -DestinationAddressPrefix * `
- -DestinationPortRange 80
+ -DestinationPortRange "80"
```

Next, create your Network Security group with [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup) and assign the HTTP rule you just created as follows. The following example creates a Network Security Group named *myNetworkSecurityGroup*:
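
A minimal sketch of that step, reusing the `$httprule` object defined above:

```azurepowershell
# Sketch: create the NSG and attach the HTTP rule defined above.
New-AzNetworkSecurityGroup `
    -ResourceGroupName "myResourceGroup" `
    -Location "EastUS" `
    -Name "myNetworkSecurityGroup" `
    -SecurityRules $httprule
```
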
virtual-machines Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/quick-create-cli.md
Using the example below, you will be prompted to enter a password at the command
az vm create \ --resource-group myResourceGroup \ --name myVM \
- --image Win2019Datacenter \
+ --image Win2022AzureEditionCore \
--public-ip-sku Standard \ --admin-username azureuser ```
It takes a few minutes to create the VM and supporting resources. The following
Note your own `publicIpAddress` in the output from your VM. This address is used to access the VM in the next steps.
-## Open port 80 for web traffic
+## Install web server
-By default, only RDP connections are opened when you create a Windows VM in Azure. Use [az vm open-port](/cli/azure/vm) to open TCP port 80 for use with the IIS web server:
+To see your VM in action, install the IIS web server.
```azurecli-interactive
-az vm open-port --port 80 --resource-group myResourceGroup --name myVM
-```
-
-## Connect to virtual machine
-
-Use the following command to create a remote desktop session from your local computer. Replace the IP address with the public IP address of your VM. When prompted, enter the credentials used when the VM was created:
-
-```powershell
-mstsc /v:publicIpAddress
+az vm run-command invoke -g myResourceGroup -n myVM --command-id RunPowerShellScript --scripts "Install-WindowsFeature -name Web-Server -IncludeManagementTools"
```
-## Install web server
+## Open port 80 for web traffic
-To see your VM in action, install the IIS web server. Open a PowerShell prompt on the VM and run the following command:
+By default, only RDP connections are opened when you create a Windows VM in Azure. Use [az vm open-port](/cli/azure/vm) to open TCP port 80 for use with the IIS web server:
-```powershell
-Install-WindowsFeature -name Web-Server -IncludeManagementTools
+```azurecli-interactive
+az vm open-port --port 80 --resource-group myResourceGroup --name myVM
```
-When done, close the RDP connection to the VM.
- ## View the web server in action With IIS installed and port 80 now open on your VM from the Internet, use a web browser of your choice to view the default IIS welcome page. Use the public IP address of your VM obtained in a previous step. The following example shows the default IIS web site:
virtual-machines Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/quick-create-powershell.md
To open the Cloud Shell, just select **Try it** from the upper right corner of a
Create an Azure resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). A resource group is a logical container into which Azure resources are deployed and managed. ```azurepowershell-interactive
-New-AzResourceGroup -Name myResourceGroup -Location 'EastUS'
+New-AzResourceGroup -Name 'myResourceGroup' -Location 'EastUS'
``` ## Create virtual machine
New-AzVm `
-OpenPorts 80,3389 ```
-## Connect to virtual machine
-
-After the deployment has completed, RDP to the VM. To see your VM in action, the IIS web server is then installed.
-
-To see the public IP address of the VM, use the [Get-AzPublicIpAddress](/powershell/module/az.network/get-azpublicipaddress) cmdlet:
-
-```azurepowershell-interactive
-Get-AzPublicIpAddress -ResourceGroupName 'myResourceGroup' | Select-Object -Property 'IpAddress'
-```
-
-Use the following command to create a remote desktop session from your local computer. Replace `publicIpAddress` with the public IP address of your VM.
-
-```powershell
-mstsc /v:publicIpAddress
-```
-
-In the **Windows Security** window, select **More choices**, and then select **Use a different account**. Type the username as **localhost**\\*username*, enter password you created for the virtual machine, and then click **OK**.
-
-You may receive a certificate warning during the sign-in process. Click **Yes** or **Continue** to create the connection
-
## Install web server

To see your VM in action, install the IIS web server. Use `Invoke-AzVMRunCommand` to run the installation command on the VM remotely:
-```powershell
-Install-WindowsFeature -Name Web-Server -IncludeManagementTools
+```azurepowershell-interactive
+Invoke-AzVMRunCommand -ResourceGroupName 'myResourceGroup' -VMName 'myVM' -CommandId 'RunPowerShellScript' -ScriptString 'Install-WindowsFeature -Name Web-Server -IncludeManagementTools'
```
-When done, close the RDP connection to the VM.
+The `-ScriptString` parameter requires version `4.27.0` or later of the `Az.Compute` module.
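
To browse to the site in the next step, you'll need the VM's public IP address. One way to retrieve it is with the [Get-AzPublicIpAddress](/powershell/module/az.network/get-azpublicipaddress) cmdlet:

```azurepowershell-interactive
Get-AzPublicIpAddress -ResourceGroupName 'myResourceGroup' | Select-Object -Property 'IpAddress'
```
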
+ ## View the web server in action
With IIS installed and port 80 now open on your VM from the Internet, use a web
When no longer needed, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) cmdlet to remove the resource group, VM, and all related resources: ```azurepowershell-interactive
-Remove-AzResourceGroup -Name myResourceGroup
+Remove-AzResourceGroup -Name 'myResourceGroup'
``` ## Next steps
virtual-machines Automation Configure Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-control-plane.md
The control plane for the [SAP deployment automation framework on Azure](automat
## Deployer
-The [deployer](automation-deployment-framework.md#deployment-components) is the execution engine of the [SAP automation framework](automation-deployment-framework.md). It is a pre-configured virtual machine (VM) that is used for executing Terraform and Ansible commands.
+The [deployer](automation-deployment-framework.md#deployment-components) is the execution engine of the [SAP automation framework](automation-deployment-framework.md). It's a pre-configured virtual machine (VM) that is used for executing Terraform and Ansible commands.
The configuration of the deployer is performed in a Terraform tfvars variable file.
The configuration of the deployer is performed in a Terraform tfvars variable fi
The table below contains the Terraform parameters. These parameters need to be entered manually if you're not using the deployment scripts.

> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | -- | - | - |
+> | Variable | Description | Type |
+> | -- | | - |
> | `tfstate_resource_id` | Azure resource identifier for the storage account in the SAP Library that contains the Terraform state files | Required |
-### Generic Parameters
+### Environment Parameters
-The table below contains the parameters that define the resource group and the resource naming.
+The table below contains the parameters that define the resource naming.
> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | -- | - | - |
-> | `environment` | A five-character identifier for the workload zone. For example, `PROD` for a production environment and `NP` for a non-production environment. | Mandatory |
-> | `location` | The Azure region in which to deploy. | Required |
-> | `resource_group_name` | Name of the resource group to be created | Optional |
+> | Variable | Description | Type | Notes |
+> | -- | - | - | - |
+> | `environment` | Identifier for the control plane (max 5 chars) | Mandatory | For example, `PROD` for a production environment and `NP` for a non-production environment. |
+> | `location` | The Azure region in which to deploy. | Required | Use lower case |
+> | `name_override_file` | Name override file | Optional | See [Custom naming](automation-naming-module.md) |
+
+### Resource Group
+
+The table below contains the parameters that define the resource group.
+
+> [!div class="mx-tdCol2BreakAll "]
+> | Variable | Description | Type |
+> | -- | -- | - |
+> | `resource_group_name` | Name of the resource group to be created | Optional |
> | `resource_group_arm_id` | Azure resource identifier for an existing resource group | Optional |
+> | `resourcegroup_tags` | Tags to be associated with the resource group | Optional |
+ ### Network Parameters
The table below contains the networking parameters.
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | | - | - | |
-> | `management_network_name` | The logical name of the network (DEV-WEEU-MGMT01-INFRASTRUCTURE) | Required | |
-> | `management_network_arm_id` | The Azure resource identifier for the virtual network | Optional | For existing environment deployments |
-> | `management_network_address_space` | The address range for the virtual network | Mandatory | For new environment deployments |
+> | `management_network_name` | The name of the VNet into which the deployer will be deployed | Optional | For green field deployments. |
+> | `management_network_logical_name` | The logical name of the network (DEV-WEEU-MGMT01-INFRASTRUCTURE) | Required | |
+> | `management_network_arm_id` | The Azure resource identifier for the virtual network | Optional | For brown field deployments. |
+> | `management_network_address_space` | The address range for the virtual network | Mandatory | For green field deployments. |
+> | | | | |
> | `management_subnet_name` | The name of the subnet | Optional | |
-> | `management_subnet_address_prefix` | The address range for the subnet | Mandatory | For new environment deployments |
-> | `management_subnet_arm_id` | The Azure resource identifier for the subnet | Mandatory | For existing environment deployments |
+> | `management_subnet_address_prefix` | The address range for the subnet | Mandatory | For green field deployments. |
+> | `management_subnet_arm_id` | The Azure resource identifier for the subnet | Mandatory | For brown field deployments. |
> | `management_subnet_nsg_name` | The name of the Network Security Group | Optional | |
-> | `management_subnet_nsg_arm_id` | The Azure resource identifier for the Network Security Group | Mandatory | Mandatory for existing environment deployments |
+> | `management_subnet_nsg_arm_id` | The Azure resource identifier for the Network Security Group | Mandatory | For brown field deployments. |
> | `management_subnet_nsg_allowed_ips` | Range of allowed IP addresses to add to Azure Firewall | Optional | |
-> | `management_firewall_subnet_arm_id` | The Azure resource identifier for the Network Security Group | Mandatory | For existing environment deployments |
-> | `management_firewall_subnet_address_prefix` | The address range for the subnet | Mandatory | For new environment deployments |
+> | | | | |
+> | `management_firewall_subnet_arm_id` | The Azure resource identifier for the Firewall subnet | Mandatory | For brown field deployments. |
+> | `management_firewall_subnet_address_prefix` | The address range for the subnet | Mandatory | For green field deployments. |
+> | | | | |
+> | `management_bastion_subnet_arm_id` | The Azure resource identifier for the Bastion subnet | Mandatory | For brown field deployments. |
+> | `management_bastion_subnet_address_prefix` | The address range for the subnet | Mandatory | For green field deployments. |
### Deployer Virtual Machine Parameters The table below contains the parameters related to the deployer virtual machine. > [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | - | - | - |
-> | `deployer_size` | Defines the Virtual machine SKU to use, for example Standard_D4s_v3 | Optional |
-> | `deployer_image` | Defines the Virtual machine image to use, see below | Optional |
-> | `deployer_disk_type` | Defines the disk type, for example Premium_LRS | Optional |
-> | `deployer_use_DHCP` | Controls if Azure subnet provided IP addresses should be used (dynamic) true | Optional |
-> | `deployer_private_ip_address` | Defines the Private IP address to use | Optional |
-> | `deployer_enable_public_ip` | Defined if the deployer has a public IP | Optional |
+> | Variable | Description | Type |
+> | - | -- | - |
+> | `deployer_size` | Defines the Virtual machine SKU to use, for example Standard_D4s_v3 | Optional |
+> | `deployer_count` | Defines the number of Deployers | Optional |
+> | `deployer_image` | Defines the Virtual machine image to use, see below | Optional |
+> | `deployer_disk_type` | Defines the disk type, for example Premium_LRS | Optional |
+> | `deployer_use_DHCP` | Controls if Azure subnet provided IP addresses should be used (dynamic) true | Optional |
+> | `deployer_private_ip_address` | Defines the Private IP address to use | Optional |
+> | `deployer_enable_public_ip` | Defines if the deployer has a public IP | Optional |
+> | `auto_configure_deployer` | Defines whether the deployer will be configured with the required software (Terraform and Ansible) | Optional |
+ The Virtual Machine image is defined using the following structure: ```python
The table below defines the parameters used for defining the Key Vault informati
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | > | | | - |
-> | `user_keyvault_id` | Azure resource identifier for the user key vault | Optional |
-> | `spn_keyvault_id` | Azure resource identifier for the user key vault containing the SPN details | Optional |
-> | `deployer_private_key_secret_name` | The Azure Key Vault secret name for the deployer private key | Optional |
-> | `deployer_public_key_secret_name` | The Azure Key Vault secret name for the deployer public key | Optional |
-> | `deployer_username_secret_name` | The Azure Key Vault secret name for the deployer username | Optional |
-> | `deployer_password_secret_name` | The Azure Key Vault secret name for the deployer password | Optional |
+> | `user_keyvault_id` | Azure resource identifier for the user key vault | Optional |
+> | `spn_keyvault_id` | Azure resource identifier for the user key vault containing the SPN details | Optional |
+> | `deployer_private_key_secret_name` | The Azure Key Vault secret name for the deployer private key | Optional |
+> | `deployer_public_key_secret_name` | The Azure Key Vault secret name for the deployer public key | Optional |
+> | `deployer_username_secret_name` | The Azure Key Vault secret name for the deployer username | Optional |
+> | `deployer_password_secret_name` | The Azure Key Vault secret name for the deployer password | Optional |
### Other parameters > [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | | - | -- |
-> | `firewall_deployment` | Boolean flag controlling if an Azure firewall is to be deployed | Mandatory |
-> | `bastion_deployment` | Boolean flag controlling if Azure bastion host is to be deployed | Mandatory |
-> | `enable_purge_control_for_keyvaults` | Boolean flag controlling if purge control is enabled on the Key Vault. Use only for test deployments | Optional |
-> | `use_private_endpoint` | Boolean flag controlling if private endpoints are used. | Optional |
+> | Variable | Description | Type | Notes |
+> | | - | -- | -- |
+> | `firewall_deployment` | Boolean flag controlling if an Azure firewall is to be deployed | Optional | |
+> | `bastion_deployment` | Boolean flag controlling if Azure bastion host is to be deployed | Optional | |
+> | `enable_purge_control_for_keyvaults` | Boolean flag controlling if purge control is enabled on the Key Vault. | Optional | Use only for test deployments |
+> | `use_private_endpoint` | Boolean flag controlling if private endpoints are used. | Optional | Recommended |
### Example parameters file for deployer (required parameters only)
-```bash
+```terraform
# The environment value is a mandatory field, it is used for partitioning the environments, for example (PROD and NP)
environment="MGMT"

# The location/region value is a mandatory field, it is used to control where the resources are deployed
location="westeurope"

# management_network_address_space is the address space for management virtual network
management_network_address_space="10.10.20.0/25"

# management_subnet_address_prefix is the address prefix for the management subnet
management_subnet_address_prefix="10.10.20.64/28"

# management_firewall_subnet_address_prefix is the address prefix for the firewall subnet
management_firewall_subnet_address_prefix="10.10.20.0/26"
-deployer_enable_public_ip=true
+# management_bastion_subnet_address_prefix is a mandatory parameter if bastion is deployed and if the subnets are not defined in the workload or if existing subnets are not used
+management_bastion_subnet_address_prefix = "10.10.20.128/26"
+
+deployer_enable_public_ip=false
+ firewall_deployment=true+
+bastion_deployment=true
```
The table below contains the Terraform parameters, these parameters need to be
> | Variable | Description | Type |
> | -- | - | - |
> | `deployer_tfstate_key` | The state file name for the deployer | Required |
-### Generic Parameters
+### Environment Parameters
-The table below contains the parameters that define the resource group and the resource naming.
+The table below contains the parameters that define the resource naming.
+
+> [!div class="mx-tdCol2BreakAll "]
+> | Variable | Description | Type | Notes |
+> | -- | - | - | - |
+> | `environment` | Identifier for the control plane (max 5 chars) | Mandatory | For example, `PROD` for a production environment and `NP` for a non-production environment. |
+> | `location` | The Azure region in which to deploy. | Required | Use lower case |
+> | `name_override_file` | Name override file | Optional | See [Custom naming](automation-naming-module.md) |
+### Resource Group
+
+The table below contains the parameters that define the resource group.
> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | -- | - | - |
-> | `environment` | A five-character identifier for the workload zone. For example, `PROD` for a production environment and `NP` for a non-production environment. | Mandatory |
-> | `location` | The Azure region in which to deploy. | Required |
-> | `resource_group_name` | Name of the resource group to be created | Optional |
+> | Variable | Description | Type |
+> | -- | -- | - |
+> | `resource_group_name` | Name of the resource group to be created | Optional |
> | `resource_group_arm_id` | Azure resource identifier for an existing resource group | Optional |
+> | `resourcegroup_tags` | Tags to be associated with the resource group | Optional |
### Deployer Parameters

The table below contains the parameters that define the resource group and the resource naming.

> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | | - | -- |
-> | `deployer_environment` | A five-character identifier for the workload zone. For example, `PROD` for a production environment and `NP` for a non-production environment. | Mandatory |
-> | `deployer_location` | The Azure region in which to deploy. | Mandatory |
-> | `deployer_vnet` | The logical name for the deployer VNet | Mandatory |
+> | Variable | Description | Type | Notes |
+> | | - | | - |
+> | `deployer_environment` | Identifier for the control plane (max 5 chars) | Mandatory | For example, `PROD` for a production environment and `NP` for a non-production environment. |
+> | `deployer_location` | The Azure region in which to deploy. | Mandatory | |
+> | `deployer_vnet` | The logical name for the deployer VNet | Mandatory | |
### SAP Installation media storage account
The table below contains the parameters that define the resource group and the resource naming.
### Example parameters file for sap library (required parameters only)
-```bash
+```terraform
# The environment value is a mandatory field, it is used for partitioning the environments, for example (PROD and NP)
environment="MGMT"

# The location/region value is a mandatory field, it is used to control where the resources are deployed
location="westeurope"

# The deployer_environment value is a mandatory field, it is used for identifying the deployer
deployer_environment="MGMT"

# The deployer_location value is a mandatory field, it is used for identifying the deployer
deployer_location="westeurope"

# The deployer_vnet value is a mandatory field, it is used for identifying the deployer
deployer_vnet="DEP00"
```
-## Next step
+## Next steps
> [!div class="nextstepaction"]
> [Configure SAP system](automation-configure-system.md)
virtual-machines Automation Configure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-devops.md
# Use SAP Deployment Automation Framework from Azure DevOps Services
+Azure DevOps streamlines the deployment process by providing pipelines that perform the infrastructure deployment as well as the configuration and SAP installation activities.
You can use Azure Repos to store your configuration files and Azure Pipelines to deploy and configure the infrastructure and the SAP application.

## Sign up for Azure DevOps Services

To use Azure DevOps Services, you'll need an Azure DevOps organization. An organization is used to connect groups of related projects. Use your work or school account to automatically connect your organization to your Azure Active Directory (Azure AD). To create an account, open [Azure DevOps](https://azure.microsoft.com/services/devops/) and either _sign-in_ or create a new account.
Start by importing the SAP Deployment Automation Framework GitHub repository into Azure Repos.
Navigate to the Repositories section and choose _Import a repository_. Import the 'https://github.com/Azure/sap-automation.git' repository into Azure DevOps. For more info, see [Import a repository](/azure/devops/repos/git/import-git-repository?view=azure-devops&preserve-view=true).
-If you are unable to import a repository, you can create the 'sap-automation' repository and manually import the content from the SAP Deployment Automation Framework GitHub repository to it.
+If you're unable to import a repository, you can create the 'sap-automation' repository and manually import the content from the SAP Deployment Automation Framework GitHub repository to it.
### Create the repository for manual import
Clone the repository to a local folder by clicking the _Clone_ button in the Files view.
### Manually importing the repository content using a local clone
-In case you were not able to import the content from the SAP Deployment Automation Framework GitHub repository you can download the content manually and add it to the folder of your local clone of the Azure DevOps repository.
+You can also download the content from the SAP Deployment Automation Framework repository manually and add it to your local clone of the Azure DevOps repository.
Navigate to 'https://github.com/Azure/SAP-automation' repository and download the repository content as a ZIP file by clicking the _Code_ button and choosing _Download ZIP_.
Open the local folder in Visual Studio Code; you should see that there are changes to commit.
:::image type="content" source="./media/automation-devops/automation-vscode-changes.png" alt-text="Picture showing that source code has changed":::

Select the source control icon and provide a message about the change, for example: "Import from GitHub" and press Ctrl+Enter to commit the changes. Next select the _Sync Changes_ button to synchronize the changes back to the repository.

### Create configuration root folder
-Create a top level folder called 'WORKSPACES', this folder will be the root folder for all the SAP deployment configuration files. Create the following folders in the 'WORKSPACES' folder: 'DEPLOYER', 'LIBRARY', 'LANDSCAPE' and 'SYSTEM'.
+> [!IMPORTANT]
+> To ensure that your configuration files aren't overwritten by changes in the SAP Deployment Automation Framework, store them in a separate folder hierarchy.
+
-Optionally you may copy the sample configuration files from the 'samples/WORKSPACES' folders to the WORKSPACES folder you just created, this will allow you to experiment with sample deployments.
+Create a top-level folder called 'WORKSPACES'; this folder will be the root folder for all the SAP deployment configuration files. Create the following folders in the 'WORKSPACES' folder: 'DEPLOYER', 'LIBRARY', 'LANDSCAPE' and 'SYSTEM'. These will contain the configuration files for the different components of the SAP Deployment Automation Framework.
-Push the changes to Azure DevOps repos by selecting the source control icon and providing a message about the change, for example: "Import of sample configurations" and press Cntr-Enter to commit the changes. Next select the _Sync Changes_ button to synchronize the changes back to the repository.
+Optionally, you may copy the sample configuration files from the 'samples/WORKSPACES' folder to the WORKSPACES folder you created; this will allow you to experiment with sample deployments.
+
+Push the changes back to the repository by selecting the source control icon and providing a message about the change, for example: "Import of sample configurations" and press Ctrl+Enter to commit the changes. Next select the _Sync Changes_ button to synchronize the changes back to the repository.
## Create Azure Pipelines
The pipelines use a custom task to run Ansible. The custom task can be installed
The pipelines use a custom task to perform cleanup activities post deployment. The custom task can be installed from [Post Build Cleanup](https://marketplace.visualstudio.com/items?itemName=mspremier.PostBuildCleanup). Install it to your Azure DevOps organization before running the pipelines.
+## Preparations for self-hosted agent
++
+1. Create an Agent Pool by navigating to the Organizational Settings and selecting _Agent Pools_ from the Pipelines section. Click the _Add Pool_ button and choose Self-hosted as the pool type. Name the pool to align with the workload zone environment, for example `DEV-WEEU-POOL`. Ensure _Grant access permission to all pipelines_ is selected and create the pool using the _Create_ button.
+
+1. Sign in with the user account you plan to use in your Azure DevOps organization (https://dev.azure.com).
+
+1. From your home page, open your user settings, and then select _Personal access tokens_.
+
+ :::image type="content" source="./media/automation-devops/automation-select-personal-access-tokens.jpg" alt-text="Diagram showing the creation of the Personal Access Token (PAT).":::
+
+1. Create a personal access token. Ensure that _Read & manage_ is selected for _Agent Pools_ and _Read & write_ is selected for _Code_. Write down the created token value.
+
+ :::image type="content" source="./media/automation-devops/automation-new-pat.png" alt-text="Diagram showing the attributes of the Personal Access Token (PAT).":::
## Variable definitions

The deployment pipelines are configured to use a set of predefined parameter values. In Azure DevOps the variables are defined using variable groups.

### Common variables

There's a set of common variables that are used by all the deployment pipelines. These variables are stored in a variable group called 'SDAF-General'. Create a new variable group 'SDAF-General' using the Library page in the Pipelines section. Add the following variables:
-| Variable | Value | Notes |
-| - | | - |
-| `ANSIBLE_HOST_KEY_CHECKING` | false | |
-| Deployment_Configuration_Path | WORKSPACES | For testing the sample configuration use 'samples/WORKSPACES' instead of WORKSPACES. |
-| Branch | main | |
-| S-Username | `<SAP Support user account name>` | |
-| S-Password | `<SAP Support user password>` | Change variable type to secret by clicking the lock icon |
-| `advice.detachedHead` | false | |
-| `skipComponentGovernanceDetection` | true | |
-| `tf_version` | 1.1.7 | The Terraform version to use, see [Terraform download](https://www.terraform.io/downloads) |
+| Variable | Value | Notes |
+| - | | - |
+| `ANSIBLE_HOST_KEY_CHECKING` | false | |
+| Deployment_Configuration_Path | WORKSPACES | For testing the sample configuration use 'samples/WORKSPACES' instead of WORKSPACES. |
+| Branch | main | |
+| S-Username | `<SAP Support user account name>` | |
+| S-Password | `<SAP Support user password>` | Change variable type to secret by clicking the lock icon. |
+| `PAT` | `<Personal Access Token>` | Use the Personal Access Token defined in the previous step. |
+| `POOL` | `<Agent Pool name>` | Use the Agent pool defined in the previous step. |
+| `advice.detachedHead` | false | |
+| `skipComponentGovernanceDetection` | true | |
+| `tf_version` | 1.1.7 | The Terraform version to use, see [Terraform download](https://www.terraform.io/downloads) |
Save the variables.
As each environment may have different deployment credentials, you'll need to create a variable group per environment.
Create a new variable group 'SDAF-MGMT' for the control plane environment using the Library page in the Pipelines section. Add the following variables:
-| Variable | Value | Notes |
-| | -- | -- |
-| Agent | 'Azure Pipelines' or the name of the agent pool | Note, this pool will be created in a later step. |
-| ARM_CLIENT_ID | Enter the Service principal application id. | |
-| ARM_CLIENT_SECRET | Enter the Service principal password. | Change variable type to secret by clicking the lock icon |
-| ARM_SUBSCRIPTION_ID | Enter the target subscription id. | |
-| ARM_TENANT_ID | Enter the Tenant id for the service principal. | |
-| AZURE_CONNECTION_NAME | Previously created connection name | |
-| sap_fqdn | SAP Fully Qualified Domain Name, for example sap.contoso.net | Only needed if Private DNS isn't used. |
-| FENCING_SPN_ID | Enter the service principal application id for the fencing agent. | Required for highly available deployments |
-| FENCING_SPN_PWD | Enter the service principal password for the fencing agent. | Required for highly available deployments |
-| FENCING_SPN_TENANT | Enter the service principal tenant id for the fencing agent. | Required for highly available deployments |
+| Variable | Value | Notes |
+| | | -- |
+| Agent | 'Azure Pipelines' or the name of the agent pool | Note, this pool will be created in a later step. |
+| ARM_CLIENT_ID | Enter the Service principal application ID. | |
+| ARM_CLIENT_SECRET | Enter the Service principal password. | Change variable type to secret by clicking the lock icon |
+| ARM_SUBSCRIPTION_ID | Enter the target subscription ID. | |
+| ARM_TENANT_ID | Enter the Tenant ID for the service principal. | |
+| AZURE_CONNECTION_NAME | Previously created connection name. | |
+| sap_fqdn | SAP Fully Qualified Domain Name, for example 'sap.contoso.net'. | Only needed if Private DNS isn't used. |
+| FENCING_SPN_ID | Enter the service principal application ID for the fencing agent. | Required for highly available deployments. |
+| FENCING_SPN_PWD | Enter the service principal password for the fencing agent. | Required for highly available deployments. |
+| FENCING_SPN_TENANT | Enter the service principal tenant ID for the fencing agent. | Required for highly available deployments. |
Save the variables.
Enter a Service connection name, for instance 'Connection to MGMT subscription'
You must use the Deployer as a [self-hosted agent for Azure DevOps](/azure/devops/pipelines/agents/v2-linux) to perform the Ansible configuration activities. As a one-time step, you must register the Deployer as a self-hosted agent.
-### Prerequisites
-
-1. Connect to your Azure DevOps instance Sign-in to [Azure DevOps](https://dev.azure.com). Navigate to the Project you want to connect to and note the URL to the Azure DevOps project.
-
-1. Create an Agent Pool by navigating to the Organizational Settings and selecting _Agent Pools_ from the Pipelines section. Click the _Add Pool_ button and choose Self-hosted as the pool type. Name the pool to align with the workload zone environment, for example `DEV-WEEU-POOL`. Ensure _Grant access permission to all pipelines_ is selected and create the pool using the _Create_ button.
-
-1. Sign in with the user account you plan to use in your Azure DevOps organization (https://dev.azure.com).
-
-1. From your home page, open your user settings, and then select _Personal access tokens_.
-
- :::image type="content" source="./media/automation-devops/automation-select-personal-access-tokens.jpg" alt-text="Diagram showing the creation of the Personal Access Token (PAT).":::
-
-1. Create a personal access token. Ensure that _Read & manage_ is selected for _Agent Pools_ and _Read & write_ is selected for _Code_. Write down the created token value.
-
- :::image type="content" source="./media/automation-devops/automation-new-pat.png" alt-text="Diagram showing the attributes of the Personal Access Token (PAT).":::
## Deploy the Control Plane
The agent will now be configured and started.
## Next step

> [!div class="nextstepaction"]
-> [DevOps Hands on Lab](automation-devops-tutorial.md)
+> [DevOps hands on lab](automation-devops-tutorial.md)
virtual-machines Automation Configure System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-system.md
Configuration for the [SAP deployment automation framework on Azure](automation-deployment-framework.md) happens through parameter files. You provide information about your SAP system properties in a tfvars file, which the automation framework uses for deployment.
-The automation supports both creating resources (greenfield deployment) or using existing resources (brownfield deployment).
+The automation supports both creating resources (green field deployment) and using existing resources (brown field deployment).
-For the greenfield scenario, the automation defines default names for resources, however some resource names may be defined in the tfvars file.
+For the green field scenario, the automation defines default names for resources; however, some resource names may be defined in the tfvars file.
For the brownfield scenario, the Azure resource identifiers for the resources must be specified.
The automation framework can be used to deploy the following SAP architectures:
### Standalone
-In the Standalone architecture all the SAP roles are installed on a single server.
+In the Standalone architecture, all the SAP roles are installed on a single server.
+ To configure this topology, define the database tier values and set `enable_app_tier_deployment` to false.
To configure this topology, define the database tier values and define `scs_serv
### High Availability
-The Distributed (Highly Available) deployment is similar to the Distributed architecture but either the database or SAP Central Services are both highly available using two virtual machines each with Pacemaker clusters.
+The Distributed (Highly Available) deployment is similar to the Distributed architecture. In this deployment, the database and/or SAP Central Services can be made highly available using two virtual machines, each with a Pacemaker cluster.
To configure this topology, define the database tier values and set `database_high_availability` to true. Set `scs_server_count = 1`, `scs_high_availability = true`, and `application_server_count >= 1`, as shown in the sketch below.
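A minimal sketch of the high availability settings in the system tfvars file, with illustrative values:

```terraform
# Highly available database tier (two VMs in a Pacemaker cluster)
database_high_availability = true

# Highly available Central Services with at least one application server
scs_server_count         = 1
scs_high_availability    = true
application_server_count = 2
```

## Environment parameters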
-The table below contains the parameters that define the environment settings and the resource naming.
+The table below contains the parameters that define the environment settings.
> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | -- | -- | - |
-> | `environment` | A five-character identifier for the workload zone. For example, `PROD` for a production environment and `NP` for a non-production environment. | Mandatory |
-> | `location` | The Azure region in which to deploy. | Required |
-> | `custom_prefix` | Specifies the custom prefix used in the resource naming | Optional |
-> | `use_prefix` | Controls if the resource naming includes the prefix, DEV-WEEU-SAP01-X00_xxxx | Optional |
-> | 'name_override_file' | Name override file | Optional |
+> | Variable | Description | Type | Notes |
+> | -- | -- | - | - |
+> | `environment` | Identifier for the workload zone (max 5 chars) | Mandatory | For example, `PROD` for a production environment and `NP` for a non-production environment. |
+> | `location` | The Azure region in which to deploy. | Required | |
+> | `custom_prefix` | Specifies the custom prefix used in the resource naming | Optional | |
+> | `use_prefix` | Controls if the resource naming includes the prefix | Optional | DEV-WEEU-SAP01-X00_xxxx |
+> | `name_override_file` | Name override file | Optional | See [Custom naming](automation-naming-module.md) |
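
Following the pattern of the "minimum required" definitions used elsewhere in this documentation, a minimal environment definition might look like this sketch (values are illustrative):

```terraform
environment = "DEV"
location    = "westeurope"
```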
## Resource group parameters
The database tier defines the infrastructure for the database tier; supported database backends are `HANA`, `DB2`, `ORACLE`, `ASE`, and `SQLSERVER`.
> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type | Notes |
-> | -- | --| -- | |
-> | `database_sid` | Defines the database SID | Required | |
-> | `database_platform` | Defines the database backend | Required | |
-> | `database_high_availability` | Defines if the database tier is deployed highly available | Optional | See [High availability configuration](automation-configure-system.md#high-availability-configuration) |
-> | `database_server_count` | Defines the number of database servers | Optional | Default value is 1 |
-> | `database_vm_zones` | Defines the Availability Zones | Optional | |
-> | `database_size` | Defines the database sizing information | Required | See [Custom Sizing](automation-configure-extra-disks.md) |
-> | `db_disk_sizes_filename` | Defines the custom database sizing | Optional | See [Custom Sizing](automation-configure-extra-disks.md) |
-> | `database_vm_use_DHCP` | Controls if Azure subnet provided IP addresses should be used (dynamic) true | Optional | |
-> | `database_vm_db_nic_ips` | Defines the static IP addresses for the database servers (database subnet) | Optional | |
-> | `database_vm_admin_nic_ips` | Defines the static IP addresses for the database servers (admin subnet) | Optional | |
-> | `database_vm_image` | Defines the Virtual machine image to use, see below | Optional | |
-> | `database_vm_authentication_type` | Defines the authentication type for the database virtual machines (key/password) | Optional | |
-> | `database_no_avset` | Controls if the database virtual machines are deployed without availability sets | Optional | default is false |
-> | `database_no_ppg` | Controls if the database servers will not be placed in a proximity placement group | Optional | default is false |
-> | `database_vm_avset_arm_ids` | Defines the existing availability sets Azure resource IDs | Optional | Primarily used together with ANF pinning|
-> | `hana_dual_nics` | Controls if the HANA database servers will have dual network interfaces | Optional | default is true |
+> | Variable | Description | Type | Notes |
+> | - | -- | -- | |
+> | `database_sid` | Defines the database SID. | Required | |
+> | `database_platform` | Defines the database backend. | Required | Supported values are `HANA`, `DB2`, `ORACLE`, `ASE`, `SQLSERVER`, `NONE` |
+> | `database_high_availability` | Defines if the database tier is deployed highly available. | Optional | See [High availability configuration](automation-configure-system.md#high-availability-configuration) |
+> | `database_server_count` | Defines the number of database servers. | Optional | Default value is 1 |
+> | `database_vm_zones` | Defines the Availability Zones for the database servers. | Optional | |
+> | `database_size` | Defines the database sizing information. | Required | See [Custom Sizing](automation-configure-extra-disks.md) |
+> | `db_disk_sizes_filename` | Defines the custom database sizing. | Optional | See [Custom Sizing](automation-configure-extra-disks.md) |
+> | `database_vm_use_DHCP` | Controls if Azure subnet provided IP addresses should be used. | Optional | |
+> | `database_vm_db_nic_ips` | Defines the IP addresses for the database servers (database subnet). | Optional | |
+> | `database_vm_db_nic_secondary_ips` | Defines the secondary IP addresses for the database servers (database subnet). | Optional | |
+> | `database_vm_admin_nic_ips` | Defines the IP addresses for the database servers (admin subnet). | Optional | |
+> | `database_vm_image` | Defines the Virtual machine image to use, see below. | Optional | |
+> | `database_vm_authentication_type` | Defines the authentication type (key/password). | Optional | |
+> | `database_no_avset` | Controls if the database virtual machines are deployed without availability sets. | Optional | default is false |
+> | `database_no_ppg` | Controls if the database servers will not be placed in a proximity placement group. | Optional | default is false |
+> | `database_vm_avset_arm_ids` | Defines the existing availability sets Azure resource IDs. | Optional | Primarily used together with ANF pinning|
+> | `hana_dual_nics` | Controls if the HANA database servers will have dual network interfaces. | Optional | default is true |
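
A minimal database tier definition, as a sketch with illustrative values (the sizing name is hypothetical; see [Custom Sizing](automation-configure-extra-disks.md) for the sizing file format):

```terraform
database_sid      = "HDB"
database_platform = "HANA"
database_size     = "Demo"  # hypothetical sizing name
```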
The Virtual Machine and the operating system image are defined using the following structure:
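As a sketch, with the field names mirroring the image structure used elsewhere in this document (the publisher/offer/SKU values are illustrative):

```terraform
database_vm_image = {
  os_type         = ""          # left empty to derive from the image
  source_image_id = ""          # set only when using a custom image
  publisher       = "SUSE"
  offer           = "sles-sap-12-sp5"
  sku             = "gen1"
  version         = "latest"
}
```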
The application tier defines the infrastructure for the application tier, which
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | -- | -- | -- | -- |
-> | `scs_server_count` | Defines the number of scs servers | Required | |
-> | `scs_high_availability` | Defines if the Central Services is highly available | Optional | See [High availability configuration](automation-configure-system.md#high-availability-configuration) |
-> | `scs_instance_number` | The instance number of SCS | Optional | |
-> | `ers_instance_number` | The instance number of ERS | Optional | |
-> | `scs_server_sku` | Defines the Virtual machine SKU to use | Optional | |
-> | `scs_server_image` | Defines the Virtual machine image to use | Required | |
-> | `scs_server_zones` | Defines the availability zones to which the scs servers are deployed | Optional | |
-> | `scs_server_app_nic_ips` | List of IP addresses for the scs server (app subnet) | Optional | Ignored if `app_tier_use_DHCP` is used |
-> | `scs_server_app_admin_nic_ips` | List of IP addresses for the scs server (admin subnet) | Optional | Ignored if `app_tier_use_DHCP` is used |
-> | `scs_server_loadbalancer_ips` | List of IP addresses for the scs load balancer (app subnet) | Optional | Ignored if `app_tier_use_DHCP` is used |
-> | `scs_server_no_ppg` | Controls scs server proximity placement group | Optional | |
-> | `scs_server_no_avset` | Controls scs server availability set placement | Optional | |
-> | `scs_server_tags` | Defines a list of tags to be applied to the scs servers | Optional | |
+> | `scs_server_count` | Defines the number of SCS servers. | Required | |
+> | `scs_high_availability` | Defines if the Central Services is highly available. | Optional | See [High availability configuration](automation-configure-system.md#high-availability-configuration) |
+> | `scs_instance_number` | The instance number of SCS. | Optional | |
+> | `ers_instance_number` | The instance number of ERS. | Optional | |
+> | `scs_server_sku` | Defines the Virtual machine SKU to use. | Optional | |
+> | `scs_server_image` | Defines the Virtual machine image to use. | Required | |
+> | `scs_server_zones` | Defines the availability zones of the SCS servers. | Optional | |
+> | `scs_server_app_nic_ips` | List of IP addresses for the SCS servers (app subnet). | Optional | |
+> | `scs_server_app_nic_secondary_ips[]` | List of secondary IP addresses for the SCS servers (app subnet). | Optional | |
+> | `scs_server_app_admin_nic_ips` | List of IP addresses for the SCS servers (admin subnet). | Optional | |
+> | `scs_server_loadbalancer_ips` | List of IP addresses for the SCS load balancer (app subnet). | Optional | |
+> | `scs_server_no_ppg` | Controls SCS server proximity placement group. | Optional | |
+> | `scs_server_no_avset` | Controls SCS server availability set placement. | Optional | |
+> | `scs_server_tags` | Defines a list of tags to be applied to the SCS servers. | Optional | |
### Application server parameters

> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type | Notes |
-> | -- | - | --| |
-> | `application_server_count` | Defines the number of application servers | Required | |
-> | `application_server_sku` | Defines the Virtual machine SKU to use | Optional | |
-> | `application_server_image` | Defines the Virtual machine image to use | Required | |
-> | `application_server_zones` | Defines the availability zones to which the application servers are deployed | Optional | |
-> | `application_server_app_nic_ips[]` | List of IP addresses for the application server (app subnet) | Optional | Ignored if `app_tier_use_DHCP` is used |
-> | `application_server_app_admin_nic_ips` | List of IP addresses for the application server (admin subnet) | Optional | Ignored if `app_tier_use_DHCP` is used |
-> | `application_server_no_ppg` | Controls application server proximity placement group | Optional | |
-> | `application_server_no_avset` | Controls application server availability set placement | Optional | |
-> | `application_server_tags` | Defines a list of tags to be applied to the application servers | Optional | |
+> | Variable | Description | Type | Notes |
+> | -- | - | --| |
+> | `application_server_count` | Defines the number of application servers. | Required | |
+> | `application_server_sku` | Defines the Virtual machine SKU to use. | Optional | |
+> | `application_server_image` | Defines the Virtual machine image to use. | Required | |
+> | `application_server_zones` | Defines the availability zones to which the application servers are deployed. | Optional | |
+> | `application_server_app_nic_ips[]` | List of IP addresses for the application servers (app subnet). | Optional | |
+> | `application_server_nic_secondary_ips[]` | List of secondary IP addresses for the application servers (app subnet). | Optional | |
+> | `application_server_app_admin_nic_ips` | List of IP addresses for the application server (admin subnet). | Optional | |
+> | `application_server_no_ppg` | Controls application server proximity placement group. | Optional | |
+> | `application_server_no_avset` | Controls application server availability set placement. | Optional | |
+> | `application_server_tags` | Defines a list of tags to be applied to the application servers. | Optional | |
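
A sketch of an application server definition, reusing the image structure shown above (values are illustrative):

```terraform
application_server_count = 2
application_server_zones = ["1", "2"]

application_server_image = {
  os_type         = ""
  source_image_id = ""
  publisher       = "SUSE"
  offer           = "sles-sap-12-sp5"
  sku             = "gen1"
  version         = "latest"
}
```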
### Web dispatcher parameters

> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type | Notes |
-> | | | | |
-> | `webdispatcher_server_count` | Defines the number of web dispatcher servers | Required | |
-> | `webdispatcher_server_sku` | Defines the Virtual machine SKU to use | Optional | |
-> | `webdispatcher_server_image` | Defines the Virtual machine image to use | Optional | |
-> | `webdispatcher_server_zones` | Defines the availability zones to which the web dispatchers are deployed | Optional | |
-> | `webdispatcher_server_app_nic_ips[]` | List of IP addresses for the web dispatcher server (app subnet) | Optional | Ignored if `app_tier_use_DHCP` is used |
-> | `webdispatcher_server_app_admin_nic_ips`| List of IP addresses for the web dispatcher server (admin subnet) | Optional | Ignored if `app_tier_use_DHCP` is used |
-> | `webdispatcher_server_no_ppg` | Controls web proximity placement group placement | Optional | |
-> | `webdispatcher_server_no_avset` | Defines web dispatcher availability set placement | Optional | |
-> | `webdispatcher_server_tags` | Defines a list of tags to be applied to the web dispatcher servers | Optional | |
+> | Variable | Description | Type | Notes |
+> | | | | |
+> | `webdispatcher_server_count` | Defines the number of web dispatcher servers. | Required | |
+> | `webdispatcher_server_sku` | Defines the Virtual machine SKU to use. | Optional | |
+> | `webdispatcher_server_image` | Defines the Virtual machine image to use. | Optional | |
+> | `webdispatcher_server_zones` | Defines the availability zones to which the web dispatchers are deployed. | Optional | |
+> | `webdispatcher_server_app_nic_ips[]` | List of IP addresses for the web dispatcher server (app/web subnet). | Optional | |
+> | `webdispatcher_server_nic_secondary_ips[]` | List of secondary IP addresses for the web dispatcher server (app/web subnet). | Optional | |
+> | `webdispatcher_server_app_admin_nic_ips` | List of IP addresses for the web dispatcher server (admin subnet). | Optional | |
+> | `webdispatcher_server_no_ppg` | Controls web proximity placement group placement. | Optional | |
+> | `webdispatcher_server_no_avset` | Defines web dispatcher availability set placement. | Optional | |
+> | `webdispatcher_server_tags` | Defines a list of tags to be applied to the web dispatcher servers. | Optional | |
## Network parameters
-If the subnets are not deployed using the workload zone deployment, they can be added in the system's tfvars file.
+If the subnets aren't deployed using the workload zone deployment, they can be added in the system's tfvars file.
-The automation framework can either deploy the virtual network and the subnets for new environment deployments (greenfield) or using an existing virtual network and existing subnets for existing environment deployments (brownfield).
+The automation framework can either deploy the virtual network and the subnets (green field deployment) or use an existing virtual network and existing subnets (brown field deployment).
+ - For the green field scenario, the virtual network address space and the subnet address prefixes must be specified
- For the brown field scenario, the Azure resource identifier for the virtual network and the subnets must be specified.

Ensure that the virtual network address space is large enough to host all the resources.
By default, the SAP System deployment uses the credentials from the SAP Workload zone.
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | - | -- | -- | |
-> | `ANF_use_for_HANA_data` | Create Azure NetApp Files volume for HANA data | Optional | |
-> | `ANF_use_existing_data_volume` | Use existing Azure NetApp Files volume for HANA data | Optional | Use for pre-created volumes |
-> | `ANF_data_volume_name` | Azure NetApp Files volume name for HANA data | Optional | |
-> | `ANF_HANA_data_volume_size` | Azure NetApp Files volume size in GB for HANA data | Optional | default size 256 |
-> | `ANF_use_for_HANA_log` | Create Azure NetApp Files volume for HANA data | Optional | |
-> | `ANF_use_existing_log_volume` | Use existing Azure NetApp Files volume for HANA data | Optional | Use for pre-created volumes |
-> | `ANF_log_volume_name` | Azure NetApp Files volume name for HANA data | Optional | |
-> | `ANF_HANA_log_volume_size` | Azure NetApp Files volume size in GB for HANA data | Optional | default size 128 |
+> | `ANF_use_for_HANA_data` | Create Azure NetApp Files volume for HANA data. | Optional | |
+> | `ANF_use_existing_data_volume` | Use existing Azure NetApp Files volume for HANA data. | Optional | Use for pre-created volumes |
+> | `ANF_data_volume_name` | Azure NetApp Files volume name for HANA data. | Optional | |
+> | `ANF_HANA_data_volume_size` | Azure NetApp Files volume size in GB for HANA data. | Optional | default size 256 |
+> | | | | |
+> | `ANF_use_for_HANA_log` | Create Azure NetApp Files volume for HANA log. | Optional | |
+> | `ANF_use_existing_log_volume` | Use existing Azure NetApp Files volume for HANA log. | Optional | Use for pre-created volumes |
+> | `ANF_log_volume_name` | Azure NetApp Files volume name for HANA log. | Optional | |
+> | `ANF_HANA_log_volume_size` | Azure NetApp Files volume size in GB for HANA log. | Optional | default size 128 |
+> | | | | |
+> | `ANF_use_for_HANA_shared` | Create Azure NetApp Files volume for HANA shared. | Optional | |
+> | `ANF_use_existing_shared_volume` | Use existing Azure NetApp Files volume for HANA shared. | Optional | Use for pre-created volumes |
+> | `ANF_shared_volume_name` | Azure NetApp Files volume name for HANA shared. | Optional | |
+> | `ANF_HANA_shared_volume_size` | Azure NetApp Files volume size in GB for HANA shared. | Optional | default size 128 |
+> | | | | |
+> | `ANF_use_for_sapmnt` | Create Azure NetApp Files volume for sapmnt. | Optional | |
+> | `ANF_use_existing_sapmnt_volume` | Use existing Azure NetApp Files volume for sapmnt. | Optional | Use for pre-created volumes |
+> | `ANF_sapmnt_volume_name` | Azure NetApp Files volume name for sapmnt. | Optional | |
+> | `ANF_sapmnt_volume_size` | Azure NetApp Files volume size in GB for sapmnt. | Optional | default size 128 |
+> | | | | |
+> | `ANF_use_for_usrsap` | Create Azure NetApp Files volume for usrsap. | Optional | |
+> | `ANF_use_existing_usrsap_volume` | Use existing Azure NetApp Files volume for usrsap. | Optional | Use for pre-created volumes |
+> | `ANF_usrsap_volume_name` | Azure NetApp Files volume name for usrsap. | Optional | |
+> | `ANF_usrsap_volume_size` | Azure NetApp Files volume size in GB for usrsap. | Optional | default size 128 |
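
As a sketch, enabling new ANF volumes for HANA data and log with illustrative sizes:

```terraform
ANF_use_for_HANA_data     = true
ANF_HANA_data_volume_size = 512

ANF_use_for_HANA_log      = true
ANF_HANA_log_volume_size  = 256
```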
## Oracle parameters
-When deploying Oracle based systems these parameters need to be updated in the sap-parameters.yaml file.
+> [!NOTE]
+> These parameters need to be updated in the sap-parameters.yaml file when deploying Oracle based systems.
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
az keyvault secret set --name "<prefix>-fencing-spn-tenant" --vault-name "<workl
## Next steps

> [!div class="nextstepaction"]
-> [Deploy SAP System](automation-deploy-system.md)
+> [Deploy SAP system](automation-deploy-system.md)
virtual-machines Automation Configure Workload Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-workload-zone.md
The table below contains the Terraform parameters; these parameters need to be entered manually if not using the deployment scripts.
| `tfstate_resource_id` | Required * | Azure resource identifier for the Storage account in the SAP Library that will contain the Terraform state files |
| `deployer_tfstate_key` | Required * | The name of the state file for the Deployer |
-## Generic Parameters
+## Environment parameters
-The table below contains the parameters that define the resource group and the resource naming.
+The table below contains the parameters that define the environment settings.
> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | -- | - | -- |
-> | `environment` | A five-character identifier for the workload zone. For example, `PROD` for a production environment and `NP` for a non-production environment.| Required |
-> | `location` | The Azure region in which to deploy. | Required |
-> | `resource_group_name` | Name of the resource group to be created | Optional |
-> | `resource_group_arm_id` | Azure resource identifier for an existing resource group | Optional |
+> | Variable | Description | Type | Notes |
+> | -- | -- | - | - |
+> | `environment` | Identifier for the workload zone (max 5 chars) | Mandatory | For example, `PROD` for a production environment and `NP` for a non-production environment. |
+> | `location` | The Azure region in which to deploy. | Required | |
+> | `name_override_file` | Name override file | Optional | See [Custom naming](automation-naming-module.md) |
++
+## Resource group parameters
+
+The table below contains the parameters that define the resource group.
++
+> [!div class="mx-tdCol2BreakAll "]
+> | Variable | Description | Type |
+> | -- | -- | - |
+> | `resource_group_name` | Name of the resource group to be created | Optional |
+> | `resource_group_arm_id` | Azure resource identifier for an existing resource group | Optional |
## Network Parameters
-The automation framework supports both creating the virtual network and the subnets for new environment deployments (Green field) or using an existing virtual network and existing subnets for existing environment deployments (Brown field) or a combination of for new environment deployments and for existing environment deployments.
+The automation framework supports both creating the virtual network and the subnets (green field deployments) and using an existing virtual network and existing subnets (brown field deployments), or a combination of the two.
- For the green field scenario, the virtual network address space and the subnet address prefixes must be specified.
- For the brown field scenario, the Azure resource identifier for the virtual network and the subnets must be specified.
Ensure that the virtual network address space is large enough to host all the resources.
The table below contains the networking parameters.

> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type | Notes |
-> | -- | -- | | |
-> | `network_name` | The logical name of the network | Required | |
-> | `network_arm_id` | The Azure resource identifier for the virtual network | Optional | For existing environment deployments |
-> | `network_address_space` | The address range for the virtual network | Mandatory | For new environment deployments |
-> | `admin_subnet_name` | The name of the `admin` subnet | Optional | |
-> | `admin_subnet_address_prefix` | The address range for the `admin` subnet | Mandatory | For new environment deployments |
-> | `admin_subnet_arm_id` | The Azure resource identifier for the `admin` subnet | Mandatory | For existing environment deployments |
-> | `admin_subnet_nsg_name` | The name of the `admin`Network Security Group name | Optional | |
-> | `admin_subnet_nsg_arm_id` | The Azure resource identifier for the `admin` Network Security Group | Mandatory | For existing environment deployments |
-> | `db_subnet_name` | The name of the `db` subnet | Optional | |
-> | `db_subnet_address_prefix` | The address range for the `db` subnet | Mandatory | For new environment deployments |
-> | `db_subnet_arm_id` | The Azure resource identifier for the `db` subnet | Mandatory | For existing environment deployments |
-> | `db_subnet_nsg_name` | The name of the `db` Network Security Group name | Optional | |
-> | `db_subnet_nsg_arm_id` | The Azure resource identifier for the `db` Network Security Group | Mandatory | For existing environment deployments |
-> | `app_subnet_name` | The name of the `app` subnet | Optional | |
-> | `app_subnet_address_prefix` | The address range for the `app` subnet | Mandatory | For new environment deployments |
-> | `app_subnet_arm_id` | The Azure resource identifier for the `app` subnet | Mandatory | For existing environment deployments |
-> | `app_subnet_nsg_name` | The name of the `app` Network Security Group name | Optional | |
-> | `app_subnet_nsg_arm_id` | The Azure resource identifier for the `app` Network Security Group | Mandatory | For existing environment deployments |
-> | `web_subnet_name` | The name of the `web` subnet | Optional | |
-> | `web_subnet_address_prefix` | The address range for the `web` subnet | Mandatory | For new environment deployments |
-> | `web_subnet_arm_id` | The Azure resource identifier for the `web` subnet | Mandatory | For existing environment deployments |
-> | `web_subnet_nsg_name` | The name of the `web` Network Security Group name | Optional | |
-> | `web_subnet_nsg_arm_id` | The Azure resource identifier for the `web` Network Security Group | Mandatory | For existing environment deployments |
-
-## ISCSI Parameters
+> | Variable | Description | Type | Notes |
+> | -- | | | - |
+> | `network_name` | The name of the network. | Optional | |
+> | `network_logical_name` | The logical name of the network, for example 'SAP01' | Required | Used for resource naming. |
+> | `network_arm_id` | The Azure resource identifier for the virtual network. | Optional | For brown field deployments. |
+> | `network_address_space` | The address range for the virtual network. | Mandatory | For green field deployments. |
+> | | | | |
+> | `admin_subnet_name` | The name of the `admin` subnet. | Optional | |
+> | `admin_subnet_address_prefix` | The address range for the `admin` subnet. | Mandatory | For green field deployments. |
+> | `admin_subnet_arm_id` | The Azure resource identifier for the `admin` subnet. | Mandatory | For brown field deployments. |
+> | | | | |
+> | `admin_subnet_nsg_name` | The name of the `admin` Network Security Group. | Optional | |
+> | `admin_subnet_nsg_arm_id` | The Azure resource identifier for the `admin` Network Security Group. | Mandatory | For brown field deployments. |
+> | | | | |
+> | `db_subnet_name` | The name of the `db` subnet. | Optional | |
+> | `db_subnet_address_prefix` | The address range for the `db` subnet. | Mandatory | For green field deployments. |
+> | `db_subnet_arm_id` | The Azure resource identifier for the `db` subnet. | Mandatory | For brown field deployments. |
+> | | | | |
+> | `db_subnet_nsg_name` | The name of the `db` Network Security Group. | Optional | |
+> | `db_subnet_nsg_arm_id` | The Azure resource identifier for the `db` Network Security Group | Mandatory | For brown field deployments. |
+> | | | | |
+> | `app_subnet_name` | The name of the `app` subnet. | Optional | |
+> | `app_subnet_address_prefix` | The address range for the `app` subnet. | Mandatory | For green field deployments. |
+> | `app_subnet_arm_id` | The Azure resource identifier for the `app` subnet. | Mandatory | For brown field deployments. |
+> | | | | |
+> | `app_subnet_nsg_name` | The name of the `app` Network Security Group. | Optional | |
+> | `app_subnet_nsg_arm_id` | The Azure resource identifier for the `app` Network Security Group. | Mandatory | For brown field deployments. |
+> | | | | |
+> | `web_subnet_name` | The name of the `web` subnet. | Optional | |
+> | `web_subnet_address_prefix` | The address range for the `web` subnet. | Mandatory | For green field deployments. |
+> | `web_subnet_arm_id` | The Azure resource identifier for the `web` subnet. | Mandatory | For brown field deployments. |
+> | | | | |
+> | `web_subnet_nsg_name` | The name of the `web` Network Security Group. | Optional | |
+> | `web_subnet_nsg_arm_id` | The Azure resource identifier for the `web` Network Security Group | Mandatory | For brown field deployments. |
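
For brown field deployments, the existing network resources are referenced by their Azure resource identifiers, for example (the IDs below are illustrative):

```terraform
network_arm_id      = "/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.Network/virtualNetworks/<vnet_name>"
admin_subnet_arm_id = "/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.Network/virtualNetworks/<vnet_name>/subnets/<subnet_name>"
```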
++
+The table below contains the networking parameters if Azure NetApp Files are used.
> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type | Notes |
-> | -- | - | | -- |
-> | `iscsi_subnet_name` | The name of the `iscsi` subnet | Optional | |
-> | `iscsi_subnet_address_prefix` | The address range for the `iscsi` subnet | Mandatory | For new environment deployments |
-> | `iscsi_subnet_arm_id` | The Azure resource identifier for the `iscsi` subnet | Mandatory | For existing environment deployments |
-> | `iscsi_subnet_nsg_name` | The name of the `iscsi` Network Security Group name | Optional | |
-> | `iscsi_subnet_nsg_arm_id` | The Azure resource identifier for the `iscsi` Network Security Group | Mandatory | For existing environment deployments |
-> | `iscsi_count` | The number of iSCSI Virtual Machines | Optional | |
-> | `iscsi_use_DHCP` | Controls whether to use dynamic IP addresses provided by the Azure subnet | Optional | |
-> | `iscsi_image` | Defines the Virtual machine image to use, see below | Optional | |
-> | `iscsi_authentication_type` | Defines the default authentication for the iSCSI Virtual Machines | Optional | |
-> | `iscsi__authentication_username` | Administrator account name | Optional | |
-> | `iscsi_nic_ips` | IP addresses for the iSCSI Virtual Machines | Optional | ignored if `iscsi_use_DHCP` is defined |
-
+> | Variable | Description | Type | Notes |
+> | -- | -- | | - |
+> | `anf_subnet_name` | The name of the ANF subnet. | Optional | |
+> | `anf_subnet_arm_id` | The Azure resource identifier for the `ANF` subnet. | Required | When using existing subnets |
+> | `anf_subnet_address_prefix` | The address range for the `ANF` subnet. | Required | When using ANF for new deployments |
+
+**Minimum required network definition**
+
+```terraform
+network_logical_name = "SAP01"
+network_address_space = "10.110.0.0/16"
+
+db_subnet_address_prefix = "10.110.96.0/19"
+app_subnet_address_prefix = "10.110.32.0/19"
-```python
-{
-os_type=""
-source_image_id=""
-publisher="SUSE"
-offer="sles-sap-12-sp5"
-sku="gen1"
-version="latest"
-}
```

### Authentication Parameters

The table below defines the credentials used for defining the Virtual Machine authentication.

> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type |
> | - | - | - |
> | `automation_path_to_public_key` | Path to existing public key | Optional |
> | `automation_path_to_private_key` | Path to existing private key | Optional |
+**Minimum required authentication definition**
+
+```terraform
+automation_username = "azureadm"
+
+```
## Key Vault Parameters

The table below defines the parameters used for defining the Key Vault information.

> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type |
> | - | - | - |
The table below defines the parameters used for defining the Key Vault information.
> | `spn_keyvault_id` | Azure resource identifier for the deployment credentials (SPNs) key vault | Optional |
-## DNS
+## Private DNS
> [!div class="mx-tdCol2BreakAll "]
The table below defines the parameters used for defining the Key Vault information.
## NFS Support

> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | - | -- | -- |
-> | `NFS_provider` | Defines what NFS backend to use, the options are 'AFS' for Azure Files NFS or 'ANF' for Azure NetApp files, 'NONE' for NFS from the SCS server or 'NFS' for an external NFS solution. | Optional |
-> | `transport_volume_size` | Defines the size (in GB) for the 'transport' volume | Optional |
+> | Variable | Description | Type | Notes |
+> | - | -- | -- | |
+> | `NFS_provider` | Defines what NFS backend to use, the options are 'AFS' for Azure Files NFS or 'ANF' for Azure NetApp files, 'NONE' for NFS from the SCS server or 'NFS' for an external NFS solution. | Optional | |
+> | `install_volume_size` | Defines the size (in GB) for the 'install' volume | Optional | |
+> | `install_private_endpoint_id` | Azure resource ID for the 'install' private endpoint | Optional | For existing endpoints|
+> | `transport_volume_size` | Defines the size (in GB) for the 'transport' volume | Optional | |
+> | `transport_private_endpoint_id` | Azure resource ID for the 'transport' private endpoint | Optional | For existing endpoints|
### Azure Files NFS Support

> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type | Notes |
> | - | -- | -- | |
-> | `azure_files_transport_storage_account_id` | Azure resource identifier for the 'transport' storage account. | Optional | For existing environment deployments |
+> | `install_storage_account_id` | Azure resource identifier for the 'install' storage account. | Optional | For brown field deployments. |
+> | `transport_storage_account_id` | Azure resource identifier for the 'transport' storage account. | Optional | For brown field deployments. |
+
+**Minimum required Azure Files NFS definition**
+
+```terraform
+NFS_provider = "AFS"
+use_private_endpoint = true
+
+```
### Azure NetApp Files Support

> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type | Notes |
-> | - | --| -- | |
-> | `ANF_account_arm_id` | Azure resource identifier for the Azure NetApp Files Account | Optional | For existing environment deployments |
-> | `ANF_account_name` | Name for the Azure NetApp Files Account | Optional | |
-> | `ANF_service_level` | Service level for the Azure NetApp Files Capacity Pool | Optional | |
-> | `ANF_pool_size` | The size (in GB) of the Azure NetApp Files Capacity Pool | Optional | |
-> | `anf_subnet_name` | The name of the ANF subnet | Optional | |
-> | `anf_subnet_arm_id` | The Azure resource identifier for the `ANF` subnet | Required | For existing environment deployments |
-> | `anf_subnet_address_prefix` | The address range for the `ANF` subnet | Required | For new environment deployments |
+> | Variable | Description | Type | Notes |
+> | | --| -- | |
+> | `ANF_account_arm_id` | Azure resource identifier for the Azure NetApp Files Account. | Optional | For brown field deployments. |
+> | `ANF_account_name` | Name for the Azure NetApp Files Account. | Optional | |
+> | `ANF_service_level` | Service level for the Azure NetApp Files Capacity Pool. | Optional | |
+> | `ANF_use_existing_pool` | Use existing the Azure NetApp Files Capacity Pool. | Optional | |
+> | `ANF_pool_size` | The size (in GB) of the Azure NetApp Files Capacity Pool. | Optional | |
+> | `ANF_pool_name` | The name of the Azure NetApp Files Capacity Pool. | Optional | |
+> | | | | |
+> | `ANF_use_existing_transport_volume` | Defines if an existing transport volume is used. | Optional | |
+> | `ANF_transport_volume_name` | Defines the transport volume name. | Optional | |
+> | `ANF_transport_volume_size` | Defines the size of the transport volume in GB. | Optional | |
+> | `ANF_transport_volume_throughput` | Defines the throughput of the transport volume. | Optional | |
+> | | | | |
+> | `ANF_use_existing_install_volume` | Defines if an existing install volume is used. | Optional | |
+> | `ANF_install_volume_name` | Defines the install volume name. | Optional | |
+> | `ANF_install_volume_size` | Defines the size of the install volume in GB. | Optional | |
+> | `ANF_install_volume_throughput` | Defines the throughput of the install volume. | Optional | |
++
+**Minimum required ANF definition**
+
+```terraform
+NFS_provider = "ANF"
+anf_subnet_address_prefix = "10.110.64.0/27"
+ANF_service_level = "Ultra"
+
+```
+
+## ISCSI Parameters
+> [!div class="mx-tdCol2BreakAll "]
+> | Variable | Description | Type | Notes |
+> | -- | - | | -- |
+> | `iscsi_subnet_name` | The name of the `iscsi` subnet. | Optional | |
+> | `iscsi_subnet_address_prefix` | The address range for the `iscsi` subnet. | Mandatory | For green field deployments. |
+> | `iscsi_subnet_arm_id` | The Azure resource identifier for the `iscsi` subnet. | Mandatory | For brown field deployments. |
+> | `iscsi_subnet_nsg_name` | The name of the `iscsi` Network Security Group name | Optional | |
+> | `iscsi_subnet_nsg_arm_id` | The Azure resource identifier for the `iscsi` Network Security Group | Mandatory | For brown field deployments. |
+> | `iscsi_count` | The number of iSCSI Virtual Machines | Optional | |
+> | `iscsi_use_DHCP` | Controls whether to use dynamic IP addresses provided by the Azure subnet | Optional | |
+> | `iscsi_image` | Defines the Virtual machine image to use, see below | Optional | |
+> | `iscsi_authentication_type` | Defines the default authentication for the iSCSI Virtual Machines | Optional | |
+> | `iscsi__authentication_username` | Administrator account name | Optional | |
+> | `iscsi_nic_ips` | IP addresses for the iSCSI Virtual Machines | Optional | ignored if `iscsi_use_DHCP` is defined |
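
A sketch of an iSCSI definition for a green field deployment, with illustrative values:

```terraform
iscsi_count                 = 3
iscsi_use_DHCP              = true
iscsi_subnet_address_prefix = "10.110.64.64/27"
```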
+
## Other Parameters

> [!div class="mx-tdCol2BreakAll "]
The table below defines the parameters used for defining the Key Vault information.
> | | - | -- | - |
> | `enable_purge_control_for_keyvaults` | Boolean flag controlling if purge control is enabled on the Key Vault. | Optional | Use only for test deployments |
> | `use_private_endpoint` | Boolean flag controlling if private endpoints are used for storage accounts and key vaults. | Optional | |
-> | `diagnostics_storage_account_arm_id` | The Azure resource identifier for the diagnostics storage account | Required | For existing environment deployments |
-> | `witness_storage_account_arm_id` | The Azure resource identifier for the witness storage account | Required | For existing environment deployments |
+> | `diagnostics_storage_account_arm_id` | The Azure resource identifier for the diagnostics storage account | Required | For brown field deployments. |
+> | `witness_storage_account_arm_id` | The Azure resource identifier for the witness storage account | Required | For brown field deployments. |
## Next Step
virtual-machines Automation Deploy Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-deploy-control-plane.md
The control plane deployment for the [SAP deployment automation framework on Azu
The SAP Deployment Automation Framework uses Service Principals when doing the deployments. You can create the Service Principal for the Control Plane deployment with the following steps, using an account with permissions to create Service Principals:

```azurecli
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/<subscriptionID>" --name="<environment>-Deployment-Account"
cp -Rp sap-automation/samples/WORKSPACES WORKSPACES
```
+Run the following command to deploy the control plane:
+ ```bash
-cd ~/Azure_SAP_Automated_Deployment/WORKSPACES
az logout
az login
-export DEPLOYMENT_REPO_PATH=~/Azure_SAP_Automated_Deployment/sap-automation
-export ARM_SUBSCRIPTION_ID=<subscriptionID>
-export subscriptionID=<subscriptionID>
-export spn_id=<appID>
-export spn_secret=<password>
-export tenant_id=<tenant>
-export region_code=WEEU
-
-${DEPLOYMENT_REPO_PATH}/deploy/scripts/prepare_region.sh \
- --deployer_parameter_file DEPLOYER/MGMT-${region_code}-DEP00-INFRASTRUCTURE/MGMT-${region_code}-DEP00-INFRASTRUCTURE.tfvars \
- --library_parameter_file LIBRARY/MGMT-${region_code}-SAP_LIBRARY/MGMT-${region_code}-SAP_LIBRARY.tfvars \
- --subscription $subscriptionID \
- --spn_id "${spn_id}" \
- --spn_secret "${spn_secret}" \
- --tenant_id "${tenant_id}"
-```
+cd ~/Azure_SAP_Automated_Deployment/WORKSPACES
+
+ export subscriptionId="<subscriptionId>"
+ export spn_id="<appId>"
+ export spn_secret="<password>"
+ export tenant_id="<tenantId>"
+ export env_code="MGMT"
+ export region_code="<region_code>"
+
+ export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
+ export ARM_SUBSCRIPTION_ID="${subscriptionId}"
+
+ ${DEPLOYMENT_REPO_PATH}/deploy/scripts/prepare_region.sh \
+ --deployer_parameter_file DEPLOYER/${env_code}-${region_code}-DEP00-INFRASTRUCTURE/${env_code}-${region_code}-DEP00-INFRASTRUCTURE.tfvars \
+ --library_parameter_file LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars \
+ --subscription "${subscriptionId}" \
+ --spn_id "${spn_id}" \
+ --spn_secret "${spn_secret}" \
+ --tenant_id "${tenant_id}" \
+ --auto-approve
+ ```
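Once the script completes, you can sanity-check the result. A minimal sketch, assuming the resource group names follow the `MGMT-<region_code>-` convention used above:

```azurecli
# Hypothetical check: list the resource groups created for the control plane.
az group list --query "[?starts_with(name,'MGMT-')].name" --output tsv
```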
# [Windows](#tab/windows)
cd C:\Azure_SAP_Automated_Deployment\WORKSPACES
New-SAPAutomationRegion -DeployerParameterfile .\DEPLOYER\MGMT-WEEU-DEP00-INFRASTRUCTURE\MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars -LibraryParameterfile .\LIBRARY\MGMT-WEEU-SAP_LIBRARY\MGMT-WEEU-SAP_LIBRARY.tfvars -Subscription $subscription -SPN_id $appId -SPN_password $spn_secret -Tenant_id $tenant_id
```

> [!NOTE]
> Be sure to replace the sample value `<subscriptionID>` with your subscription ID.
> Replace the `<appID>`, `<password>`, `<tenant>` values with the output values of the SPN creation.
+# [Azure DevOps](#tab/devops)
+
+Open (https://dev.azure.com) and go to your Azure DevOps project.
+
+> [!NOTE]
+> Ensure that the 'Deployment_Configuration_Path' variable in the 'SDAF-General' variable group is set to the folder that contains your configuration files, for this example you can use 'samples/WORKSPACES'.
+
+The deployment will use the configuration defined in the Terraform variable files located in the 'samples/WORKSPACES/DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE' and 'samples/WORKSPACES/LIBRARY/MGMT-WEEU-SAP_LIBRARY' folders.
+
+Run the pipeline by selecting the _Deploy control plane_ pipeline from the Pipelines section. Enter the configuration names for the deployer and the SAP library. Use 'MGMT-WEEU-DEP00-INFRASTRUCTURE' as the Deployer configuration name and 'MGMT-WEEU-SAP_LIBRARY' as the SAP Library configuration name.
+You can track the progress in the Azure DevOps portal. Once the deployment is complete, you can see the Control Plane details in the _Extensions_ tab.
+
+ :::image type="content" source="media/automation-devops/automation-run-pipeline-control-plane.png" alt-text="Screenshot of the Azure DevOps pipeline run results.":::
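Alternatively, the pipeline can be queued from the command line. A sketch, assuming the Azure DevOps CLI extension is installed and that the pipeline is named _Deploy control plane_ in the `SAP-Deployment` project:

```azurecli
# Hypothetical sketch: queue the control plane pipeline with the Azure
# DevOps CLI. The organization URL, project, and pipeline name are
# assumptions; adjust them to your setup.
az pipelines run --name "Deploy control plane" \
  --organization "https://dev.azure.com/<organization>" \
  --project "SAP-Deployment"
```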
### Manually configure the deployer using Azure Bastion
cd sap-automation/deploy/scripts
The script will install Terraform and Ansible and configure the deployer.
-### Manually configure the deployer (deployments without public IP)
-
-If you deploy the deployer without a public IP Terraform isn't able to configure the deployer Virtual Machine as it will not be able to connect to it.
+### Manually configure the deployer
> [!NOTE]
> You need to connect to the deployer Virtual Machine from a computer that is able to reach the Azure Virtual Network.
virtual-machines Automation Deploy System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-deploy-system.md
New-SAPSystem -Parameterfile DEV-WEEU-SAP01-X01.tfvars
-Type sap_system
```
+# [Azure DevOps](#tab/devops)
+
+Open (https://dev.azure.com) and go to your Azure DevOps Services project.
+
+> [!NOTE]
+> Ensure that the 'Deployment_Configuration_Path' variable in the 'SDAF-General' variable group is set to the folder that contains your configuration files, for this example you can use 'samples/WORKSPACES'.
+
+The deployment will use the configuration defined in the Terraform variable file located in the 'samples/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X00' folder.
+
+Run the pipeline by selecting the _SAP system deployment_ pipeline from the Pipelines section. Enter 'DEV-WEEU-SAP01-X00' as the SAP System configuration name.
+
+You can track the progress in the Azure DevOps Services portal. Once the deployment is complete, you can see the SAP System details in the _Extensions_ tab.
+ ### Output files
-The deployment will create a Ansible hosts file (`SID_hosts.yaml`) and an Ansible parameter file (`sap-parameters.yaml`) that are required input for the Ansible playbooks.
+The deployment will create an Ansible hosts file (`SID_hosts.yaml`) and an Ansible parameter file (`sap-parameters.yaml`) that are required input for the Ansible playbooks.
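As a sketch of how those files are consumed, assuming `X00` as the SID and a hypothetical playbook path based on the framework's repository layout:

```bash
# Hypothetical sketch: run a configuration playbook against the generated
# inventory and parameter file. The playbook path is an assumption.
cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X00
ansible-playbook -i X00_hosts.yaml \
  --extra-vars "@sap-parameters.yaml" \
  "${DEPLOYMENT_REPO_PATH}/deploy/ansible/playbook_01_os_base_config.yaml"
```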
## Next steps

> [!div class="nextstepaction"]
virtual-machines Automation Deploy Workload Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-deploy-workload-zone.md
environment="DEV"
# The location value is a mandatory field; it is used to control where the resources are deployed
location="westeurope"
-# The network logical name is mandatory - it is used in the naming convention and should map to the workload virtual network logical name
+# The network logical name is mandatory - it is used in the naming convention and should map to the workload virtual network logical name
network_name="SAP01"
# network_address_space is a mandatory parameter when an existing Virtual network is not used
The SAP deployment automation framework uses Service Principals for deployments.
```azurecli-interactive
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/<subscriptionID>" --name="<environment>-Deployment-Account"
-
```

> [!IMPORTANT]
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/<subscrip
> - password
> - tenant
-Assign the correct permissions to the Service Principal:
+Assign the correct permissions to the Service Principal:
```azurecli
az role assignment create --assignee <appId> \
az role assignment create --assignee <appId> \
```

## Deploying the SAP Workload zone
-
+ The sample Workload Zone configuration file `DEV-WEEU-SAP01-INFRASTRUCTURE.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE` folder. Running the command below will deploy the SAP Workload Zone.
cp -R sap-automation/samples/WORKSPACES WORKSPACES
```bash
-export subscriptionID="<subscriptionID>"
-export spn_id="<appID>"
-export spn_secret="<password>"
-export tenant_id="<tenant>"
-export region_code="WEEU"
-export storageaccount="<storageaccount>"
-export keyvault="<keyvault>"
+export subscriptionId="<subscriptionId>"
+export spn_id="<appId>"
+export spn_secret="<password>"
+export tenant_id="<tenantId>"
+export env_code="DEV"
+export region_code="<region_code>"
export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
-export ARM_SUBSCRIPTION_ID="${subscriptionID}"
-
-cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/DEV-${region_code}-SAP01-INFRASTRUCTURE
-
-${DEPLOYMENT_REPO_PATH}/deploy/scripts/install_workloadzone.sh \
- --parameterfile ./DEV-${region_code}-SAP01-INFRASTRUCTURE.tfvars \
- --deployer_environment MGMT \
- --deployer_tfstate_key MGMT-${region_code}-DEP00-INFRASTRUCTURE.terraform.tfstate \
- --subscription "${subscriptionID}" \
- --spn_id "${spn_id}" \
- --spn_secret "${spn_secret}" \
- --tenant_id "${tenant_id}" \
- --keyvault "${keyvault}" \
- --storageaccountname "${storageaccount}"
+export ARM_SUBSCRIPTION_ID="${subscriptionId}"
+
+cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/${env_code}-${region_code}-SAP01-INFRASTRUCTURE
+
+${DEPLOYMENT_REPO_PATH}/deploy/scripts/install_workloadzone.sh \
+  --parameterfile ./${env_code}-${region_code}-SAP01-INFRASTRUCTURE.tfvars \
+  --deployer_environment MGMT \
+  --subscription "${subscriptionId}" \
+  --spn_id "${spn_id}" \
+  --spn_secret "${spn_secret}" \
+  --tenant_id "${tenant_id}"
```

# [Windows](#tab/windows)
$region_code="WEEU"
cd C:\Azure_SAP_Automated_Deployment\WORKSPACES\LANDSCAPE\DEV-$region_code-SAP01-INFRASTRUCTURE
-New-SAPWorkloadZone -Parameterfile DEV-$region_code-SAP01-INFRASTRUCTURE.tfvars
+New-SAPWorkloadZone -Parameterfile DEV-$region_code-SAP01-INFRASTRUCTURE.tfvars
-Subscription $subscription -SPN_id $spn_id -SPN_password $spn_secret -Tenant_id $tenant_id -State_subscription $statefile_subscription -Vault $keyvault -StorageAccountName $storageaccount
```
New-SAPWorkloadZone -Parameterfile DEV-$region_code-SAP01-INFRASTRUCTURE.tfvars
> Replace `<storageaccount>` with the name of the storage account containing the Terraform state files
> Replace `<statefile_subscription>` with the subscription ID for the storage account containing the Terraform state files
+# [Azure DevOps](#tab/devops)
+
+Open (https://dev.azure.com) and go to your Azure DevOps Services project.
+
+> [!NOTE]
+> Ensure that the 'Deployment_Configuration_Path' variable in the 'SDAF-General' variable group is set to the folder that contains your configuration files, for this example you can use 'samples/WORKSPACES'.
+
+The deployment will use the configuration defined in the Terraform variable file located in the 'samples/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE' folder.
+
+Run the pipeline by selecting the _Deploy workload zone_ pipeline from the Pipelines section. Enter the workload zone configuration name and the deployer environment name. Use 'DEV-WEEU-SAP01-INFRASTRUCTURE' as the Workload zone configuration name and 'MGMT' as the Deployer Environment Name.
+
+You can track the progress in the Azure DevOps Services portal. Once the deployment is complete, you can see the Workload Zone details in the _Extensions_ tab.
> [!TIP]
> If the scripts fail to run, it can sometimes help to clear the local cache files by removing the `~/.sap_deployment_automation/` and `~/.terraform.d/` directories before running the scripts again.
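To sanity-check a deployed workload zone, here's a minimal sketch, assuming the resource group follows the naming convention used in this example:

```azurecli
# Hypothetical check: list the resources in the workload zone resource group.
az resource list --resource-group "DEV-WEEU-SAP01-INFRASTRUCTURE" \
  --query "[].{name:name, type:type}" --output table
```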
-## Next step
+## Next steps
> [!div class="nextstepaction"]
> [About SAP system deployment in automation framework](automation-configure-system.md)
virtual-machines Automation Deployment Framework https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-deployment-framework.md
description: Overview of the framework and tooling for the SAP deployment automa
Previously updated : 11/17/2021 Last updated : 05/29/2022

# SAP deployment automation framework on Azure
-The [SAP deployment automation framework on Azure](https://github.com/Azure/sap-automation) is an open-source orchestration tool for deploying, installing and maintaining SAP environments. You can create infrastructure for SAP landscapes based on SAP HANA and NetWeaver with AnyDB on any of the SAP-supported operating system versions and deploy them into any Azure region. The framework uses [Terraform](https://www.terraform.io/) for infrastructure deployment, and [Ansible](https://www.ansible.com/) for the operating system and application configuration.
+The [SAP deployment automation framework on Azure](https://github.com/Azure/sap-automation) is an open-source orchestration tool for deploying, installing and maintaining SAP environments. You can create infrastructure for SAP landscapes based on SAP HANA and NetWeaver with AnyDB. The framework uses [Terraform](https://www.terraform.io/) for infrastructure deployment, and [Ansible](https://www.ansible.com/) for the operating system and application configuration. The systems can be deployed on any of the SAP-supported operating system versions and into any Azure region.
Hashicorp [Terraform](https://www.terraform.io/) is an open-source tool for provisioning and managing cloud infrastructure.
The [automation framework](https://github.com/Azure/sap-automation) has two main
- Deployment infrastructure (control plane) - SAP Infrastructure (SAP Workload)
-You will use the control plane of the SAP deployment automation framework to deploy the SAP Infrastructure and the SAP application infrastructure. The deployment uses Terraform templates to create the [infrastructure as a service (IaaS)](https://azure.microsoft.com/overview/what-is-iaas) defined infrastructure to host the SAP Applications.
+You'll use the control plane of the SAP deployment automation framework to deploy the SAP Infrastructure and the SAP application infrastructure. The deployment uses Terraform templates to create the [infrastructure as a service (IaaS)](https://azure.microsoft.com/overview/what-is-iaas) defined infrastructure to host the SAP Applications.
> [!NOTE] > This automation framework is based on Microsoft best practices and principles for SAP on Azure. Review the [get-started guide for SAP on Azure virtual machines (Azure VMs)](get-started.md) to understand how to use certified virtual machines and storage solutions for stability, reliability, and performance.
The automation framework can be used to deploy the following SAP architectures:
- Distributed
- Distributed (Highly Available)
-In the Standalone architecture all the SAP roles are installed on a single server. In the distributed architecture you can separate the database server and the application tier. The application tier can further be separated in two by having SAP Central Services on a virtual machine and one or more application servers.
+In the Standalone architecture, all the SAP roles are installed on a single server. In the distributed architecture, you can separate the database server and the application tier. The application tier can further be separated in two by having SAP Central Services on a virtual machine and one or more application servers.
-The Distributed (Highly Available) deployment is similar to the Distributed architecture but either the datebase or SAP Central Services are both highly available using two virtual machines each with Pacemaker clusters.
+The Distributed (Highly Available) deployment is similar to the Distributed architecture. In this deployment, the database and/or SAP Central Services can be made highly available using two virtual machines, each with Pacemaker clusters.
-The dependency between the control plane and the application plane is illustrated in the diagram below. In a typical deployment a single control plane is used to manage multiple SAP deployments.
+The dependency between the control plane and the application plane is illustrated in the diagram below. In a typical deployment, a single control plane is used to manage multiple SAP deployments.
:::image type="content" source="./media/automation-deployment-framework/control-plane-sap-infrastructure.png" alt-text="Diagram showing the SAP deployment automation framework's dependency between the control plane and application plane.":::
-The following diagram shows the key components of the control plane and workload zone.
-The application configuration will be performed from the Ansible Controller in the Control plane using a set of pre-defined playbooks. These playbooks will:
-
-- Configure base operating system settings
-- Configure SAP-specific operating system settings
-- Make the installation media available in the system
-- Install the SAP system
-- Install the SAP database (SAP HANA, AnyDB)
-- Configure high availability (HA) using Pacemaker
-- Configure high availability (HA) for your SAP database
-
## About the control plane
-The control plane houses the deployment infrastructure from which other environments will be deployed. Once the control plane is deployed, it rarely needs to be redeployed, if ever.
+The control plane houses the deployment infrastructure from which other environments will be deployed. Once the control plane is deployed, it rarely needs to be redeployed, if ever.
The control plane provides the following services

- Terraform Deployment Infrastructure
The key components of the control plane are:
- Storage account for SAP installation media
- Azure Key Vault for deployment credentials
+The following diagram shows the key components of the control plane and workload zone.
+The application configuration will be performed from the Ansible Controller in the Control plane using a set of pre-defined playbooks. These playbooks will:
+
+- Configure base operating system settings
+- Configure SAP-specific operating system settings
+- Make the installation media available in the system
+- Install the SAP system
+- Install the SAP database (SAP HANA, AnyDB)
+- Configure high availability (HA) using Pacemaker
+- Configure high availability (HA) for your SAP database
+For more information about how to configure and deploy the control plane, see [Configuring the control plane](automation-configure-control-plane.md) and [Deploying the control plane](automation-deploy-control-plane.md).
+ ### Deployer Virtual Machine
-This virtual machine is used to run the orchestration scripts that will deploy the Azure resources using Terraform. It is also the Ansible Controller and is used to execute the Ansible playbooks on all the managed nodes, i.e the virtual machines of an SAP deployment.
+This virtual machine is used to run the orchestration scripts that will deploy the Azure resources using Terraform. It's also the Ansible Controller and is used to execute the Ansible playbooks on all the managed nodes, that is, the virtual machines of an SAP deployment.
## About the SAP Workload
The SAP Workload has two main components:
## About the SAP Workload Zone
-The workload zone allows for partitioning of the deployments into different environments (Development,
-Test, Production)
+The workload zone allows for partitioning of the deployments into different environments (Development, Test, Production). The Workload zone will provide the shared services (networking, credentials management) to the SAP systems.
+
The SAP Workload Zone provides the following services to the SAP Systems

- Virtual Networking infrastructure
-- Secure storage for system credentials (Virtual Machines and SAP)
+- Azure Key Vault for system credentials (Virtual Machines and SAP)
- Shared Storage (optional)
+For more information about how to configure and deploy the SAP Workload zone, see [Configuring the workload zone](automation-configure-workload-zone.md) and [Deploying the SAP workload zone](automation-deploy-workload-zone.md).
## About the SAP System
The system deployment consists of the virtual machines that will be running the
The SAP System provides the following services

- Virtual machine, storage, and supporting infrastructure to host the SAP applications.
+For more information about how to configure and deploy the SAP System, see [Configuring the SAP System](automation-configure-system.md) and [Deploying the SAP system](automation-deploy-system.md).
## Glossary

The following terms are important concepts for understanding the automation framework.

### SAP concepts
-| Term | Description |
-| - | -- |
-| System | An instance of an SAP application that contains the resources the application needs to run. Defined by a unique three-letter identifier, the **SID**.
-| Landscape | A collection of systems in different environments within an SAP application. For example, SAP ERP Central Component (ECC), SAP customer relationship management (CRM), and SAP Business Warehouse (BW). |
-| Workload zone | Partitions the SAP applications to environments, such as non-production and production environments or development, quality assurance, and production environments. Provides shared resources, such as virtual networks and key vault, to all systems within. |
+> [!div class="mx-tdCol2BreakAll "]
+> | Term | Description |
+> | - | -- |
> | System | An instance of an SAP application that contains the resources the application needs to run. Defined by a unique three-letter identifier, the **SID**. |
+> | Landscape | A collection of systems in different environments within an SAP application. For example, SAP ERP Central Component (ECC), SAP customer relationship management (CRM), and SAP Business Warehouse (BW). |
+> | Workload zone | Partitions the SAP applications to environments, such as non-production and production environments or development, quality assurance, and production environments. Provides shared resources, such as virtual networks and key vault, to all systems within. |
The following diagram shows the relationships between SAP systems, workload zones (environments), and landscapes. In this example setup, the customer has three SAP landscapes: ECC, CRM, and BW. Each landscape contains three workload zones: production, quality assurance, and development. Each workload zone contains one or more systems.
The following diagram shows the relationships between SAP systems, workload zone
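Putting the glossary together, the configuration names used throughout this documentation compose these concepts. A sketch of the convention, using example values that appear elsewhere in these articles:

```bash
# Sketch of the naming convention behind the configuration names:
# <ENVIRONMENT>-<REGION>-<VNET>-<SID> identifies an SAP system deployment.
env_code="DEV"; region_code="WEEU"; vnet_code="SAP01"; sid="X00"
echo "${env_code}-${region_code}-${vnet_code}-${sid}"   # DEV-WEEU-SAP01-X00
```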
### Deployment components
-| Term | Description | Scope |
-| - | -- | -- |
-| Deployer | A virtual machine that can execute Terraform and Ansible commands. Deployed to a virtual network, either new or existing, that is peered to the SAP virtual network. | Region |
-| Library | Provides storage for the Terraform state files and SAP installation media. | Region |
-| Workload zone | Contains the virtual network into which you deploy the SAP system or systems. Also contains a key vault that holds the credentials for the systems in the environment. | Workload zone |
-| System | The deployment unit for the SAP application (SID). Contains virtual machines and supporting infrastructure artifacts, such as load balancers and availability sets. | Workload zone |
+> [!div class="mx-tdCol2BreakAll "]
+> | Term | Description | Scope |
+> | - | -- | -- |
+> | Deployer | A virtual machine that can execute Terraform and Ansible commands. | Region |
+> | Library | Provides storage for the Terraform state files and the SAP installation media. | Region |
+> | Workload zone | Contains the virtual network for the SAP systems and a key vault that holds the system credentials | Workload zone |
+> | System | The deployment unit for the SAP application (SID). Contains all infrastructure assets | Workload zone |
## Next steps

> [!div class="nextstepaction"]
> [Get started with the deployment automation framework](automation-get-started.md)
+> [Configuring Azure DevOps for the automation framework](automation-configure-devops.md)
+> [Configuring the control plane](automation-configure-control-plane.md)
+> [Configuring the workload zone](automation-configure-workload-zone.md)
+> [Configuring the SAP System](automation-configure-system.md)
+
virtual-machines Automation Devops Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-devops-tutorial.md
You'll perform the following tasks during this lab:
## Overview

These steps reference and use the [default naming convention](automation-naming.md) for the automation framework. Example values are also used for naming throughout the configurations. In this tutorial, the following names are used:
-- Azure DevOps project name is `SAP-Deployment`
-- Azure DevOps repository name is `sap-automation`
+- Azure DevOps Services project name is `SAP-Deployment`
+- Azure DevOps Services repository name is `sap-automation`
- The control plane environment is named `MGMT`, in the region West Europe (`WEEU`) and installed in the virtual network `DEP00`, giving a deployer configuration name: `MGMT-WEEU-DEP00-INFRASTRUCTURE` - The SAP workload zone has the environment name `DEV` and is in the same region as the control plane using the virtual network `SAP01`, giving the SAP workload zone configuration name: `DEV-WEEU-SAP01-INFRASTRUCTURE`
Ensure that the 'Deployment_Configuration_Path' variable in the 'SDAF-General' v
Run the pipeline by selecting the _Deploy control plane_ pipeline from the Pipelines section. Enter 'MGMT-WEEU-DEP00-INFRASTRUCTURE' as the Deployer configuration name and 'MGMT-WEEU-SAP_LIBRARY' as the SAP Library configuration name.
-You can track the progress in the Azure DevOps portal. Once the deployment is complete, you can see the Control Plane details in the _Extensions_ tab.
+You can track the progress in the Azure DevOps Services portal. Once the deployment is complete, you can see the Control Plane details in the _Extensions_ tab.
+
+ :::image type="content" source="media/automation-devops/automation-run-pipeline-control-plane.png" alt-text="Screenshot of the DevOps pipeline run results.":::
- :::image type="content" source="media/automation-devops/automation-run-pipeline-control-plane.png" alt-text="Picture showing the DevOps tutorial run pipeline results":::
## Deploy the Workload zone
The deployment will use the configuration defined in the Terraform variable file
Run the pipeline by selecting the _Deploy workload zone_ pipeline from the Pipelines section. Enter 'DEV-WEEU-SAP01-INFRASTRUCTURE' as the Workload zone configuration name and 'MGMT' as the Deployer Environment Name.
-You can track the progress in the Azure DevOps portal. Once the deployment is complete, you can see the Workload Zone details in the _Extensions_ tab.
+You can track the progress in the Azure DevOps Services portal. Once the deployment is complete, you can see the Workload Zone details in the _Extensions_ tab.
## Deploy the SAP System
The deployment will use the configuration defined in the Terraform variable file
Run the pipeline by selecting the _SAP system deployment_ pipeline from the Pipelines section. Enter 'DEV-WEEU-SAP01-X00' as the SAP System configuration name.
-You can track the progress in the Azure DevOps portal. Once the deployment is complete, you can see the SAP System details in the _Extensions_ tab.
+You can track the progress in the Azure DevOps Services portal. Once the deployment is complete, you can see the SAP System details in the _Extensions_ tab.
## Download the SAP Software
-Run the pipeline by selecting the _SAP software acquisition_ pipeline from the Pipelines section. Enter 'S41909SPS03_v0010ms' as the Name of Bill of Materials (BoM), 'MGMT' as the Control Plane Environment name: MGMT and 'WEEU' as the
+Run the pipeline by selecting the _SAP software acquisition_ pipeline from the Pipelines section. Enter 'S41909SPS03_v0011ms' as the Name of the Bill of Materials (BoM), 'MGMT' as the Control Plane Environment name, and 'WEEU' as the
Control Plane (SAP Library) location code. You can track the progress in the Azure DevOps portal.
Run the pipeline by selecting the _Configuration and SAP installation_ pipeline
Choose the playbooks to execute.
-You can track the progress in the Azure DevOps portal.
+You can track the progress in the Azure DevOps Services portal.
## Run the Repository update pipeline
Enter 'DEV-WEEU-SAP01-INFRASTRUCTURE' as the SAP workload zone configuration nam
Enter 'MGMT-WEEU-DEP00-INFRASTRUCTURE' as the Deployer configuration name and 'MGMT-WEEU-SAP_LIBRARY' as the SAP Library configuration name.
-## Next step
+## Next steps
> [!div class="nextstepaction"]
> [Configure Control Plane](automation-configure-control-plane.md)
virtual-network Routing Preference Azure Kubernetes Service Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-azure-kubernetes-service-cli.md
Title: 'Tutorial: Configure routing preference for an Azure Kubernetes service - Azure CLI'
+ Title: 'Tutorial: Configure routing preference for an Azure Kubernetes Service - Azure CLI'
-description: Use this tutorial to learn how to configure routing preference for an Azure Kubernetes service.
+description: Use this tutorial to learn how to configure routing preference for an Azure Kubernetes Service.
ms.devlang: azurecli
-# Tutorial: Configure routing preference for an Azure Kubernetes service using the Azure CLI
+# Tutorial: Configure routing preference for an Azure Kubernetes Service using the Azure CLI
This article shows you how to configure routing preference via ISP network (**Internet** option) for a Kubernetes cluster using Azure CLI. Routing preference is set by creating a public IP address of routing preference type **Internet** and then using it while creating the AKS cluster.
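For example, here's a minimal sketch of creating such a public IP; the resource group and IP names are placeholders:

```azurecli
# Hypothetical sketch: create a Standard SKU public IP with routing
# preference set to 'Internet'. Resource names are placeholders.
az network public-ip create \
  --resource-group myResourceGroup \
  --name myRoutingPrefPublicIP \
  --sku Standard \
  --allocation-method Static \
  --ip-tags 'RoutingPreference=Internet'
```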