Updates from: 04/22/2022 01:07:02
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Partner Azure Web Application Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-azure-web-application-firewall.md
- Title: Tutorial to configure Azure Active Directory B2C with Azure Web Application Firewall
-description: Tutorial to configure Azure Active Directory B2C with Azure Web Application Firewall to protect your applications from malicious attacks
- Previously updated : 08/17/2021
-# Tutorial: Configure Azure Web Application Firewall with Azure Active Directory B2C
-
-In this sample tutorial, learn how to enable the [Azure Web Application Firewall (WAF)](https://azure.microsoft.com/services/web-application-firewall/#overview) solution for an Azure Active Directory B2C (Azure AD B2C) tenant with a custom domain. Azure WAF provides centralized protection of your web applications from common exploits and vulnerabilities.
-
->[!NOTE]
->This feature is in public preview.
-
-## Prerequisites
-
-To get started, you'll need:
-
-- An Azure subscription – If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-- [An Azure AD B2C tenant](tutorial-create-tenant.md) – The authorization server, responsible for verifying the user's credentials using the custom policies defined in the tenant. It's also known as the identity provider.
-- [Azure Front Door (AFD)](../frontdoor/index.yml) – Responsible for enabling custom domains for the Azure AD B2C tenant.
-- [Azure WAF](https://azure.microsoft.com/services/web-application-firewall/#overview) – Manages all traffic that is sent to the authorization server.
-## Azure AD B2C setup
-
-To use custom domains in Azure AD B2C, you must use the custom domain feature provided by AFD. Learn how to [enable Azure AD B2C custom domains](./custom-domain.md?pivots=b2c-user-flow).
-
-After the custom domain for Azure AD B2C is successfully configured using AFD, [test the custom domain](./custom-domain.md?pivots=b2c-custom-policy#test-your-custom-domain) before you proceed further.
-
-## Onboard with Azure WAF
-
-To enable Azure WAF, configure a WAF policy and associate that policy with the AFD profile for protection.
-
-### Create a WAF policy
-
-Create a basic WAF policy with managed Default Rule Set (DRS) in the [Azure portal](https://portal.azure.com).
-
-1. Go to the [Azure portal](https://portal.azure.com). Select **Create a resource** and then search for Azure WAF. Select **Azure Web Application Firewall (WAF)** > **Create**.
-
-2. On the **Create a WAF policy** page, select the **Basics** tab. Enter the following information and accept the defaults for the remaining settings.
-
-| Value | Description |
-|:--|:-|
-| Policy for | Global WAF (Front Door)|
-| Front Door SKU | Select the Basic, Standard, or Premium SKU |
-|Subscription | Select your Front Door subscription name |
-| Resource group | Select your Front Door resource group name |
-| Policy name | Enter a unique name for your WAF policy |
-| Policy state | Set as Enabled |
-| Policy mode | Set as Detection |
-
-3. Select **Review + create**.
-
-4. Go to the **Association** tab of the Create a WAF policy page, select **+ Associate a Front Door profile**, and enter the following settings:
-
-| Value | Description |
-|:-|:-|
-| Front Door | Select your Front Door name associated with Azure AD B2C custom domain |
-| Domains | Select the Azure AD B2C custom domains you want to associate the WAF policy to|
-
-5. Select **Add**.
-
-6. Select **Review + create**, then select **Create**.
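
If you prefer to script the policy creation, a hedged Azure PowerShell sketch follows; the resource names are placeholders, the cmdlets come from the Az.FrontDoor module, and the managed rule set version may differ in your environment:

```powershell
# Hedged sketch: create a Front Door WAF policy in Detection mode with a
# managed Default Rule Set. Names below are placeholders.
$managedRule = New-AzFrontDoorWafManagedRuleObject -Type DefaultRuleSet -Version 1.0
New-AzFrontDoorWafPolicy -ResourceGroupName "myFrontDoorRG" `
    -Name "myB2CWafPolicy" `
    -EnabledState Enabled `
    -Mode Detection `
    -ManagedRule $managedRule
```

Associating the policy with your Front Door profile and custom domains is still done on the **Association** tab as described above.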
-
-### Change policy mode from detection to prevention
-
-When a WAF policy is created, the policy is in Detection mode by default. In Detection mode, WAF doesn't block any requests; instead, requests matching the WAF rules are logged in the WAF logs. For more information about WAF logging, see [Azure WAF monitoring and logging](../web-application-firewall/afds/waf-front-door-monitor.md).
-
-The following sample query shows all the requests that were blocked by the WAF policy in the past 24 hours. The details include the rule name, the request data, the action taken by the policy, and the policy mode.
-
-![Image shows the blocked requests](./media/partner-azure-web-application-firewall/blocked-requests-query.png)
-
-![Image shows the blocked requests details](./media/partner-azure-web-application-firewall/blocked-requests-details.png)
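
A similar query can be scripted; a hedged Azure PowerShell sketch, where the workspace ID is a placeholder and the field names are assumptions based on the commonly documented Front Door WAF log schema:

```powershell
# Hedged sketch: list WAF-blocked requests from the past 24 hours.
$query = @"
AzureDiagnostics
| where Category == "FrontdoorWebApplicationFirewallLog"
| where action_s == "Block"
| where TimeGenerated > ago(24h)
| project TimeGenerated, ruleName_s, requestUri_s, policyMode_s
"@
Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query
```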
-
-It's recommended that you let the WAF capture requests in Detection mode. Review the WAF logs to determine whether any rules in the policy are causing false positives. Then [exclude the WAF rules based on the WAF logs](../web-application-firewall/afds/waf-front-door-exclusion.md#define-exclusion-based-on-web-application-firewall-logs).
-
-To see WAF in action, select **Switch to prevention mode** to change from Detection to Prevention mode. All requests that match the rules defined in the Default Rule Set (DRS) are blocked and logged in the WAF logs.
-
-![Image shows the switch to prevention mode](./media/partner-azure-web-application-firewall/switch-to-prevention-mode.png)
-
-To switch back to Detection mode, use the **Switch to detection mode** option.
-
-![Image shows the switch to detection mode](./media/partner-azure-web-application-firewall/switch-to-detection-mode.png)
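
The mode switch can also be scripted; a hedged Azure PowerShell sketch using the placeholder names from the earlier example:

```powershell
# Hedged sketch: flip the policy between Prevention and Detection modes.
Update-AzFrontDoorWafPolicy -ResourceGroupName "myFrontDoorRG" -Name "myB2CWafPolicy" -Mode Prevention
# ...and back again when you need to re-tune rules:
Update-AzFrontDoorWafPolicy -ResourceGroupName "myFrontDoorRG" -Name "myB2CWafPolicy" -Mode Detection
```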
-
-## Next steps
-
-- [Azure WAF monitoring and logging](../web-application-firewall/afds/waf-front-door-monitor.md)
-- [WAF with Front Door service exclusion lists](../web-application-firewall/afds/waf-front-door-exclusion.md)
active-directory-b2c Partner Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-gallery.md
Previously updated : 01/11/2021 Last updated : 04/21/2022
Microsoft partners with the following ISVs for Web Application Firewall (WAF).
| ISV partner | Description and integration walkthroughs |
|:-|:--|
| ![Screenshot of Akamai logo](./medi) allows fine grained manipulation of traffic to protect and secure your identity infrastructure against malicious attacks. |
-| ![Screenshot of Azure WAF logo](./medi) provides centralized protection of your web applications from common exploits and vulnerabilities. |
| ![Screenshot of Cloudflare logo](./medi) is a WAF provider that helps organizations protect against malicious attacks that aim to exploit vulnerabilities such as SQLi and XSS. |
active-directory How To Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-certificate-based-authentication.md
description: Topic that shows how to configure Azure AD certificate-based authen
Previously updated : 02/18/2022 Last updated : 04/21/2022
The username binding policy helps determine the user in the tenant. By default,
An admin can override the default and create a custom mapping. Currently, we support two certificate fields, SAN (Subject Alternative Name) Principal Name and SAN RFC822Name, to map against the user object attributes userPrincipalName and onPremisesUserPrincipalName.
+>[!IMPORTANT]
+>If a username binding policy uses synced attributes, such as the onPremisesUserPrincipalName attribute of the user object, be aware that any user with administrative access to the Azure AD Connect server can change the sync attribute mapping, and in turn change the value of the synced attribute to any value they choose. The user doesn't need to be a cloud admin.
+ 1. Create the username binding by selecting one of the X.509 certificate fields to bind with one of the user attributes. The username binding order represents the priority level of the binding: the first one has the highest priority, and so on.

   :::image type="content" border="true" source="./media/how-to-certificate-based-authentication/username-binding-policy.png" alt-text="Screenshot of a username binding policy.":::
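The same binding policy can be set programmatically; a hedged Microsoft Graph PowerShell sketch, where the endpoint sat in the beta schema at the time of writing and the bindings shown are illustrative:

```powershell
# Hedged sketch: configure certificate username bindings through Microsoft Graph.
Connect-MgGraph -Scopes "Policy.ReadWrite.AuthenticationMethod"
$body = @{
    "@odata.type"           = "#microsoft.graph.x509CertificateAuthenticationMethodConfiguration"
    certificateUserBindings = @(
        @{ x509CertificateField = "PrincipalName"; userProperty = "userPrincipalName"; priority = 1 }
        @{ x509CertificateField = "RFC822Name";    userProperty = "userPrincipalName"; priority = 2 }
    )
}
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/x509Certificate" `
    -Body $body
```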
active-directory Howto Authentication Passwordless Security Key Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key-windows.md
Previously updated : 02/22/2021 Last updated : 04/20/2022
This document focuses on enabling FIDO2 security key based passwordless authenti
| [Hybrid Azure AD joined devices](../devices/concept-azure-ad-join-hybrid.md) require Windows 10 version 2004 or higher | | X |
| Fully patched Windows Server 2016/2019 Domain Controllers | | X |
| [Azure AD Connect](../hybrid/how-to-connect-install-roadmap.md#install-azure-ad-connect) version 1.4.32.0 or later | | X |
-| [Microsoft Intune](/intune/fundamentals/what-is-intune) (Optional) | X | X |
+| [Microsoft Endpoint Manager](/intune/fundamentals/what-is-intune) (Optional) | X | X |
| Provisioning package (Optional) | X | X |
| Group Policy (Optional) | | X |
Hybrid Azure AD joined devices must run Windows 10 version 2004 or newer.
Organizations may choose to use one or more of the following methods to enable the use of security keys for Windows sign-in based on their organization's requirements:

-- [Enable with Intune](#enable-with-intune)
-- [Targeted Intune deployment](#targeted-intune-deployment)
+- [Enable with Endpoint Manager](#enable-with-endpoint-manager)
+- [Targeted Endpoint Manager deployment](#targeted-endpoint-manager-deployment)
- [Enable with a provisioning package](#enable-with-a-provisioning-package)
- [Enable with Group Policy (Hybrid Azure AD joined devices only)](#enable-with-group-policy)
Organizations may choose to use one or more of the following methods to enable t
>
> Organizations with **Azure AD joined devices** must do this before their devices can authenticate to on-premises resources with FIDO2 security keys.
-### Enable with Intune
+### Enable with Endpoint Manager
-To enable the use of security keys using Intune, complete the following steps:
+To enable the use of security keys using Endpoint Manager, complete the following steps:
1. Sign in to the [Microsoft Endpoint Manager admin center](https://endpoint.microsoft.com).
1. Browse to **Devices** > **Enroll Devices** > **Windows enrollment** > **Windows Hello for Business**.
To enable the use of security keys using Intune, complete the following steps:
Configuration of security keys for sign-in isn't dependent on configuring Windows Hello for Business.
-### Targeted Intune deployment
+### Targeted Endpoint Manager deployment
-To target specific device groups to enable the credential provider, use the following custom settings via Intune:
+To target specific device groups to enable the credential provider, use the following custom settings via Endpoint Manager:
1. Sign in to the [Microsoft Endpoint Manager admin center](https://endpoint.microsoft.com).
-1. Browse to **Devices** > **Windows** > **Configuration Profiles** > **Create profile**.
+1. Browse to **Devices** > **Windows** > **Configuration profiles** > **Create profile**.
1. Configure the new profile with the following settings:
   - Platform: Windows 10 and later
- - Profile type: Template > Custom
+ - Profile type: Templates > Custom
   - Name: Security Keys for Windows Sign-In
   - Description: Enables FIDO Security Keys to be used during Windows Sign In
-1. Click **Add* and in **Add Row**, add the following Custom OMA-URI Settings:
+1. Click **Next** > **Add** and in **Add Row**, add the following Custom OMA-URI Settings:
   - Name: Turn on FIDO Security Keys for Windows Sign-In
   - Description: (Optional)
   - OMA-URI: ./Device/Vendor/MSFT/PassportForWork/SecurityKey/UseSecurityKeyForSignin
   - Data Type: Integer
   - Value: 1
-1. The remainder of the policy settings include assigning to specific users, devices, or groups. For more information, see [Assign user and device profiles in Microsoft Intune](/intune/device-profile-assign).
-
-![Intune custom device configuration policy creation](./media/howto-authentication-passwordless-security-key/intune-custom-profile.png)
+1. The remainder of the policy settings include assigning to specific users, devices, or groups. For more information, see [Assign user and device profiles in Microsoft Endpoint Manager](/intune/device-profile-assign).
### Enable with a provisioning package
active-directory Howto Mfa Adfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-adfs.md
If your organization is federated with Azure Active Directory, use Azure AD Multi-Factor Authentication or Active Directory Federation Services (AD FS) to secure resources that are accessed by Azure AD. Use the following procedures to secure Azure Active Directory resources with either Azure AD Multi-Factor Authentication or Active Directory Federation Services.

>[!NOTE]
->Set the domain setting [federatedIdpMfaBehavior](/graph/api/resources/federatedIdpMfaBehavior?view=graph-rest-beta&preserve-view=true) to `enforceMfaByFederatedIdp` (recommended) or **SupportsMFA** to `$True`. The **federatedIdpMfaBehavior** setting overrides **SupportsMFA** when both are set.
+>Set the domain setting [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-beta#federatedidpmfabehavior-values) to `enforceMfaByFederatedIdp` (recommended) or **SupportsMFA** to `$True`. The **federatedIdpMfaBehavior** setting overrides **SupportsMFA** when both are set.
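
For example, a hedged sketch of setting this behavior with the Microsoft Graph PowerShell SDK; the domain name and configuration ID are placeholders, and the property lived in the beta schema at the time of writing:

```powershell
# Hedged sketch: set federatedIdpMfaBehavior on a federated domain.
Connect-MgGraph -Scopes "Domain.ReadWrite.All"
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/domains/contoso.com/federationConfiguration/<configuration-id>" `
    -Body @{ federatedIdpMfaBehavior = "enforceMfaByFederatedIdp" }

# Legacy alternative using the MSOnline module:
# Set-MsolDomainFederationSettings -DomainName contoso.com -SupportsMfa $true
```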
## Secure Azure AD resources using AD FS
active-directory Cloudknox Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-faqs.md
Previously updated : 02/23/2022 Last updated : 04/20/2022
> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+> [!NOTE]
+> The CloudKnox Permissions Management (CloudKnox) PREVIEW is currently not available for tenants hosted in the European Union (EU).
+ This article answers frequently asked questions (FAQs) about CloudKnox Permissions Management (CloudKnox).
No, CloudKnox is a hosted cloud offering.
Yes, non-Azure customers can use our solution. CloudKnox is a multi-cloud solution, so even customers without an Azure subscription can benefit from it.
+## Is CloudKnox available for tenants hosted in the European Union (EU)?
+
+No, the CloudKnox Permissions Management (CloudKnox) PREVIEW is currently not available for tenants hosted in the European Union (EU).
## If I'm already using Azure AD Privileged Identity Management (PIM) for Azure, what value does CloudKnox provide?

CloudKnox complements Azure AD PIM. Azure AD PIM provides just-in-time access for admin roles in Azure (as well as Microsoft Online Services and apps that use groups), while CloudKnox allows multi-cloud discovery, remediation, and monitoring of privileged access across Azure, AWS, and GCP.
Customers only need to track the evolution of their Permission Creep Index to mo
## Can customers generate permissions usage reports?

Yes, CloudKnox has various types of system reports available that capture specific data sets. These reports allow customers to:
-- Make timely decisions
-- Analyze usage trends and system/user performance
-- Identify high-risk areas
+- Make timely decisions.
+- Analyze usage trends and system/user performance.
+- Identify high-risk areas.
For information about permissions usage reports, see [Generate and download the Permissions analytics report](cloudknox-product-permissions-analytics-reports.md).
active-directory Cloudknox Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-aws.md
Previously updated : 03/10/2022 Last updated : 04/20/2022
> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+> [!NOTE]
+> The CloudKnox Permissions Management (CloudKnox) PREVIEW is currently not available for tenants hosted in the European Union (EU).
+ This article describes how to onboard an Amazon Web Services (AWS) account on CloudKnox Permissions Management (CloudKnox).
active-directory Cloudknox Onboard Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-azure.md
Previously updated : 03/10/2022 Last updated : 04/20/2022
> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+> [!NOTE]
+> The CloudKnox Permissions Management (CloudKnox) PREVIEW is currently not available for tenants hosted in the European Union (EU).
+
This article describes how to onboard a Microsoft Azure subscription or subscriptions on CloudKnox Permissions Management (CloudKnox). Onboarding a subscription creates a new authorization system to represent the Azure subscription in CloudKnox.

> [!NOTE]
active-directory Cloudknox Onboard Enable Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-enable-tenant.md
Previously updated : 03/14/2022 Last updated : 04/20/2022
> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+> [!NOTE]
+> The CloudKnox Permissions Management (CloudKnox) PREVIEW is currently not available for tenants hosted in the European Union (EU).
+
This article describes how to enable CloudKnox Permissions Management (CloudKnox) in your organization. Once you've enabled CloudKnox, you can connect it to your Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) platforms.

> [!NOTE]
active-directory Cloudknox Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-onboard-gcp.md
Previously updated : 03/14/2022 Last updated : 04/20/2022
> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+
+> [!NOTE]
+> The CloudKnox Permissions Management (CloudKnox) PREVIEW is currently not available for tenants hosted in the European Union (EU).
+
This article describes how to onboard a Google Cloud Platform (GCP) project on CloudKnox Permissions Management (CloudKnox).

> [!NOTE]
active-directory Cloudknox Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-overview.md
Previously updated : 03/10/2022 Last updated : 04/20/2022
> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
+> [!NOTE]
+> The CloudKnox Permissions Management (CloudKnox) PREVIEW is currently not available for tenants hosted in the European Union (EU).
+ ## Overview CloudKnox Permissions Management (CloudKnox) is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities. For example, over-privileged workload and user identities, actions, and resources across multi-cloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP).
active-directory Cloudknox Training Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-training-videos.md
Previously updated : 03/14/2022 Last updated : 04/20/2022
To view a video on how to configure and onboard Amazon Web Services (AWS) accoun
To view a video on how to configure and onboard Google Cloud Platform (GCP) accounts in CloudKnox, select [Configure and onboard GCP accounts](https://www.youtube.com/watch?app=desktop&v=W3epcOaec28).
-<!-- ## Privilege on demand (POD) work flows
-
-- View a step-by-step video on the [privilege on demand (POD) work flow from the Just Enough Permissions (JEP) Controller](https://vimeo.com/461508166/3d88107f41).
-
-## Usage analytics
-
-- View a step-by-step video on [usage analytics](https://vimeo.com/461509556/b7bb392b83).
-
-## Just Enough Permissions (JEP) roles and policies
-
-- View a step-by-step video on [how to use and interpret data on the Role/Policy tab under the JEP Controller](https://vimeo.com/461510754/3dd31d85b7).
-
-## Attach or detach permissions for users, roles, and resources
-
-- View a step-by-step video on [how to attach and detach permissions for users, roles, and resources](https://vimeo.com/461512552/6f6a06e6c1).
-
-## Audit trails
-
-- View a step-by-step video on [how to use the audit trail](https://vimeo.com/461513290/b431a38b6c).
-
-## Alert triggers
-
-- View a step-by-step video on [how to create an alert trigger](https://vimeo.com/461881849/019c843cc6).
-
-## Group permissions
-
-- View a step-by-step video on [how to create group-based permissions](https://vimeo.com/462797947/d041de9157). -->

## Next steps
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
Token expiration and refresh are a standard mechanism in the industry. When a cl
Customers have expressed concerns about the lag between when conditions change for a user, and when policy changes are enforced. Azure AD has experimented with the "blunt object" approach of reduced token lifetimes but found they can degrade user experiences and reliability without eliminating risks.
-Timely response to policy violations or security issues really requires a "conversation" between the token issuer (Azure AD), and the relying party (enlightened app). This two-way conversation gives us two important capabilities. The relying party can see when properties change, like network location, and tell the token issuer. It also gives the token issuer a way to tell the relying party to stop respecting tokens for a given user because of account compromise, disablement, or other concerns. The mechanism for this conversation is continuous access evaluation (CAE). The goal is for response to be near real time, but latency of up to 15 minutes may be observed because of event propagation time.
+Timely response to policy violations or security issues really requires a "conversation" between the token issuer (Azure AD), and the relying party (enlightened app). This two-way conversation gives us two important capabilities. The relying party can see when properties change, like network location, and tell the token issuer. It also gives the token issuer a way to tell the relying party to stop respecting tokens for a given user because of account compromise, disablement, or other concerns. The mechanism for this conversation is continuous access evaluation (CAE). The goal for critical event evaluation is for the response to be near real time, but latency of up to 15 minutes may be observed because of event propagation time; however, IP location policy enforcement is instant.
The initial implementation of continuous access evaluation focuses on Exchange, Teams, and SharePoint Online.
active-directory V2 Permissions And Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-permissions-and-consent.md
Title: Microsoft identity platform scopes, permissions, & consent
description: Learn about authorization in the Microsoft identity platform endpoint, including scopes, permissions, and consent.
- Previously updated : 01/14/2022
+ Last updated : 04/21/2022
Applications that integrate with the Microsoft identity platform follow an autho
## Scopes and permissions
-The Microsoft identity platform implements the [OAuth 2.0](active-directory-v2-protocols.md) authorization protocol. OAuth 2.0 is a method through which a third-party app can access web-hosted resources on behalf of a user. Any web-hosted resource that integrates with the Microsoft identity platform has a resource identifier, or *application ID URI*.
+The Microsoft identity platform implements the [OAuth 2.0](active-directory-v2-protocols.md) authorization protocol. OAuth 2.0 is a method through which a third-party app can access web-hosted resources on behalf of a user. Any web-hosted resource that integrates with the Microsoft identity platform has a resource identifier, or *application ID URI*.
Here are some examples of Microsoft web-hosted resources:
If you request the OpenID Connect scopes and a token, you'll get a token to call
### openid
-If an app signs in by using [OpenID Connect](active-directory-v2-protocols.md), it must request the `openid` scope. The `openid` scope appears on the work account consent page as the **Sign you in** permission. On the personal Microsoft account consent page, it appears as the **View your profile and connect to apps and services using your Microsoft account** permission.
+If an app signs in by using [OpenID Connect](active-directory-v2-protocols.md), it must request the `openid` scope. The `openid` scope appears on the work account consent page as the **Sign you in** permission.
By using this permission, an app can receive a unique identifier for the user in the form of the `sub` claim. The permission also gives the app access to the UserInfo endpoint. The `openid` scope can be used at the Microsoft identity platform token endpoint to acquire ID tokens. The app can use these tokens for authentication.
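
For example, a minimal sign-in request that asks for the `openid` scope might look like the following sketch; the client ID, redirect URI, and nonce are placeholders:

```
https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
&response_type=id_token
&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F
&response_mode=fragment
&scope=openid
&nonce=678910
```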
active-directory Concept Fundamentals Block Legacy Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-fundamentals-block-legacy-authentication.md
Before you can begin enabling modern authentication on-premises, please be sure
Steps for enabling modern authentication can be found in the following articles: * [How to configure Exchange Server on-premises to use Hybrid Modern Authentication](/office365/enterprise/configure-exchange-server-for-hybrid-modern-authentication)
-* [How to use Modern Authentication (ADAL) with Skype for Business](/skypeforbusiness/manage/authentication/use-adal)
+* [How to use Modern Authentication with Skype for Business](/skypeforbusiness/manage/authentication/use-adal)
## Next steps

- [How to configure Exchange Server on-premises to use Hybrid Modern Authentication](/office365/enterprise/configure-exchange-server-for-hybrid-modern-authentication)
-- [How to use Modern Authentication (ADAL) with Skype for Business](/skypeforbusiness/manage/authentication/use-adal)
-- [Block legacy authentication](../conditional-access/block-legacy-authentication.md)
+- [How to use Modern Authentication with Skype for Business](/skypeforbusiness/manage/authentication/use-adal)
+- [Block legacy authentication](../conditional-access/block-legacy-authentication.md)
active-directory Service Accounts Introduction Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-introduction-azure.md
Previously updated : 3/1/2021 Last updated : 04/21/2022 -
-# Introduction to securing Azure service accounts
+# Securing cloud-based service accounts
There are three types of service accounts native to Azure Active Directory: managed identities, service principals, and user-based service accounts. Service accounts are a special type of account that is intended to represent a non-human entity such as an application, API, or other service. These entities operate within the security context provided by the service account.

## Types of Azure Active Directory service accounts

For services hosted in Azure, we recommend using a managed identity if possible, and a service principal if not. Managed identities can't be used for services hosted outside of Azure. In that case, we recommend a service principal. If you can use a managed identity or a service principal, do so. We recommend that you not use an Azure Active Directory user account as a service account. See the following table for a summary.
-
| Service hosting| Managed identity| Service principal| Azure user account |
| - | - | - | - |
For services hosted in Azure, we recommend using a managed identity if possible,
| Service is not hosted in Azure.| No| Yes. Recommended.| Not recommended. |
| Service is multi-tenant| No| Yes. Recommended.| No. |

## Managed identities

Managed identities are secure Azure Active Directory (Azure AD) identities created to provide identities for Azure resources. There are [two types of managed identities](../managed-identities-azure-resources/overview.md#managed-identity-types):
A service principal is the local representation of an application object in a si
There are two mechanisms for authentication using service principals: client certificates and client secrets. Certificates are more secure: use client certificates if possible. Unlike client secrets, client certificates cannot accidentally be embedded in code. For information on securing service principals, see [Securing service principals](service-accounts-principal.md).

## Next steps

For more information on securing Azure service accounts, see:

- [Securing managed identities](service-accounts-managed-identities.md)
active-directory Service Accounts On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-on-premises.md
Title: Introduction to Active Directory service accounts | Azure Active Directory
+ Title: Introduction to Active Directory service accounts
description: An introduction to the types of service accounts in Active Directory, and how to secure them.
Previously updated : 2/15/2021 Last updated : 04/21/2022 -
-# Introduction to Active Directory service accounts
+# Securing on-premises service accounts
A service has a primary security identity that determines the access rights for local and network resources. The security context for a Microsoft Win32 service is determined by the service account that's used to start the service. You use a service account to:

* Identify and authenticate a service.
A local user account (name format: *.\UserName*) exists only in the Security Acc
## Choose the right type of service account

| Criterion| gMSA| sMSA| Computer&nbsp;account| User&nbsp;account |
| - | - | - | - | - |
| App runs on a single server| Yes| Yes. Use a gMSA if possible.| Yes. Use an MSA if possible.| Yes. Use an MSA if possible. |
A local user account (name format: *.\UserName*) exists only in the Security Acc
| Requirement to restrict service account to single server| No| Yes| Yes. Use an sMSA if possible.| No |

### Use server logs and PowerShell to investigate

You can use server logs to determine which servers, and how many servers, an application is running on.
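
For example, a hedged PowerShell sketch that mines the Security event log on a candidate server for service-type logons (event ID 4624, logon type 5); the account name is a placeholder, and the property offsets follow the standard 4624 event layout:

```powershell
# Hedged sketch: find recent service logons for a suspected service account.
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4624 } -MaxEvents 5000 |
    Where-Object { $_.Properties[8].Value -eq 5 -and $_.Properties[5].Value -eq 'svc-myapp' } |
    Select-Object TimeCreated, @{ n = 'Account'; e = { $_.Properties[5].Value } }
```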
After you've found the service accounts in your on-premises environment, documen
* **Password security**: For user and local computer accounts, where the password is stored. Ensure that passwords are kept secure, and document who has access. Consider using [Privileged Identity Management](../privileged-identity-management/pim-configure.md) to secure stored passwords.
-
- ## Next steps To learn more about securing service accounts, see the following articles:
To learn more about securing service accounts, see the following articles:
* [Secure computer accounts](service-accounts-computer.md)
* [Secure user accounts](service-accounts-user-on-premises.md)
* [Govern on-premises service accounts](service-accounts-govern-on-premises.md)
active-directory Howto Identity Protection Graph Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-graph-api.md
# Azure Active Directory Identity Protection and the Microsoft Graph PowerShell SDK
-Microsoft Graph is the Microsoft unified API endpoint and the home of [Azure Active Directory Identity Protection](./overview-identity-protection.md) APIs. This article will show you how to use the [Microsoft Graph PowerShell SDK](/graph/powershell/get-started) to get risky user details using PowerShell. Organizations that want to query the Microsoft Graph APIs directly can use the article, [Tutorial: Identify and remediate risks using Microsoft Graph APIs](/graph/tutorial-riskdetection-api) to begin that journey.
+Microsoft Graph is the Microsoft unified API endpoint and the home of [Azure Active Directory Identity Protection](./overview-identity-protection.md) APIs. This article will show you how to use the [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/get-started) to get risky user details using PowerShell. Organizations that want to query the Microsoft Graph APIs directly can use the article, [Tutorial: Identify and remediate risks using Microsoft Graph APIs](/graph/tutorial-riskdetection-api) to begin that journey.
## Connect to Microsoft Graph
Get-MgRiskyUser -All
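# To narrow results instead of listing every risky user, the cmdlet also
# accepts an OData filter; a hedged sketch (property names follow the
# Microsoft Graph riskyUser resource):
Get-MgRiskyUser -Filter "riskLevel eq 'high'" |
    Select-Object UserDisplayName, RiskLevel, RiskState, RiskLastUpdatedDateTime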
## Next steps

-- [Get started with the Microsoft Graph PowerShell SDK](/graph/powershell/get-started)
+- [Get started with the Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/get-started)
- [Tutorial: Identify and remediate risks using Microsoft Graph APIs](/graph/tutorial-riskdetection-api) - [Overview of Microsoft Graph](https://developer.microsoft.com/graph/docs) - [Get access without a user](/graph/auth-v2-service)
active-directory Grant Consent Single User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-consent-single-user.md
Before you start, record the following details from the Azure portal:
- The API permissions that are required by the client application. Find out the app ID of the API and the permission IDs or claim values.
- The username or object ID for the user on whose behalf access will be granted.
-For this example, we'll use [Microsoft Graph PowerShell](/graph/powershell/get-started) to grant consent on behalf of a single user. The client application is [Microsoft Graph Explorer](https://aka.ms/ge), and we grant access to the Microsoft Graph API.
+For this example, we'll use [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started) to grant consent on behalf of a single user. The client application is [Microsoft Graph Explorer](https://aka.ms/ge), and we grant access to the Microsoft Graph API.
```powershell
# The app for which consent is being granted. In this example, we're granting access
active-directory Pim Resource Roles Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md
na Previously updated : 02/02/2022 Last updated : 04/18/2022
Follow these steps to make a user eligible for an Azure resource role.
1. To specify a specific assignment duration, change the start and end dates and times.
+1. If the role has been defined with actions that permit assignments with conditions, you can select **Add condition** to add a condition based on the principal and resource attributes that are part of the assignment.
+
+ ![New assignment - Conditions](./media/pim-resource-roles-assign-roles/new-assignment-conditions.png)
+
+ Conditions can be entered in the expression builder.
+
+ ![New assignment - Condition built from an expression](./media/pim-resource-roles-assign-roles/new-assignment-condition-expression.png)
+ 1. When finished, select **Assign**. 1. After the new role assignment is created, a status notification is displayed.
active-directory Atlassian Cloud Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atlassian-cloud-provisioning-tutorial.md
Once you've configured provisioning, use the following resources to monitor your
## Change log * 06/15/2020 - Added support for batch PATCH for groups.
+* 04/21/2022 - Added support for **Schema Discovery**.
## Additional resources
active-directory Gong Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gong-provisioning-tutorial.md
Once you've configured provisioning, use the following resources to monitor your
## Change Log * 03/23/2022 - Added support for **Group Provisioning**.
-* 04/06/2022 - **emails[type eq "work"].value** is made a required attribute.
+* 04/21/2022 - **emails[type eq "work"].value** has been marked as a required attribute.
## More resources
active-directory Nordpass Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/nordpass-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure NordPass for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to NordPass.
+
+documentationcenter: ''
+
+writer: Thwimmer
++
+ms.assetid: ad92f598-f6f6-4ee4-8de4-a488d4e07126
+++
+ms.devlang: na
+ Last updated : 04/21/2022
+# Tutorial: Configure NordPass for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both NordPass and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [NordPass](https://nordpass.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in NordPass.
+> * Remove users in NordPass when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and NordPass.
+> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to NordPass.
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in NordPass with Admin permissions.
++
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and NordPass](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure NordPass to support provisioning with Azure AD
+1. Log in to [NordPass Admin Panel](https://panel.nordpass.com).
+1. Navigate to **Settings > User provisioning** and select **Get Credentials**.
+1. In the new window, you will see admin credentials:
+
+ ![NordPass Admin Credentials](media/nordpass-provisioning-tutorial/nordpass-admin-credentials.png)
+
+1. Copy and save the **Tenant URL** and **Secret Token** that you see in the new window. These values will be entered in the **Tenant URL** and **Secret Token** fields in the Provisioning tab of your NordPass application in the Azure portal.
+
+## Step 3. Add NordPass from the Azure AD application gallery
+
+Add NordPass from the Azure AD application gallery to start managing provisioning to NordPass. If you have previously set up NordPass for SSO, you can use the same application. However, it is recommended that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, based on attributes of the user or group, or both. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles, as sketched below.
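
A hedged Microsoft Graph PowerShell sketch for adding a role to the manifest; the role name and value are illustrative, and `-AppRoles` replaces the whole collection, so existing roles must be included too:

```powershell
# Hedged sketch: append a custom app role to the application manifest.
Connect-MgGraph -Scopes "Application.ReadWrite.All"
$app = Get-MgApplication -ApplicationId "<application-object-id>"
$newRole = @{
    allowedMemberTypes = @("User")
    description        = "Illustrative custom role for NordPass provisioning"
    displayName        = "NordPass Admin"
    id                 = [guid]::NewGuid()
    isEnabled          = $true
    value              = "nordpass.admin"
}
# -AppRoles replaces the existing collection, so keep the current roles.
Update-MgApplication -ApplicationId $app.Id -AppRoles ($app.AppRoles + $newRole)
```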
++
+## Step 5. Configure automatic user provisioning to NordPass
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in NordPass based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for NordPass in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **NordPass**.
+
+ ![The NordPass link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your NordPass **Tenant URL** and the corresponding **Secret Token** retrieved earlier. Click **Test Connection** to ensure Azure AD can connect to NordPass. If the connection fails, ensure your NordPass account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to NordPass**.
+
+1. Review the user attributes that are synchronized from Azure AD to NordPass in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in NordPass for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the NordPass API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by NordPass|
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||&check;
+ |externalId|String||&check;
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for NordPass, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to NordPass by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Openidoauth Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/openidoauth-tutorial.md
Last updated 02/02/2022
-+
# Configure an OpenID Connect OAuth application from Azure AD app gallery
As an administrator, you can also consent to an application's delegated permissi
![Grant Permissions button](./media/openidoauth-tutorial/grantpermission.png)

> [!NOTE]
-> Granting explicit consent by using the **Grant admin consent** button is now required for single-page applications (SPAs) that use ADAL.js. Otherwise, the application fails when the access token is requested.
+> Granting explicit consent by using the **Grant admin consent** button is now required for single-page applications (SPAs) that use MSAL.js. Otherwise, the application fails when the access token is requested.
App-only permissions always require a tenant administrator's consent. If your application requests an app-only permission and a user tries to sign in to the application, an error message appears. The message says the user isn't able to consent.
active-directory Visibly Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/visibly-provisioning-tutorial.md
# Tutorial: Configure Visibly for automatic user provisioning
-This tutorial describes the steps you need to perform in both Visibly and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Visibly](https://www.visibly.io/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to perform in both Visibly and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Visibly](https://visibly.io/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities Supported
The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
-* A [Visibly](https://www.visibly.io/) tenant
+* A [Visibly](https://visibly.io/) tenant
## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
Once you've configured provisioning, use the following resources to monitor your
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-configuration.md
Additionally, not all VM images support Gen2; on AKS, Gen2 VMs will use the new [A
## Ephemeral OS
-By default, Azure automatically replicates the operating system disk for an virtual machine to Azure storage to avoid data loss should the VM need to be relocated to another host. However, since containers aren't designed to have local state persisted, this behavior offers limited value while providing some drawbacks, including slower node provisioning and higher read/write latency.
+By default, Azure automatically replicates the operating system disk for a virtual machine to Azure storage to avoid data loss should the VM need to be relocated to another host. However, since containers aren't designed to have local state persisted, this behavior offers limited value while providing some drawbacks, including slower node provisioning and higher read/write latency.
By contrast, ephemeral OS disks are stored only on the host machine, just like a temporary disk. This provides lower read/write latency, along with faster node scaling and cluster upgrades.
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-custom-container.md
The only exception is the `C:\home\LogFiles` directory, which is used to store t
::: zone pivot="container-linux"
-You can use the */home* directory in your custom container file system to persist files across restarts and share them across instances. The `/home` directory is provided to enable your custom container to access persistent storage.
+You can use the */home* directory in your custom container file system to persist files across restarts and share them across instances. The `/home` directory is provided to enable your custom container to access persistent storage. Saving data within `/home` will contribute to the [storage space quota](https://docs.microsoft.com/azure/azure-resource-manager/management/azure-subscription-service-limits#app-service-limits) included with your App Service Plan.
When persistent storage is disabled, then writes to the `/home` directory are not persisted across app restarts or across multiple instances. When persistent storage is enabled, all writes to the `/home` directory are persisted and can be accessed by all instances of a scaled-out app. Additionally, any contents inside the `/home` directory of the container are overwritten by any existing files already present on the persistent storage when the container starts. The only exception is the `/home/LogFiles` directory, which is used to store the container and application logs. This folder will always persist upon app restarts if [application logging is enabled](troubleshoot-diagnostic-logs.md#enable-application-logging-linuxcontainer) with the **File System** option, independently of the persistent storage being enabled or disabled. In other words, enabling or disabling the persistent storage will not affect the application logging behavior.
+It is recommended to write data to `/home` or a [mounted Azure storage path](configure-connect-to-azure-storage.md?tabs=portal&pivots=container-linux). Data written outside these paths will not persist across restarts and will be saved to platform-managed host disk space that is separate from the App Service plan's file storage quota.
+
::: zone-end

By default, persistent storage is disabled on custom containers and the setting is exposed in the app settings. To enable it, set the `WEBSITES_ENABLE_APP_SERVICE_STORAGE` app setting value to `true` via the [Cloud Shell](https://shell.azure.com). In Bash:
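
```bash
# A minimal sketch; replace <app-name> and <group-name> with your own values.
az webapp config appsettings set --resource-group <group-name> --name <app-name> \
  --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=true
```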
application-gateway Migrate V1 V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/migrate-v1-v2.md
An Azure PowerShell script is available that does the following:
* If you have FIPS mode enabled for your V1 gateway, it won't be migrated to your new v2 gateway. FIPS mode isn't supported in v2. * v2 doesn't support IPv6, so IPv6 enabled v1 gateways aren't migrated. If you run the script, it may not complete. * If the v1 gateway has only a private IP address, the script creates a public IP address and a private IP address for the new v2 gateway. v2 gateways currently don't support only private IP addresses.
-* Headers with names containing anything other than letters, digits, hyphens and underscores are not passed to your application. This only applies to header names, not header values. This is a breaking change from v1.
+* Headers with names containing anything other than letters, digits, and hyphens are not passed to your application. This only applies to header names, not header values. This is a breaking change from v1.
* NTLM and Kerberos authentication is not supported by Application Gateway v2. The script is unable to detect if the gateway is serving this type of traffic and may pose as a breaking change from v1 to v2 gateways if run. ## Download the script
azure-arc Tutorial Akv Secrets Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md
The Azure Key Vault Provider for Secrets Store CSI Driver allows for the integra
- Google Kubernetes Engine - OpenShift Kubernetes Distribution - Canonical Kubernetes Distribution
+ - Elastic Kubernetes Service
[!INCLUDE [preview features note](./includes/preview/preview-callout.md)]
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
The tenant ID of the app registration used to access the vault where keys are st
The URI of a key vault instance used to store keys. Supported in version 4.x and later versions of the Functions runtime. This is the recommended setting for using a key vault instance for key storage. Requires that `AzureWebJobsSecretStorageType` be set to `keyvault`.
-The `AzureWebJobsSecretStorageKeyVaultTenantId` value should be the full value of **Vault URI** displayed in the **Key Vault overview** tab, including `https://`.
+The `AzureWebJobsSecretStorageKeyVaultUri` value should be the full value of **Vault URI** displayed in the **Key Vault overview** tab, including `https://`.
The vault must have an access policy corresponding to the system-assigned managed identity of the hosting resource. The access policy should grant the identity the following secret permissions: `Get`,`Set`, `List`, and `Delete`. <br/>When running locally, the developer identity is used, and settings must be in the [local.settings.json file](functions-develop-local.md#local-settings-file).
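
For example, a hedged Azure PowerShell sketch that points a function app at a vault for secret storage; the app, resource group, and vault names are placeholders:

```powershell
# Hedged sketch: store host keys in Key Vault instead of blob storage.
Update-AzFunctionAppSetting -Name "myfunctionapp" -ResourceGroupName "myRG" -AppSetting @{
    AzureWebJobsSecretStorageType        = "keyvault"
    AzureWebJobsSecretStorageKeyVaultUri = "https://myvault.vault.azure.net/"
}
```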
azure-netapp-files Cross Region Replication Manage Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-manage-disaster-recovery.md
na Previously updated : 09/29/2021 Last updated : 04/21/2022

# Manage disaster recovery using cross-region replication
When you need to activate the destination volume (for example, when you want to fail over to the destination region), you need to break replication peering and then mount the destination volume.
## <a name="resync-replication"></a>Resync volumes after disaster recovery
-After disaster recovery, you can reactivate the source volume by performing a resync operation. The resync operation reverses the replication process and synchronizes data from the destination volume to the source volume.
+After disaster recovery, you can reactivate the source volume by performing a reverse resync operation. The reverse resync operation reverses the replication process and synchronizes data from the destination volume to the source volume.
> [!IMPORTANT]
-> The resync operation synchronizes the source and destination volumes by incrementally updating the source volume with the latest updates from the destination volume, based on the last available common snapshots. This operation avoids the need to synchronize the entire volume in most cases because only changes to the destination volume *after* the most recent common snapshot will have to be replicated to the source volume.
+> The reverse resync operation synchronizes the source and destination volumes by incrementally updating the source volume with the latest updates from the destination volume, based on the last available common snapshots. This operation avoids the need to synchronize the entire volume in most cases because only changes to the destination volume *after* the most recent common snapshot will have to be replicated to the source volume.
>
-> The resync operation overwrites any newer data (than the most common snapshot) in the source volume with the updated destination volume data. The UI warns you about the potential for data loss. You will be prompted to confirm the resync action before the operation starts.
+> The reverse resync operation overwrites any data in the source volume that is newer than the most recent common snapshot with the updated destination volume data. The UI warns you about the potential for data loss. You will be prompted to confirm the resync action before the operation starts.
>
> If the source volume did not survive the disaster and therefore no common snapshot exists, all data in the destination will be resynchronized to a newly created source volume.
-1. To resync replication, select the *source* volume. Click **Replication** under Storage Service. Then click **Resync**.
+1. To reverse resync replication, select the *source* volume. Click **Replication** under Storage Service. Then click **Reverse Resync**.
-2. Type **Yes** when prompted and click **Resync**.
+2. Type **Yes** when prompted and click **OK**.
![Resync replication](../media/azure-netapp-files/cross-region-replication-resync-replication.png)

3. Monitor the source volume health status by following steps in [Display health status of replication relationship](cross-region-replication-display-health-status.md).
- When the source volume health status shows the following values, the resync operation is complete, and changes made at the destination volume are now captured on the source volume:
+ When the source volume health status shows the following values, the reverse resync operation is complete, and changes made at the destination volume are now captured on the source volume:
* Mirrored State: *Mirrored*
* Transfer State: *Idle*
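The portal steps above also have a command-line counterpart; a sketch, assuming the `az netappfiles` command group and placeholder resource names (verify the exact subcommand against your CLI version):

```bash
# Running the resume operation on the *source* volume reverses the
# replication direction (reverse resync); names are placeholders.
az netappfiles volume replication resume \
  --resource-group <resource-group> \
  --account-name <netapp-account> \
  --pool-name <capacity-pool> \
  --volume-name <source-volume>
```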
After the resync operation from destination to source is complete, you need to break the replication again and remount the source volume so that clients can access it.
d. Type **Yes** when prompted and click the **Break** button.

2. Resync the source volume with the destination volume:
- a. Select the *destination* volume. Click **Replication** under Storage Service. Then click **Resync**.
- b. Type **Yes** when prompted and click the **Resync** button.
+ a. Select the *destination* volume. Click **Replication** under Storage Service. Then click **Reverse Resync**.
+ b. Type **Yes** when prompted and click the **OK** button.
3. Remount the source volume by following the steps in [Mount a volume for Windows or Linux virtual machines](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md). This step enables a client to access the source volume.
azure-portal Get Subscription Tenant Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/get-subscription-tenant-id.md
+
+ Title: Get subscription and tenant IDs in the Azure portal
+description: Learn how to find the IDs for your Azure subscription and Azure Active Directory tenant in the Azure portal.
Last updated : 04/21/2022
+# Get subscription and tenant IDs in the Azure portal
+
+A tenant is an [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) entity that typically encompasses an organization. Tenants can have one or more subscriptions, which are agreements with Microsoft to use cloud services, including Azure. Every Azure resource is associated with a subscription.
+
+Each subscription has an ID associated with it, as does the tenant to which a subscription belongs. As you perform different tasks, you may need the ID for a subscription or tenant. You can find these values in the Azure portal.
+
+## Find your Azure subscription
+
+Follow these steps to retrieve the ID for a subscription in the Azure portal.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Under the Azure services heading, select **Subscriptions**. If you don't see **Subscriptions** here, use the search box to find it.
+1. Find the **Subscription ID** for the subscription shown in the second column. If no subscriptions appear, or you don't see the right one, you may need to [switch directories](set-preferences.md#switch-and-manage-directories) to show the subscriptions from a different Azure AD tenant.
+1. Copy the **Subscription ID**. You can paste this value into a text document or other location.
+
+> [!TIP]
+> You can also list your subscriptions and view their IDs programmatically by using [Get-AzSubscription](/powershell/module/az.accounts/get-azsubscription?view=latest) (Azure PowerShell) or [az account list](/cli/azure/account?view=azure-cli-latest) (Azure CLI).
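For example, with the Azure CLI (assuming you're already signed in with `az login`):

```bash
# List all subscriptions visible to your account, with their IDs.
az account list --output table

# Print only the ID of the currently selected subscription.
az account show --query id --output tsv
```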
+
+## Find your Azure AD tenant
+
+Follow these steps to retrieve the ID for an Azure AD tenant in the Azure portal.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Confirm that you are signed into the tenant for which you want to retrieve the ID. If not, [switch directories](set-preferences.md#switch-and-manage-directories) so that you're working in the right tenant.
+1. Under the Azure services heading, select **Azure Active Directory**. If you don't see **Azure Active Directory** here, use the search box to find it.
+1. Find the **Tenant ID** in the **Basic information** section of the **Overview** screen.
+1. Copy the **Tenant ID**. You can paste this value into a text document or other location.
+
+> [!TIP]
+> You can also find your tenant programmatically by using [Azure PowerShell](../active-directory/fundamentals/active-directory-how-to-find-tenant.md#find-tenant-id-with-powershell) or [Azure CLI](../active-directory/fundamentals/active-directory-how-to-find-tenant.md#find-tenant-id-with-cli).
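For example, the Azure CLI can report the tenant of the currently selected subscription:

```bash
# Print the tenant ID associated with the current subscription.
az account show --query tenantId --output tsv
```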
+
+## Next steps
+
+- Learn more about [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md).
+- Learn how to [manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
+- Learn how to [manage Azure portal settings and preferences](set-preferences.md).
azure-resource-manager Extension Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/extension-resource-types.md
Title: Extension resource types description: Lists the Azure resource types that are used to extend the capabilities of other resource types. Previously updated : 04/19/2022 Last updated : 04/20/2022 # Resource types that extend capabilities of other resources
azure-resource-manager Resources Without Resource Group Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resources-without-resource-group-limit.md
Title: Resources without 800 count limit description: Lists the Azure resource types that can have more than 800 instances in a resource group. Previously updated : 04/19/2022 Last updated : 04/20/2022 # Resources not limited to 800 instances per resource group
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md
Title: Tag support for resources description: Shows which Azure resource types support tags. Provides details for all Azure services. Previously updated : 02/18/2022 Last updated : 04/20/2022 # Tag support for Azure resources+ This article describes whether a resource type supports [tags](tag-resources.md). The column labeled **Supports tags** indicates whether the resource type has a property for the tag. The column labeled **Tag in cost report** indicates whether that resource type passes the tag to the cost report. You can view costs by tags in the [Cost Management cost analysis](../../cost-management-billing/costs/group-filter.md) and the [Azure billing invoice and daily usage data](../../cost-management-billing/manage/download-azure-invoice-daily-usage-date.md). To get the same data as a file of comma-separated values, download [tag-support.csv](https://github.com/tfitzmac/resource-capabilities/blob/master/tag-support.csv).
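For a resource type that supports tags, applying one looks like the following sketch with the Azure CLI (the resource ID is a placeholder, and `az resource tag` replaces any existing tags unless `--is-incremental` is used):

```bash
# Set tags on an existing resource; the ID below is a placeholder.
az resource tag --tags Dept=Finance Environment=Production \
  --ids "/subscriptions/<sub-id>/resourceGroups/<group>/providers/Microsoft.Web/sites/<site-name>"
```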
-Jump to a resource provider namespace:
-> [!div class="op_single_selector"]
-> - [Microsoft.AAD](#microsoftaad)
-> - [Microsoft.Addons](#microsoftaddons)
-> - [Microsoft.ADHybridHealthService](#microsoftadhybridhealthservice)
-> - [Microsoft.Advisor](#microsoftadvisor)
-> - [Microsoft.AgFoodPlatform](#microsoftagfoodplatform)
-> - [Microsoft.AlertsManagement](#microsoftalertsmanagement)
-> - [Microsoft.AnalysisServices](#microsoftanalysisservices)
-> - [Microsoft.AnyBuild](#microsoftanybuild)
-> - [Microsoft.ApiManagement](#microsoftapimanagement)
-> - [Microsoft.AppAssessment](#microsoftappassessment)
-> - [Microsoft.AppConfiguration](#microsoftappconfiguration)
-> - [Microsoft.AppPlatform](#microsoftappplatform)
-> - [Microsoft.Attestation](#microsoftattestation)
-> - [Microsoft.Authorization](#microsoftauthorization)
-> - [Microsoft.Automanage](#microsoftautomanage)
-> - [Microsoft.Automation](#microsoftautomation)
-> - [Microsoft.AVS](#microsoftavs)
-> - [Microsoft.Azure.Geneva](#microsoftazuregeneva)
-> - [Microsoft.AzureActiveDirectory](#microsoftazureactivedirectory)
-> - [Microsoft.AzureArcData](#microsoftazurearcdata)
-> - [Microsoft.AzureCIS](#microsoftazurecis)
-> - [Microsoft.AzureData](#microsoftazuredata)
-> - [Microsoft.AzurePercept](#microsoftazurepercept)
-> - [Microsoft.AzureSphere](#microsoftazuresphere)
-> - [Microsoft.AzureStack](#microsoftazurestack)
-> - [Microsoft.AzureStackHCI](#microsoftazurestackhci)
-> - [Microsoft.BackupSolutions](#microsoftbackupsolutions)
-> - [Microsoft.BareMetalInfrastructure](#microsoftbaremetalinfrastructure)
-> - [Microsoft.Batch](#microsoftbatch)
-> - [Microsoft.Billing](#microsoftbilling)
-> - [Microsoft.BillingBenefits](#microsoftbillingbenefits)
-> - [Microsoft.Blockchain](#microsoftblockchain)
-> - [Microsoft.BlockchainTokens](#microsoftblockchaintokens)
-> - [Microsoft.Blueprint](#microsoftblueprint)
-> - [Microsoft.BotService](#microsoftbotservice)
-> - [Microsoft.Cache](#microsoftcache)
-> - [Microsoft.Capacity](#microsoftcapacity)
-> - [Microsoft.Cascade](#microsoftcascade)
-> - [Microsoft.Cdn](#microsoftcdn)
-> - [Microsoft.CertificateRegistration](#microsoftcertificateregistration)
-> - [Microsoft.ChangeAnalysis](#microsoftchangeanalysis)
-> - [Microsoft.Chaos](#microsoftchaos)
-> - [Microsoft.ClassicCompute](#microsoftclassiccompute)
-> - [Microsoft.ClassicInfrastructureMigrate](#microsoftclassicinfrastructuremigrate)
-> - [Microsoft.ClassicNetwork](#microsoftclassicnetwork)
-> - [Microsoft.ClassicStorage](#microsoftclassicstorage)
-> - [Microsoft.ClusterStor](#microsoftclusterstor)
-> - [Microsoft.CodeSigning](#microsoftcodesigning)
-> - [Microsoft.Codespaces](#microsoftcodespaces)
-> - [Microsoft.CognitiveServices](#microsoftcognitiveservices)
-> - [Microsoft.Commerce](#microsoftcommerce)
-> - [Microsoft.Compute](#microsoftcompute)
-> - [Microsoft.Communication](#microsoftcommunication)
-> - [Microsoft.ConfidentialLedger](#microsoftconfidentialledger)
-> - [Microsoft.ConnectedCache](#microsoftconnectedcache)
-> - [Microsoft.ConnectedVehicle](#microsoftconnectedvehicle)
-> - [Microsoft.ConnectedVMwarevSphere](#microsoftconnectedvmwarevsphere)
-> - [Microsoft.Consumption](#microsoftconsumption)
-> - [Microsoft.ContainerInstance](#microsoftcontainerinstance)
-> - [Microsoft.ContainerRegistry](#microsoftcontainerregistry)
-> - [Microsoft.ContainerService](#microsoftcontainerservice)
-> - [Microsoft.CostManagement](#microsoftcostmanagement)
-> - [Microsoft.CustomerLockbox](#microsoftcustomerlockbox)
-> - [Microsoft.CustomProviders](#microsoftcustomproviders)
-> - [Microsoft.D365CustomerInsights](#microsoftd365customerinsights)
-> - [Microsoft.Dashboard](#microsoftdashboard)
-> - [Microsoft.DataBox](#microsoftdatabox)
-> - [Microsoft.DataBoxEdge](#microsoftdataboxedge)
-> - [Microsoft.Databricks](#microsoftdatabricks)
-> - [Microsoft.DataCatalog](#microsoftdatacatalog)
-> - [Microsoft.DataFactory](#microsoftdatafactory)
-> - [Microsoft.DataLakeAnalytics](#microsoftdatalakeanalytics)
-> - [Microsoft.DataLakeStore](#microsoftdatalakestore)
-> - [Microsoft.DataMigration](#microsoftdatamigration)
-> - [Microsoft.DataProtection](#microsoftdataprotection)
-> - [Microsoft.DataShare](#microsoftdatashare)
-> - [Microsoft.DBforMariaDB](#microsoftdbformariadb)
-> - [Microsoft.DBforMySQL](#microsoftdbformysql)
-> - [Microsoft.DBforPostgreSQL](#microsoftdbforpostgresql)
-> - [Microsoft.DelegatedNetwork](#microsoftdelegatednetwork)
-> - [Microsoft.DeploymentManager](#microsoftdeploymentmanager)
-> - [Microsoft.DesktopVirtualization](#microsoftdesktopvirtualization)
-> - [Microsoft.DevAI](#microsoftdevai)
-> - [Microsoft.Devices](#microsoftdevices)
-> - [Microsoft.DeviceUpdate](#microsoftdeviceupdate)
-> - [Microsoft.DevOps](#microsoftdevops)
-> - [Microsoft.DevSpaces](#microsoftdevspaces)
-> - [Microsoft.DevTestLab](#microsoftdevtestlab)
-> - [Microsoft.Diagnostics](#microsoftdiagnostics)
-> - [Microsoft.DigitalTwins](#microsoftdigitaltwins)
-> - [Microsoft.DocumentDB](#microsoftdocumentdb)
-> - [Microsoft.DomainRegistration](#microsoftdomainregistration)
-> - [Microsoft.DynamicsLcs](#microsoftdynamicslcs)
-> - [Microsoft.EdgeOrder](#microsoftedgeorder)
-> - [Microsoft.EnterpriseKnowledgeGraph](#microsoftenterpriseknowledgegraph)
-> - [Microsoft.EventGrid](#microsofteventgrid)
-> - [Microsoft.EventHub](#microsofteventhub)
-> - [Microsoft.Experimentation](#microsoftexperimentation)
-> - [Microsoft.Falcon](#microsoftfalcon)
-> - [Microsoft.Features](#microsoftfeatures)
-> - [Microsoft.Fidalgo](#microsoftfidalgo)
-> - [Microsoft.FluidRelay](#microsoftfluidrelay)
-> - [Microsoft.Gallery](#microsoftgallery)
-> - [Microsoft.Genomics](#microsoftgenomics)
-> - [Microsoft.Graph](#microsoftgraph)
-> - [Microsoft.GuestConfiguration](#microsoftguestconfiguration)
-> - [Microsoft.HanaOnAzure](#microsofthanaonazure)
-> - [Microsoft.HardwareSecurityModules](#microsofthardwaresecuritymodules)
-> - [Microsoft.HDInsight](#microsofthdinsight)
-> - [Microsoft.HealthBot](#microsofthealthbot)
-> - [Microsoft.HealthcareApis](#microsofthealthcareapis)
-> - [Microsoft.HpcWorkbench](#microsofthpcworkbench)
-> - [Microsoft.HybridCompute](#microsofthybridcompute)
-> - [Microsoft.HybridConnectivity](#microsofthybridconnectivity)
-> - [Microsoft.HybridContainerService](#microsofthybridcontainerservice)
-> - [Microsoft.HybridData](#microsofthybriddata)
-> - [Microsoft.HybridNetwork](#microsofthybridnetwork)
-> - [Microsoft.Hydra](#microsofthydra)
-> - [Microsoft.ImportExport](#microsoftimportexport)
-> - [Microsoft.Insights](#microsoftinsights)
-> - [Microsoft.Intune](#microsoftintune)
-> - [Microsoft.IoTCentral](#microsoftiotcentral)
-> - [Microsoft.IoTFirmwareDefense](#microsoftiotfirmwaredefense)
-> - [Microsoft.IoTSecurity](#microsoftiotsecurity)
-> - [Microsoft.IoTSpaces](#microsoftiotspaces)
-> - [Microsoft.KeyVault](#microsoftkeyvault)
-> - [Microsoft.Kubernetes](#microsoftkubernetes)
-> - [Microsoft.KubernetesConfiguration](#microsoftkubernetesconfiguration)
-> - [Microsoft.Kusto](#microsoftkusto)
-> - [Microsoft.LabServices](#microsoftlabservices)
-> - [Microsoft.LocationServices](#microsoftlocationservices)
-> - [Microsoft.Logic](#microsoftlogic)
-> - [Microsoft.MachineLearning](#microsoftmachinelearning)
-> - [Microsoft.MachineLearningServices](#microsoftmachinelearningservices)
-> - [Microsoft.Maintenance](#microsoftmaintenance)
-> - [Microsoft.ManagedIdentity](#microsoftmanagedidentity)
-> - [Microsoft.ManagedServices](#microsoftmanagedservices)
-> - [Microsoft.Management](#microsoftmanagement)
-> - [Microsoft.Maps](#microsoftmaps)
-> - [Microsoft.Marketplace](#microsoftmarketplace)
-> - [Microsoft.MarketplaceApps](#microsoftmarketplaceapps)
-> - [Microsoft.MarketplaceNotifications](#microsoftmarketplacenotifications)
-> - [Microsoft.MarketplaceOrdering](#microsoftmarketplaceordering)
-> - [Microsoft.Media](#microsoftmedia)
-> - [Microsoft.Migrate](#microsoftmigrate)
-> - [Microsoft.MixedReality](#microsoftmixedreality)
-> - [Microsoft.MobileNetwork](#microsoftmobilenetwork)
-> - [Microsoft.Monitor](#microsoftmonitor)
-> - [Microsoft.NetApp](#microsoftnetapp)
-> - [Microsoft.NetworkFunction](#microsoftnetworkfunction)
-> - [Microsoft.Network](#microsoftnetwork)
-> - [Microsoft.Notebooks](#microsoftnotebooks)
-> - [Microsoft.NotificationHubs](#microsoftnotificationhubs)
-> - [Microsoft.ObjectStore](#microsoftobjectstore)
-> - [Microsoft.OffAzure](#microsoftoffazure)
-> - [Microsoft.OpenEnergyPlatform](#microsoftopenenergyplatform)
-> - [Microsoft.OperationalInsights](#microsoftoperationalinsights)
-> - [Microsoft.OperationsManagement](#microsoftoperationsmanagement)
-> - [Microsoft.Peering](#microsoftpeering)
-> - [Microsoft.PlayFab](#microsoftplayfab)
-> - [Microsoft.PolicyInsights](#microsoftpolicyinsights)
-> - [Microsoft.Portal](#microsoftportal)
-> - [Microsoft.PowerBI](#microsoftpowerbi)
-> - [Microsoft.PowerBIDedicated](#microsoftpowerbidedicated)
-> - [Microsoft.PowerPlatform](#microsoftpowerplatform)
-> - [Microsoft.ProjectBabylon](#microsoftprojectbabylon)
-> - [Microsoft.ProviderHub](#microsoftproviderhub)
-> - [Microsoft.Purview](#microsoftpurview)
-> - [Microsoft.Quantum](#microsoftquantum)
-> - [Microsoft.Quota](#microsoftquota)
-> - [Microsoft.RecommendationsService](#microsoftrecommendationsservice)
-> - [Microsoft.RecoveryServices](#microsoftrecoveryservices)
-> - [Microsoft.RedHatOpenShift](#microsoftredhatopenshift)
-> - [Microsoft.Relay](#microsoftrelay)
-> - [Microsoft.ResourceConnector](#microsoftresourceconnector)
-> - [Microsoft.ResourceGraph](#microsoftresourcegraph)
-> - [Microsoft.ResourceHealth](#microsoftresourcehealth)
-> - [Microsoft.Resources](#microsoftresources)
-> - [Microsoft.SaaS](#microsoftsaas)
-> - [Microsoft.Scheduler](#microsoftscheduler)
-> - [Microsoft.Scom](#microsoftscom)
-> - [Microsoft.ScVmm](#microsoftscvmm)
-> - [Microsoft.Search](#microsoftsearch)
-> - [Microsoft.Security](#microsoftsecurity)
-> - [Microsoft.SecurityGraph](#microsoftsecuritygraph)
-> - [Microsoft.SecurityInsights](#microsoftsecurityinsights)
-> - [Microsoft.SerialConsole](#microsoftserialconsole)
-> - [Microsoft.ServiceBus](#microsoftservicebus)
-> - [Microsoft.ServiceFabric](#microsoftservicefabric)
-> - [Microsoft.ServiceFabricMesh](#microsoftservicefabricmesh)
-> - [Microsoft.ServiceLinker](#microsoftservicelinker)
-> - [Microsoft.Services](#microsoftservices)
-> - [Microsoft.SignalRService](#microsoftsignalrservice)
-> - [Microsoft.Singularity](#microsoftsingularity)
-> - [Microsoft.SoftwarePlan](#microsoftsoftwareplan)
-> - [Microsoft.Solutions](#microsoftsolutions)
-> - [Microsoft.SQL](#microsoftsql)
-> - [Microsoft.SqlVirtualMachine](#microsoftsqlvirtualmachine)
-> - [Microsoft.Storage](#microsoftstorage)
-> - [Microsoft.StorageCache](#microsoftstoragecache)
-> - [Microsoft.StorageReplication](#microsoftstoragereplication)
-> - [Microsoft.StorageSync](#microsoftstoragesync)
-> - [Microsoft.StorSimple](#microsoftstorsimple)
-> - [Microsoft.StreamAnalytics](#microsoftstreamanalytics)
-> - [Microsoft.Subscription](#microsoftsubscription)
-> - [Microsoft.Synapse](#microsoftsynapse)
-> - [Microsoft.TestBase](#microsofttestbase)
-> - [Microsoft.TimeSeriesInsights](#microsofttimeseriesinsights)
-> - [Microsoft.VideoIndexer](#microsoftvideoindexer)
-> - [Microsoft.VirtualMachineImages](#microsoftvirtualmachineimages)
-> - [Microsoft.VMware](#microsoftvmware)
-> - [Microsoft.VMwareCloudSimple](#microsoftvmwarecloudsimple)
-> - [Microsoft.VSOnline](#microsoftvsonline)
-> - [Microsoft.WindowsDefenderATP](#microsoftwindowsdefenderatp)
-> - [Microsoft.Web](#microsoftweb)
-> - [Microsoft.WindowsESU](#microsoftwindowsesu)
-> - [Microsoft.WindowsIoT](#microsoftwindowsiot)
-> - [Microsoft.WorkloadBuilder](#microsoftworkloadbuilder)
-> - [Microsoft.WorkloadMonitor](#microsoftworkloadmonitor)
-> - [Microsoft.Workloads](#microsoftworkloads)
- ## Microsoft.AAD > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | DomainServices | Yes | Yes | > | DomainServices / oucontainer | No | No |
-## Microsoft.Addons
+## microsoft.aadiam
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | supportProviders | No | No |
+> | azureADMetrics | Yes | Yes |
+> | diagnosticSettings | No | No |
+> | diagnosticSettingsCategories | No | No |
+> | privateLinkForAzureAD | Yes | Yes |
+> | tenants | Yes | Yes |
## Microsoft.ADHybridHealthService
Jump to a resource provider namespace:
> | - | -- | -- | > | actionRules | Yes | Yes | > | alerts | No | No |
-> | alertsList | No | No |
> | alertsMetaData | No | No |
-> | alertsSummary | No | No |
-> | alertsSummaryList | No | No |
> | migrateFromSmartDetection | No | No | > | prometheusRuleGroups | Yes | Yes |
-> | resourceHealthAlertRules | Yes | Yes |
> | smartDetectorAlertRules | Yes | Yes | > | smartGroups | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | clusters | No | No |
+> | clusters | Yes | Yes |
## Microsoft.ApiManagement
Jump to a resource provider namespace:
> [!NOTE] > Azure API Management only supports creating a maximum of 15 tag name/value pairs for each service.
+## Microsoft.App
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | containerApps | Yes | Yes |
+> | managedEnvironments | Yes | Yes |
+> | managedEnvironments / certificates | Yes | Yes |
+ ## Microsoft.AppAssessment > [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | migrateProjects | No | No |
+> | migrateProjects | Yes | Yes |
> | migrateProjects / assessments | No | No | > | migrateProjects / assessments / assessedApplications | No | No | > | migrateProjects / assessments / assessedApplications / machines | No | No |
Jump to a resource provider namespace:
> | configurationStores | Yes | No | > | configurationStores / eventGridFilters | No | No | > | configurationStores / keyValues | No | No |
+> | configurationStores / replicas | No | No |
> | deletedConfigurationStores | No | No | ## Microsoft.AppPlatform
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | accessReviewScheduleDefinitions | No | No |
-> | accessReviewScheduleSettings | No | No |
-> | batchResourceCheckAccess | No | No |
+> | accessReviewHistoryDefinitions | No | No |
> | classicAdministrators | No | No | > | dataAliases | No | No | > | dataPolicyManifests | No | No |
Jump to a resource provider namespace:
> | diagnosticSettingsCategories | No | No | > | elevateAccess | No | No | > | eligibleChildResources | No | No |
-> | findOrphanRoleAssignments | No | No |
> | locks | No | No |
-> | permissions | No | No |
> | policyAssignments | No | No | > | policyDefinitions | No | No | > | policyExemptions | No | No | > | policySetDefinitions | No | No | > | privateLinkAssociations | No | No |
-> | providerOperations | No | No |
> | resourceManagementPrivateLinks | Yes | Yes | > | roleAssignmentApprovals | No | No | > | roleAssignments | No | No | > | roleAssignmentScheduleInstances | No | No | > | roleAssignmentScheduleRequests | No | No | > | roleAssignmentSchedules | No | No |
-> | roleAssignmentsUsageMetrics | No | No |
> | roleDefinitions | No | No | > | roleEligibilityScheduleInstances | No | No | > | roleEligibilityScheduleRequests | No | No |
Jump to a resource provider namespace:
> | configurationProfilePreferences | Yes | Yes | > | configurationProfiles | Yes | Yes | > | configurationProfiles / versions | Yes | Yes |
+> | patchJobConfigurations | Yes | Yes |
+> | patchJobConfigurations / patchJobs | No | No |
+> | patchTiers | Yes | Yes |
+> | servicePrincipals | No | No |
## Microsoft.Automation
Jump to a resource provider namespace:
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | automationAccounts | Yes | Yes |
+> | automationAccounts / agentRegistrationInformation | No | No |
> | automationAccounts / configurations | Yes | Yes | > | automationAccounts / hybridRunbookWorkerGroups | No | No | > | automationAccounts / hybridRunbookWorkerGroups / hybridRunbookWorkers | No | No |
Jump to a resource provider namespace:
> | automationAccounts / privateEndpointConnections | No | No | > | automationAccounts / privateLinkResources | No | No | > | automationAccounts / runbooks | Yes | Yes |
+> | automationAccounts / softwareUpdateConfigurationMachineRuns | No | No |
+> | automationAccounts / softwareUpdateConfigurationRuns | No | No |
> | automationAccounts / softwareUpdateConfigurations | No | No | > | automationAccounts / webhooks | No | No |
+> | deletedAutomationAccounts | No | No |
> [!NOTE] > Azure Automation only supports creating a maximum of 15 tag name/value pairs for each Automation resource.
+## Microsoft.AutonomousDevelopmentPlatform
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | accounts | Yes | Yes |
+> | accounts / datapools | No | No |
+
+## Microsoft.AutonomousSystems
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | workspaces | Yes | Yes |
+> | workspaces / validateCreateRequest | No | No |
+ ## Microsoft.AVS > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | privateClouds / workloadNetworks / virtualMachines | No | No | > | privateClouds / workloadNetworks / vmGroups | No | No |
-## Microsoft.Azure.Geneva
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Supports tags | Tag in cost report |
-> | - | -- | -- |
-> | environments | No | No |
-> | environments / accounts | No | No |
-> | environments / accounts / namespaces | No | No |
-> | environments / accounts / namespaces / configurations | No | No |
- ## Microsoft.AzureActiveDirectory > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | DataControllers | No | No |
-> | PostgresInstances | No | No |
-> | SqlManagedInstances | No | No |
-> | SqlServerInstances | No | No |
+> | DataControllers | Yes | Yes |
+> | DataControllers / ActiveDirectoryConnectors | No | No |
+> | PostgresInstances | Yes | Yes |
+> | sqlManagedInstances | Yes | Yes |
+> | SqlServerInstances | Yes | Yes |
## Microsoft.AzureCIS > [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | autopilotEnvironments | No | No |
-> | dstsServiceAccounts | No | No |
-> | dstsServiceClientIdentities | No | No |
+> | autopilotEnvironments | Yes | Yes |
+> | dstsServiceAccounts | Yes | Yes |
+> | dstsServiceClientIdentities | Yes | Yes |
## Microsoft.AzureData
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | accounts | No | No |
+> | accounts | Yes | Yes |
> | accounts / devices | No | No | > | accounts / devices / sensors | No | No | > | accounts / solutioninstances | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | catalogs | No | No |
+> | catalogs | Yes | Yes |
> | catalogs / certificates | No | No | > | catalogs / deployments | No | No | > | catalogs / devices | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | clusters | No | No |
+> | clusters | Yes | Yes |
> | clusters / arcSettings | No | No | > | clusters / arcSettings / extensions | No | No |
-> | galleryimages | No | No |
-> | networkinterfaces | No | No |
-> | virtualharddisks | No | No |
-> | virtualmachines | No | No |
-> | virtualmachines / extensions | No | No |
+> | galleryimages | Yes | Yes |
+> | networkinterfaces | Yes | Yes |
+> | virtualharddisks | Yes | Yes |
+> | virtualmachines | Yes | Yes |
+> | virtualmachines / extensions | Yes | Yes |
> | virtualmachines / hybrididentitymetadata | No | No |
-> | virtualnetworks | No | No |
+> | virtualnetworks | Yes | Yes |
## Microsoft.BackupSolutions
Jump to a resource provider namespace:
> | billingAccounts / billingProfiles / invoiceSections / transactions | No | No | > | billingAccounts / billingProfiles / invoiceSections / transfers | No | No | > | billingAccounts / billingProfiles / invoiceSections / validateDeleteInvoiceSectionEligibility | No | No |
-> | billingAccounts / BillingProfiles / patchOperations | No | No |
> | billingAccounts / billingProfiles / paymentMethodLinks | No | No | > | billingAccounts / billingProfiles / paymentMethods | No | No | > | billingAccounts / billingProfiles / policies | No | No | > | billingAccounts / billingProfiles / pricesheet | No | No |
-> | billingAccounts / billingProfiles / pricesheetDownloadOperations | No | No |
> | billingAccounts / billingProfiles / products | No | No | > | billingAccounts / billingProfiles / reservations | No | No | > | billingAccounts / billingProfiles / transactions | No | No |
Jump to a resource provider namespace:
> | billingAccounts / billingSubscriptions / elevateRole | No | No | > | billingAccounts / billingSubscriptions / invoices | No | No | > | billingAccounts / createBillingRoleAssignment | No | No |
-> | billingAccounts / createInvoiceSectionOperations | No | No |
> | billingAccounts / customers | No | No | > | billingAccounts / customers / billingPermissions | No | No | > | billingAccounts / customers / billingSubscriptions | No | No |
Jump to a resource provider namespace:
> | billingAccounts / invoices / transactions | No | No | > | billingAccounts / invoices / transactionSummary | No | No | > | billingAccounts / invoiceSections | No | No |
-> | billingAccounts / invoiceSections / billingSubscriptionMoveOperations | No | No |
> | billingAccounts / invoiceSections / billingSubscriptions | No | No | > | billingAccounts / invoiceSections / billingSubscriptions / transfer | No | No | > | billingAccounts / invoiceSections / elevate | No | No | > | billingAccounts / invoiceSections / initiateTransfer | No | No |
-> | billingAccounts / invoiceSections / patchOperations | No | No |
-> | billingAccounts / invoiceSections / productMoveOperations | No | No |
> | billingAccounts / invoiceSections / products | No | No | > | billingAccounts / invoiceSections / products / transfer | No | No | > | billingAccounts / invoiceSections / products / updateAutoRenew | No | No | > | billingAccounts / invoiceSections / transactions | No | No | > | billingAccounts / invoiceSections / transfers | No | No | > | billingAccounts / lineOfCredit | No | No |
-> | billingAccounts / patchOperations | No | No |
> | billingAccounts / payableOverage | No | No | > | billingAccounts / paymentMethods | No | No | > | billingAccounts / payNow | No | No |
Jump to a resource provider namespace:
> | transfers | No | No | > | transfers / acceptTransfer | No | No | > | transfers / declineTransfer | No | No |
-> | transfers / operationStatus | No | No |
> | transfers / validateTransfer | No | No | > | validateAddress | No | No |
Jump to a resource provider namespace:
> | savingsPlans | No | No | > | validate | No | No |
-## Microsoft.Blockchain
+## Microsoft.Bing
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | blockchainMembers | Yes | Yes |
+> | accounts | Yes | Yes |
+> | accounts / usages | No | No |
+> | registeredSubscriptions | No | No |
## Microsoft.BlockchainTokens
Jump to a resource provider namespace:
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | blueprintAssignments | No | No |
-> | blueprintAssignments / assignmentOperations | No | No |
-> | blueprintAssignments / operations | No | No |
> | blueprints | No | No | > | blueprints / artifacts | No | No | > | blueprints / versions | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | sites | No | No |
+> | sites | Yes | Yes |
## Microsoft.Cdn
Jump to a resource provider namespace:
> | profiles / secrets | No | No | > | profiles / securitypolicies | No | No | > | validateProbe | No | No |
+> | validateSecret | No | No |
## Microsoft.CertificateRegistration
Jump to a resource provider namespace:
> | changeSnapshots | No | No | > | computeChanges | No | No | > | profile | No | No |
-> | resourceChanges | No | No |
## Microsoft.Chaos
Jump to a resource provider namespace:
> | storageAccounts / vmImages | No | No | > | vmImages | No | No |
-## Microsoft.ClusterStor
+## Microsoft.CloudTest
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | nodes | Yes | Yes |
+> | accounts | Yes | Yes |
+> | hostedpools | Yes | Yes |
+> | images | Yes | Yes |
+> | pools | Yes | Yes |
## Microsoft.CodeSigning > [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | codeSigningAccounts | No | No |
+> | codeSigningAccounts | Yes | Yes |
> | codeSigningAccounts / certificateProfiles | No | No | ## Microsoft.Codespaces
Jump to a resource provider namespace:
> | RateCard | No | No | > | UsageAggregates | No | No |
+## Microsoft.Communication
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | CommunicationServices | Yes | Yes |
+> | CommunicationServices / eventGridFilters | No | No |
+> | EmailServices | Yes | Yes |
+> | EmailServices / Domains | Yes | Yes |
+> | registeredSubscriptions | No | No |
## Microsoft.Compute
Jump to a resource provider namespace:
> | restorePointCollections / restorePoints | No | No | > | restorePointCollections / restorePoints / diskRestorePoints | No | No | > | sharedVMExtensions | Yes | Yes |
-> | sharedVMExtensions / versions | No | No |
+> | sharedVMExtensions / versions | Yes | Yes |
> | sharedVMImages | Yes | Yes |
-> | sharedVMImages / versions | No | No |
+> | sharedVMImages / versions | Yes | Yes |
> | snapshots | Yes | Yes | > | sshPublicKeys | Yes | Yes | > | virtualMachines | Yes | Yes |
Jump to a resource provider namespace:
> | virtualMachineScaleSets | Yes | Yes | > | virtualMachineScaleSets / extensions | No | No | > | virtualMachineScaleSets / networkInterfaces | No | No |
-> | virtualMachineScaleSets / publicIPAddresses | Yes | No |
+> | virtualMachineScaleSets / publicIPAddresses | No | No |
> | virtualMachineScaleSets / virtualMachines | No | No | > | virtualMachineScaleSets / virtualMachines / extensions | No | No | > | virtualMachineScaleSets / virtualMachines / networkInterfaces | No | No |
Jump to a resource provider namespace:
> [!NOTE] > You can't add a tag to a virtual machine that has been marked as generalized. You mark a virtual machine as generalized with [Set-AzVm -Generalized](/powershell/module/Az.Compute/Set-AzVM) or [az vm generalize](/cli/azure/vm#az-vm-generalize).
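As a sketch of that sequence with the Azure CLI (VM and resource group names are placeholders):

```bash
# Deallocate the VM, then mark it as generalized; after this,
# tags can no longer be added to the VM.
az vm deallocate --resource-group <group-name> --name <vm-name>
az vm generalize --resource-group <group-name> --name <vm-name>
```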
-## Microsoft.Communication
+
+## Microsoft.ConfidentialLedger
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | CommunicationServices | No | No |
-> | CommunicationServices / eventGridFilters | No | No |
-> | EmailServices | No | No |
-> | EmailServices / Domains | No | No |
-> | registeredSubscriptions | No | No |
+> | Ledgers | Yes | Yes |
-## Microsoft.ConfidentialLedger
+## Microsoft.Confluent
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | Ledgers | No | No |
+> | agreements | No | No |
+> | organizations | Yes | Yes |
+> | validations | No | No |
## Microsoft.ConnectedCache > [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | CacheNodes | No | No |
-> | enterpriseCustomers | No | No |
+> | CacheNodes | Yes | Yes |
+> | enterpriseCustomers | Yes | Yes |
+
+## microsoft.connectedopenstack
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | flavors | Yes | Yes |
+> | heatStacks | Yes | Yes |
+> | heatStackTemplates | Yes | Yes |
+> | images | Yes | Yes |
+> | keypairs | Yes | Yes |
+> | networkPorts | Yes | Yes |
+> | networks | Yes | Yes |
+> | openStackIdentities | Yes | Yes |
+> | securityGroupRules | Yes | Yes |
+> | securityGroups | Yes | Yes |
+> | subnets | Yes | Yes |
+> | virtualMachines | Yes | Yes |
+> | volumes | Yes | Yes |
+> | volumeSnapshots | Yes | Yes |
+> | volumeTypes | Yes | Yes |
## Microsoft.ConnectedVehicle > [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | platformAccounts | No | No |
+> | platformAccounts | Yes | Yes |
> | registeredSubscriptions | No | No | ## Microsoft.ConnectedVMwarevSphere
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | Clusters | No | No |
-> | Datastores | No | No |
-> | Hosts | No | No |
-> | ResourcePools | No | No |
-> | VCenters | No | No |
+> | Clusters | Yes | Yes |
+> | Datastores | Yes | Yes |
+> | Hosts | Yes | Yes |
+> | ResourcePools | Yes | Yes |
+> | VCenters | Yes | Yes |
> | VCenters / InventoryItems | No | No |
-> | VirtualMachines | No | No |
+> | VirtualMachines | Yes | Yes |
> | VirtualMachines / Extensions | Yes | Yes | > | VirtualMachines / GuestAgents | No | No | > | VirtualMachines / HybridIdentityMetadata | No | No |
-> | VirtualMachineTemplates | No | No |
-> | VirtualNetworks | No | No |
+> | VirtualMachineTemplates | Yes | Yes |
+> | VirtualNetworks | Yes | Yes |
## Microsoft.Consumption
Jump to a resource provider namespace:
> | ReservationRecommendations | No | No | > | ReservationSummaries | No | No | > | ReservationTransactions | No | No |
-> | Tags | No | No |
-> | tenants | No | No |
-> | Terms | No | No |
-> | UsageDetails | No | No |
## Microsoft.ContainerInstance
Jump to a resource provider namespace:
> | containerServices | Yes | Yes | > | managedClusters | Yes | Yes | > | ManagedClusters / eventGridFilters | No | No |
+> | managedclustersnapshots | Yes | Yes |
> | openShiftManagedClusters | Yes | Yes | > | snapshots | Yes | Yes |
Jump to a resource provider namespace:
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | Alerts | No | No |
+> | BenefitRecommendations | No | No |
> | BenefitUtilizationSummaries | No | No | > | BillingAccounts | No | No | > | Budgets | No | No |
-> | calculatePrice | No | No |
> | CloudConnectors | No | No | > | Connectors | Yes | Yes |
-> | costAllocationRules | No | No |
> | Departments | No | No | > | Dimensions | No | No | > | EnrollmentAccounts | No | No |
Jump to a resource provider namespace:
> | ExternalSubscriptions / Dimensions | No | No | > | ExternalSubscriptions / Forecast | No | No | > | ExternalSubscriptions / Query | No | No |
+> | fetchMarketplacePrices | No | No |
> | fetchPrices | No | No | > | Forecast | No | No | > | GenerateDetailedCostReport | No | No |
-> | GenerateReservationDetailsReport | No | No |
> | Insights | No | No |
+> | Pricesheets | No | No |
+> | Publish | No | No |
> | Query | No | No | > | register | No | No | > | Reportconfigs | No | No | > | Reports | No | No | > | ScheduledActions | No | No | > | Settings | No | No |
-> | showbackRules | No | No |
> | Views | No | No | ## Microsoft.CustomerLockbox
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | grafana | No | No |
+> | grafana | Yes | Yes |
## Microsoft.DataBox
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | accessConnectors | Yes | Yes |
> | workspaces | Yes | Yes | > | workspaces / dbWorkspaces | No | No | > | workspaces / virtualNetworkPeerings | No | No |
Jump to a resource provider namespace:
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | catalogs | Yes | Yes |
+> | datacatalogs | Yes | Yes |
+
+## Microsoft.DataCollaboration
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | workspaces | Yes | Yes |
+> | workspaces / constrainedResources | No | No |
+> | workspaces / contracts | No | No |
+> | workspaces / contracts / entitlements | No | No |
+> | workspaces / dataAssets | No | No |
+> | workspaces / dataAssets / dataSets | No | No |
+> | workspaces / pipelineRuns | No | No |
+> | workspaces / pipelineRuns / pipelineStepRuns | No | No |
+> | workspaces / pipelines | No | No |
+> | workspaces / pipelines / pipelineSteps | No | No |
+> | workspaces / pipelines / runs | No | No |
+> | workspaces / proposals | No | No |
+> | workspaces / proposals / dataAssetReferences | No | No |
+> | workspaces / proposals / entitlements | No | No |
+> | workspaces / proposals / entitlements / constraints | No | No |
+> | workspaces / proposals / entitlements / policies | No | No |
+> | workspaces / proposals / invitations | No | No |
+> | workspaces / proposals / scriptReferences | No | No |
+> | workspaces / resourceReferences | No | No |
+> | workspaces / scripts | No | No |
+> | workspaces / scripts / scriptrevisions | No | No |
+
+## Microsoft.Datadog
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | agreements | No | No |
+> | monitors | Yes | Yes |
+> | monitors / getDefaultKey | No | No |
+> | monitors / refreshSetPasswordLink | No | No |
+> | monitors / setDefaultKey | No | No |
+> | monitors / singleSignOnConfigurations | No | No |
+> | monitors / tagRules | No | No |
+> | registeredSubscriptions | No | No |
## Microsoft.DataFactory
Jump to a resource provider namespace:
> | DatabaseMigrations | No | No | > | services | Yes | Yes | > | services / projects | Yes | Yes |
+> | slots | Yes | Yes |
> | SqlMigrationServices | Yes | Yes | ## Microsoft.DataProtection
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | backupInstances | No | No |
> | BackupVaults | Yes | Yes | > | ResourceGuards | Yes | Yes |
+## Microsoft.DataReplication
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | replicationFabrics | Yes | Yes |
+> | replicationVaults | Yes | Yes |
+ ## Microsoft.DataShare > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | servers / queryTexts | No | No | > | servers / recoverableServers | No | No | > | servers / resetQueryPerformanceInsightData | No | No |
-> | servers / start | No | No |
-> | servers / stop | No | No |
> | servers / topQueryStatistics | No | No | > | servers / virtualNetworkRules | No | No | > | servers / waitStatistics | No | No |
Jump to a resource provider namespace:
> | servers / queryTexts | No | No | > | servers / recoverableServers | No | No | > | servers / resetQueryPerformanceInsightData | No | No |
-> | servers / start | No | No |
-> | servers / stop | No | No |
> | servers / topQueryStatistics | No | No |
-> | servers / upgrade | No | No |
> | servers / virtualNetworkRules | No | No | > | servers / waitStatistics | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | instances | No | No |
-> | instances / experiments | No | No |
-> | instances / sandboxes | No | No |
-> | instances / sandboxes / experiments | No | No |
+> | instances | Yes | Yes |
+> | instances / experiments | Yes | Yes |
+> | instances / sandboxes | Yes | Yes |
+> | instances / sandboxes / experiments | Yes | Yes |
## Microsoft.Devices
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | accounts | No | No |
-> | accounts / instances | No | No |
+> | accounts | Yes | Yes |
+> | accounts / instances | Yes | Yes |
> | accounts / privateEndpointConnectionProxies | No | No | > | accounts / privateEndpointConnections | No | No | > | accounts / privateLinkResources | No | No |
Jump to a resource provider namespace:
> | - | -- | -- | > | pipelines | Yes | Yes |
-## Microsoft.DevSpaces
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Supports tags | Tag in cost report |
-> | - | -- | -- |
-> | controllers | Yes | Yes |
- ## Microsoft.DevTestLab > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | AzureKB | No | No |
-> | InsightDiagnostics | No | No |
+> | apollo | No | No |
+> | azureKB | No | No |
+> | insights | No | No |
> | solutions | No | No | ## Microsoft.DigitalTwins
Jump to a resource provider namespace:
> | topLevelDomains | No | No | > | validateDomainRegistrationInformation | No | No |
-## Microsoft.DynamicsLcs
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Supports tags | Tag in cost report |
-> | - | -- | -- |
-> | lcsprojects | No | No |
-> | lcsprojects / clouddeployments | No | No |
-> | lcsprojects / connectors | No | No |
- ## Microsoft.EdgeOrder > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | orders | No | No | > | productFamiliesMetadata | No | No |
-## Microsoft.EnterpriseKnowledgeGraph
+## Microsoft.Elastic
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | services | Yes | Yes |
+> | monitors | Yes | Yes |
+> | monitors / tagRules | No | No |
## Microsoft.EventGrid
Jump to a resource provider namespace:
> | domains / topics | No | No | > | eventSubscriptions | No | No | > | extensionTopics | No | No |
+> | partnerConfigurations | Yes | Yes |
> | partnerDestinations | Yes | Yes | > | partnerNamespaces | Yes | Yes | > | partnerNamespaces / channels | No | No |
Jump to a resource provider namespace:
> | systemTopics / eventSubscriptions | No | No | > | topics | Yes | Yes | > | topicTypes | No | No |
+> | verifiedPartners | No | No |
## Microsoft.EventHub
Jump to a resource provider namespace:
> | - | -- | -- | > | clusters | Yes | Yes | > | namespaces | Yes | Yes |
+> | namespaces / applicationGroups | No | No |
> | namespaces / authorizationrules | No | No | > | namespaces / disasterrecoveryconfigs | No | No | > | namespaces / eventhubs | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | devcenters | No | No |
+> | devcenters | Yes | Yes |
+> | devcenters / attachednetworks | No | No |
> | devcenters / catalogs | No | No | > | devcenters / catalogs / items | No | No |
+> | devcenters / devboxdefinitions | Yes | Yes |
> | devcenters / environmentTypes | No | No |
+> | devcenters / galleries | No | No |
+> | devcenters / galleries / images | No | No |
+> | devcenters / galleries / images / versions | No | No |
+> | devcenters / images | No | No |
> | devcenters / mappings | No | No |
-> | machinedefinitions | No | No |
-> | networksettings | No | No |
-> | networksettings / healthchecks | No | No |
-> | projects | No | No |
+> | machinedefinitions | Yes | Yes |
+> | networksettings | Yes | Yes |
+> | projects | Yes | Yes |
+> | projects / attachednetworks | No | No |
> | projects / catalogItems | No | No |
-> | projects / environments | No | No |
+> | projects / devboxdefinitions | No | No |
+> | projects / environments | Yes | Yes |
> | projects / environments / deployments | No | No | > | projects / environmentTypes | No | No |
-> | projects / pools | No | No |
+> | projects / pools | Yes | Yes |
## Microsoft.FluidRelay > [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | fluidRelayServers | No | No |
+> | fluidRelayServers | Yes | Yes |
> | fluidRelayServers / fluidRelayContainers | No | No |
-## Microsoft.Gallery
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Supports tags | Tag in cost report |
-> | - | -- | -- |
-> | enroll | No | No |
-> | galleryitems | No | No |
-> | generateartifactaccessuri | No | No |
-> | myareas | No | No |
-> | myareas / areas | No | No |
-> | myareas / areas / areas | No | No |
-> | myareas / areas / areas / galleryitems | No | No |
-> | myareas / areas / galleryitems | No | No |
-> | myareas / galleryitems | No | No |
-> | register | No | No |
-> | resources | No | No |
-> | retrieveresourcesbyid | No | No |
-
-## Microsoft.Genomics
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Supports tags | Tag in cost report |
-> | - | -- | -- |
-> | accounts | Yes | Yes |
-
-## Microsoft.Graph
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Supports tags | Tag in cost report |
-> | - | -- | -- |
-> | AzureAdApplication | No | No |
- ## Microsoft.GuestConfiguration > [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | autoManagedAccounts | Yes | Yes |
-> | autoManagedVmConfigurationProfiles | Yes | Yes |
-> | configurationProfileAssignments | No | No |
> | guestConfigurationAssignments | No | No |
-> | software | No | No |
-> | softwareUpdateProfile | No | No |
-> | softwareUpdates | No | No |
## Microsoft.HanaOnAzure
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | healthBots | No | No |
+> | healthBots | Yes | Yes |
## Microsoft.HealthcareApis
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | instances | No | No |
-> | instances / chambers | No | No |
-> | instances / chambers / accessProfiles | No | No |
-> | instances / chambers / workloads | No | No |
-> | instances / consortiums | No | No |
+> | instances | Yes | Yes |
+> | instances / chambers | Yes | Yes |
+> | instances / chambers / accessProfiles | Yes | Yes |
+> | instances / chambers / workloads | Yes | Yes |
+> | instances / consortiums | Yes | Yes |
## Microsoft.HybridCompute
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | provisionedClusters | No | No |
-> | provisionedClusters / agentPools | No | No |
+> | provisionedClusters | Yes | Yes |
+> | provisionedClusters / agentPools | Yes | Yes |
> | provisionedClusters / hybridIdentityMetadata | No | No | ## Microsoft.HybridData
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | devices | No | No |
-> | networkFunctions | No | No |
+> | devices | Yes | Yes |
+> | networkFunctions | Yes | Yes |
> | networkFunctionVendors | No | No | > | registeredSubscriptions | No | No | > | vendors | No | No |
-> | vendors / vendorSkus | No | No |
-> | vendors / vendorSkus / previewSubscriptions | No | No |
-## Microsoft.Hydra
+## Microsoft.ImportExport
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | components | Yes | Yes |
-> | networkScopes | Yes | Yes |
+> | jobs | Yes | Yes |
-## Microsoft.ImportExport
+## Microsoft.IndustryDataLifecycle
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | jobs | Yes | Yes |
+> | baseModels | Yes | Yes |
+> | baseModels / entities | No | No |
+> | baseModels / relationships | No | No |
+> | builtInModels | No | No |
+> | builtInModels / entities | No | No |
+> | builtInModels / relationships | No | No |
+> | collaborativeInvitations | No | No |
+> | custodianCollaboratives | Yes | Yes |
+> | custodianCollaboratives / collaborativeImage | No | No |
+> | custodianCollaboratives / dataModels | No | No |
+> | custodianCollaboratives / dataModels / mergePipelines | No | No |
+> | custodianCollaboratives / invitations | No | No |
+> | custodianCollaboratives / invitations / termsOfUseDocuments | No | No |
+> | custodianCollaboratives / receivedDataPackages | No | No |
+> | custodianCollaboratives / termsOfUseDocuments | No | No |
+> | dataConsumerCollaboratives | Yes | Yes |
+> | dataproviders | No | No |
+> | derivedModels | Yes | Yes |
+> | derivedModels / entities | No | No |
+> | derivedModels / relationships | No | No |
+> | generateMappingTemplate | No | No |
+> | memberCollaboratives | Yes | Yes |
+> | memberCollaboratives / sharedDataPackages | No | No |
+> | modelMappings | Yes | Yes |
+> | pipelineSets | Yes | Yes |
-## Microsoft.Insights
+## microsoft.insights
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | actionGroups | Yes | Yes |
+> | actiongroups | Yes | Yes |
> | activityLogAlerts | Yes | Yes | > | alertrules | Yes | Yes | > | autoscalesettings | Yes | Yes | > | components | Yes | Yes |
+> | components / aggregate | No | No |
> | components / analyticsItems | No | No |
+> | components / annotations | No | No |
+> | components / api | No | No |
+> | components / apiKeys | No | No |
+> | components / currentBillingFeatures | No | No |
+> | components / defaultWorkItemConfig | No | No |
+> | components / events | No | No |
+> | components / exportConfiguration | No | No |
+> | components / extendQueries | No | No |
> | components / favorites | No | No |
-> | components / linkedStorageAccounts | No | No |
+> | components / featureCapabilities | No | No |
+> | components / generateDiagnosticServiceReadOnlyToken | No | No |
+> | components / generateDiagnosticServiceReadWriteToken | No | No |
+> | components / linkedstorageaccounts | No | No |
+> | components / metadata | No | No |
+> | components / metricDefinitions | No | No |
+> | components / metrics | No | No |
+> | components / move | No | No |
> | components / myAnalyticsItems | No | No |
+> | components / myFavorites | No | No |
> | components / pricingPlans | No | No |
-> | components / ProactiveDetectionConfigs | No | No |
-> | dataCollectionEndpoints | No | No |
+> | components / proactiveDetectionConfigs | No | No |
+> | components / purge | No | No |
+> | components / query | No | No |
+> | components / quotaStatus | No | No |
+> | components / webtests | No | No |
+> | components / workItemConfigs | No | No |
+> | createnotifications | No | No |
+> | dataCollectionEndpoints | Yes | Yes |
+> | dataCollectionEndpoints / networkSecurityPerimeterAssociationProxies | No | No |
+> | dataCollectionEndpoints / networkSecurityPerimeterConfigurations | No | No |
+> | dataCollectionEndpoints / scopedPrivateLinkProxies | No | No |
> | dataCollectionRuleAssociations | No | No | > | dataCollectionRules | Yes | Yes | > | diagnosticSettings | No | No |
+> | diagnosticSettingsCategories | No | No |
+> | eventCategories | No | No |
+> | eventtypes | No | No |
+> | extendedDiagnosticSettings | No | No |
+> | generateDiagnosticServiceReadOnlyToken | No | No |
+> | generateDiagnosticServiceReadWriteToken | No | No |
> | guestDiagnosticSettings | Yes | Yes |
-> | guestDiagnosticSettingsAssociation | Yes | Yes |
-> | logprofiles | Yes | Yes |
-> | metricAlerts | Yes | Yes |
+> | guestDiagnosticSettingsAssociation | No | No |
+> | logDefinitions | No | No |
+> | logprofiles | No | No |
+> | logs | No | No |
+> | metricalerts | Yes | Yes |
+> | metricbaselines | No | No |
+> | metricbatch | No | No |
+> | metricDefinitions | No | No |
+> | metricNamespaces | No | No |
+> | metrics | No | No |
+> | migratealertrules | No | No |
+> | migrateToNewPricingModel | No | No |
+> | monitoredObjects | No | No |
> | myWorkbooks | No | No |
+> | notificationgroups | Yes | Yes |
+> | notificationstatus | No | No |
> | privateLinkScopes | Yes | Yes |
+> | privateLinkScopes / privateEndpointConnectionProxies | No | No |
> | privateLinkScopes / privateEndpointConnections | No | No | > | privateLinkScopes / scopedResources | No | No |
-> | queryPacks | Yes | Yes |
-> | queryPacks / queries | No | No |
-> | scheduledQueryRules | Yes | Yes |
+> | rollbackToLegacyPricingModel | No | No |
+> | scheduledqueryrules | Yes | Yes |
+> | topology | No | No |
+> | transactions | No | No |
> | webtests | Yes | Yes |
+> | webtests / getTestResultFile | No | No |
> | workbooks | Yes | Yes | > | workbooktemplates | Yes | Yes |
-## Microsoft.Intune
+## Microsoft.IntelligentITDigitalTwin
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | diagnosticSettings | No | No |
-> | diagnosticSettingsCategories | No | No |
+> | digitalTwins | Yes | Yes |
+> | digitalTwins / assets | Yes | Yes |
+> | digitalTwins / executionPlans | Yes | Yes |
+> | digitalTwins / testPlans | Yes | Yes |
+> | digitalTwins / tests | Yes | Yes |
## Microsoft.IoTCentral
Jump to a resource provider namespace:
> | sensors | No | No | > | sites | No | No |
-## Microsoft.IoTSpaces
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Supports tags | Tag in cost report |
-> | - | -- | -- |
-> | Graph | Yes | Yes |
- ## Microsoft.KeyVault > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | connectedClusters | No | No |
+> | connectedClusters | Yes | Yes |
> | registeredSubscriptions | No | No | ## Microsoft.KubernetesConfiguration
Jump to a resource provider namespace:
> | extensions | No | No | > | fluxConfigurations | No | No | > | namespaces | No | No |
+> | privateLinkScopes | Yes | Yes |
+> | privateLinkScopes / privateEndpointConnectionProxies | No | No |
+> | privateLinkScopes / privateEndpointConnections | No | No |
> | sourceControlConfigurations | No | No | ## Microsoft.Kusto
Jump to a resource provider namespace:
> | labs | Yes | Yes | > | users | No | No |
-## Microsoft.LocationServices
+## Microsoft.LoadTestService
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | accounts | Yes | Yes |
+> | loadtests | Yes | Yes |
## Microsoft.Logic
Jump to a resource provider namespace:
> | hostingEnvironments | Yes | Yes | > | integrationAccounts | Yes | Yes | > | integrationServiceEnvironments | Yes | Yes |
-> | integrationServiceEnvironments / managedApis | No | No |
+> | integrationServiceEnvironments / managedApis | Yes | Yes |
> | isolatedEnvironments | Yes | Yes | > | workflows | Yes | Yes |
+## Microsoft.Logz
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | monitors | Yes | Yes |
+> | monitors / accounts | Yes | Yes |
+> | monitors / accounts / tagRules | No | No |
+> | monitors / metricsSource | Yes | Yes |
+> | monitors / metricsSource / tagRules | No | No |
+> | monitors / singleSignOnConfigurations | No | No |
+> | monitors / tagRules | No | No |
+> | registeredSubscriptions | No | No |
+ ## Microsoft.MachineLearning > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | aisysteminventories | Yes | Yes |
+> | registries | Yes | Yes |
> | virtualclusters | Yes | Yes | > | workspaces | Yes | Yes | > | workspaces / batchEndpoints | Yes | Yes |
Jump to a resource provider namespace:
> | workspaces / models / versions | No | No | > | workspaces / onlineEndpoints | Yes | Yes | > | workspaces / onlineEndpoints / deployments | Yes | Yes |
+> | workspaces / registries | Yes | Yes |
> | workspaces / services | No | No | > [!NOTE]
Jump to a resource provider namespace:
> | - | -- | -- | > | Identities | No | No | > | userAssignedIdentities | Yes | Yes |
+> | userAssignedIdentities / federatedIdentityCredentials | No | No |
## Microsoft.ManagedServices
Jump to a resource provider namespace:
> | accounts | Yes | Yes | > | accounts / creators | Yes | Yes | > | accounts / eventGridFilters | No | No |
-> | accounts / privateAtlases | Yes | Yes |
## Microsoft.Marketplace
Jump to a resource provider namespace:
> | publishers / offers / amendments | No | No | > | register | No | No |
-## Microsoft.MarketplaceApps
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Supports tags | Tag in cost report |
-> | - | -- | -- |
-> | classicDevServices | Yes | Yes |
-> | updateCommunicationPreference | No | No |
- ## Microsoft.MarketplaceNotifications > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | mediaservices / eventGridFilters | No | No | > | mediaservices / graphInstances | No | No | > | mediaservices / graphTopologies | No | No |
-> | mediaservices / liveEventOperations | No | No |
> | mediaservices / liveEvents | Yes | Yes | > | mediaservices / liveEvents / liveOutputs | No | No |
-> | mediaservices / liveOutputOperations | No | No |
> | mediaservices / mediaGraphs | No | No |
-> | mediaservices / privateEndpointConnectionOperations | No | No |
> | mediaservices / privateEndpointConnectionProxies | No | No | > | mediaservices / privateEndpointConnections | No | No |
-> | mediaservices / streamingEndpointOperations | No | No |
> | mediaservices / streamingEndpoints | Yes | Yes | > | mediaservices / streamingLocators | No | No | > | mediaservices / streamingPolicies | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | holographicsBroadcastAccounts | Yes | Yes |
> | objectAnchorsAccounts | Yes | Yes | > | objectUnderstandingAccounts | Yes | Yes | > | remoteRenderingAccounts | Yes | Yes |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | mobileNetworks | No | No |
-> | mobileNetworks / dataNetworks | No | No |
-> | mobileNetworks / services | No | No |
-> | mobileNetworks / simPolicies | No | No |
-> | mobileNetworks / sites | No | No |
-> | mobileNetworks / slices | No | No |
-> | networks | No | No |
-> | networks / sites | No | No |
-> | packetCoreControlPlanes | No | No |
-> | packetCoreControlPlanes / packetCoreDataPlanes | No | No |
-> | packetCoreControlPlanes / packetCoreDataPlanes / attachedDataNetworks | No | No |
-> | packetCores | No | No |
-> | sims | No | No |
-> | sims / simProfiles | No | No |
+> | mobileNetworks | Yes | Yes |
+> | mobileNetworks / dataNetworks | Yes | Yes |
+> | mobileNetworks / services | Yes | Yes |
+> | mobileNetworks / simPolicies | Yes | Yes |
+> | mobileNetworks / sites | Yes | Yes |
+> | mobileNetworks / slices | Yes | Yes |
+> | networks | Yes | Yes |
+> | networks / sites | Yes | Yes |
+> | packetCoreControlPlanes | Yes | Yes |
+> | packetCoreControlPlanes / packetCoreDataPlanes | Yes | Yes |
+> | packetCoreControlPlanes / packetCoreDataPlanes / attachedDataNetworks | Yes | Yes |
+> | packetCores | Yes | Yes |
+> | sims | Yes | Yes |
+> | sims / simProfiles | Yes | Yes |
## Microsoft.Monitor
Jump to a resource provider namespace:
> | - | -- | -- | > | netAppAccounts | Yes | No | > | netAppAccounts / accountBackups | No | No |
+> | netAppAccounts / backupPolicies | Yes | Yes |
> | netAppAccounts / capacityPools | Yes | Yes | > | netAppAccounts / capacityPools / volumes | Yes | No |
+> | netAppAccounts / capacityPools / volumes / backups | No | No |
+> | netAppAccounts / capacityPools / volumes / mountTargets | No | No |
> | netAppAccounts / capacityPools / volumes / snapshots | No | No | > | netAppAccounts / capacityPools / volumes / subvolumes | No | No |
+> | netAppAccounts / capacityPools / volumes / volumeQuotaRules | No | No |
> | netAppAccounts / snapshotPolicies | Yes | Yes |
+> | netAppAccounts / vaults | No | No |
> | netAppAccounts / volumeGroups | No | No |
-## Microsoft.NetworkFunction
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Supports tags | Tag in cost report |
-> | - | -- | -- |
-> | azureTrafficCollectors | Yes | Yes |
- ## Microsoft.Network > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | applicationSecurityGroups | Yes | Yes | > | azureFirewallFqdnTags | No | No | > | azureFirewalls | Yes | No |
-> | bastionHosts | Yes | No |
+> | azureWebCategories | No | No |
+> | bastionHosts | Yes | Yes |
> | bgpServiceCommunities | No | No | > | connections | Yes | Yes | > | customIpPrefixes | Yes | Yes | > | ddosCustomPolicies | Yes | Yes | > | ddosProtectionPlans | Yes | Yes |
-> | dnsOperationStatuses | No | No |
-> | dnszones | Yes, see [note below](#network-limitations) | Yes |
+> | dnsForwardingRulesets | Yes | Yes |
+> | dnsForwardingRulesets / forwardingRules | No | No |
+> | dnsForwardingRulesets / virtualNetworkLinks | No | No |
+> | dnsResolvers | Yes | Yes |
+> | dnsResolvers / inboundEndpoints | Yes | Yes |
+> | dnsResolvers / outboundEndpoints | Yes | Yes |
+> | dnszones | Yes | Yes |
> | dnszones / A | No | No | > | dnszones / AAAA | No | No | > | dnszones / all | No | No |
Jump to a resource provider namespace:
> | expressRouteCrossConnections | Yes | Yes | > | expressRouteGateways | Yes | Yes | > | expressRoutePorts | Yes | Yes |
+> | expressRouteProviderPorts | No | No |
> | expressRouteServiceProviders | No | No | > | firewallPolicies | Yes, see [note below](#network-limitations) | Yes | > | frontdoors | Yes, but limited (see [note below](#network-limitations)) | Yes |
+> | frontdoors / frontendEndpoints | Yes, but limited (see [note below](#network-limitations)) | No |
+> | frontdoors / frontendEndpoints / customHttpsConfiguration | Yes, but limited (see [note below](#network-limitations)) | No |
> | frontdoorWebApplicationFirewallManagedRuleSets | Yes, but limited (see [note below](#network-limitations)) | No | > | frontdoorWebApplicationFirewallPolicies | Yes, but limited (see [note below](#network-limitations)) | Yes | > | getDnsResourceReference | No | No | > | internalNotify | No | No |
-> | ipAllocations | Yes | Yes |
-> | ipGroups | Yes, see [note below](#network-limitations) | Yes |
+> | ipGroups | Yes | Yes |
> | loadBalancers | Yes | Yes | > | localNetworkGateways | Yes | Yes | > | natGateways | Yes | Yes |
+> | networkExperimentProfiles | Yes | Yes |
> | networkIntentPolicies | Yes | Yes | > | networkInterfaces | Yes | Yes |
+> | networkManagerConnections | No | No |
> | networkManagers | Yes | Yes | > | networkProfiles | Yes | Yes | > | networkSecurityGroups | Yes | Yes |
+> | networkSecurityPerimeters | Yes | Yes |
> | networkVirtualAppliances | Yes | Yes | > | networkWatchers | Yes | Yes | > | networkWatchers / connectionMonitors | Yes | No |
-> | networkWatchers / flowLogs | Yes | No |
+> | networkWatchers / flowLogs | Yes | Yes |
> | networkWatchers / lenses | Yes | No | > | networkWatchers / pingMeshes | Yes | No | > | p2sVpnGateways | Yes | Yes |
-> | privateDnsOperationStatuses | No | No |
> | privateDnsZones | Yes | Yes | > | privateDnsZones / A | No | No | > | privateDnsZones / AAAA | No | No |
Jump to a resource provider namespace:
> | privateDnsZones / SRV | No | No | > | privateDnsZones / TXT | No | No | > | privateDnsZones / virtualNetworkLinks | Yes | Yes |
+> | privateDnsZonesInternal | No | No |
+> | privateEndpointRedirectMaps | Yes | Yes |
> | privateEndpoints | Yes | Yes |
+> | privateEndpoints / privateLinkServiceProxies | No | No |
> | privateLinkServices | Yes | Yes | > | publicIPAddresses | Yes | Yes | > | publicIPPrefixes | Yes | Yes |
Jump to a resource provider namespace:
> | serviceEndpointPolicies | Yes | Yes | > | trafficManagerGeographicHierarchies | No | No | > | trafficmanagerprofiles | Yes, see [note below](#network-limitations) | Yes |
-> | trafficmanagerprofiles/heatMaps | No | No |
+> | trafficmanagerprofiles / heatMaps | No | No |
> | trafficManagerUserMetricsKeys | No | No | > | virtualHubs | Yes | Yes | > | virtualNetworkGateways | Yes | Yes | > | virtualNetworks | Yes | Yes |
-> | virtualNetworks / subnets | No | No |
+> | virtualNetworks / privateDnsZoneLinks | No | No |
+> | virtualNetworks / taggedTrafficConsumers | No | No |
> | virtualNetworkTaps | Yes | Yes |
-> | virtualWans | Yes | No |
+> | virtualRouters | Yes | Yes |
+> | virtualWans | Yes | Yes |
> | vpnGateways | Yes | Yes | > | vpnServerConfigurations | Yes | Yes | > | vpnSites | Yes | Yes |
-> | webApplicationFirewallPolicies | Yes | Yes |
<a id="network-limitations"></a>
Jump to a resource provider namespace:
> Azure IP Groups and Azure Firewall Policies don't support PATCH operations, which means they don't support updating tags through the portal. Instead, use the update commands for those resources. For example, you can update tags for an IP group with the [az network ip-group update](/cli/azure/network/ip-group#az-network-ip-group-update) command.
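For example, a minimal sketch of replacing the tags on an IP group this way (resource names are placeholders):

```azurecli
# Update tags on an IP group with the update command instead of PATCH
az network ip-group update \
  --name myIpGroup \
  --resource-group myResourceGroup \
  --set tags.environment=production tags.costCenter=12345
```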
-## Microsoft.Notebooks
+## Microsoft.NetworkCloud
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | bareMetalMachines | Yes | Yes |
+> | clusterManagers | Yes | Yes |
+> | clusters | Yes | Yes |
+> | rackManifests | Yes | Yes |
+> | racks | Yes | Yes |
+> | virtualMachines | Yes | Yes |
+> | workloadNetworks | Yes | Yes |
+
+## Microsoft.NetworkFunction
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | NotebookProxies | No | No |
+> | azureTrafficCollectors | Yes | Yes |
+> | meshVpns | Yes | Yes |
+> | meshVpns / connectionPolicies | Yes | Yes |
+> | meshVpns / privateEndpointConnectionProxies | No | No |
+> | meshVpns / privateEndpointConnections | No | No |
## Microsoft.NotificationHubs
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | osNamespaces | No | No |
+> | osNamespaces | Yes | Yes |
## Microsoft.OffAzure
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | energyServices | No | No |
+> | energyServices | Yes | Yes |
-## Microsoft.OperationalInsights
+## Microsoft.OpenLogisticsPlatform
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | clusters | Yes | Yes |
-> | deletedWorkspaces | No | No |
-> | linkTargets | No | No |
-> | querypacks | Yes | Yes |
-> | storageInsightConfigs | No | No |
+> | applicationManagers | Yes | Yes |
+> | applicationManagers / applicationRegistrations | No | No |
+> | applicationManagers / eventGridFilters | No | No |
+> | applicationRegistrationInvites | No | No |
+> | applicationWorkspaces | Yes | Yes |
+> | applicationWorkspaces / applications | No | No |
+> | applicationWorkspaces / applications / applicationRegistrationInvites | No | No |
+> | shareInvites | No | No |
> | workspaces | Yes | Yes |
-> | workspaces / dataExports | No | No |
-> | workspaces / dataSources | No | No |
-> | workspaces / linkedServices | No | No |
-> | workspaces / linkedStorageAccounts | No | No |
-> | workspaces / metadata | No | No |
-> | workspaces / query | No | No |
-> | workspaces / scopedPrivateLinkProxies | No | No |
-> | workspaces / storageInsightConfigs | No | No |
-> | workspaces / tables | No | No |
+> | workspaces / applicationRegistrations | No | No |
+> | workspaces / applications | No | No |
+> | workspaces / eventGridFilters | No | No |
+> | workspaces / shares | No | No |
+> | workspaces / shareSubscriptions | No | No |
-## Microsoft.OperationsManagement
+## Microsoft.Orbital
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | managementassociations | No | No |
-> | managementconfigurations | Yes | Yes |
-> | solutions | Yes | Yes |
-> | views | Yes | Yes |
+> | contactProfiles | Yes | Yes |
+> | edgeSites | Yes | Yes |
+> | globalCommunicationsSites | No | No |
+> | groundStations | Yes | Yes |
+> | l2Connections | Yes | Yes |
+> | l3Connections | Yes | Yes |
+> | orbitalGateways | Yes | Yes |
+> | spacecrafts | Yes | Yes |
+> | spacecrafts / contacts | No | No |
## Microsoft.Peering
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | PlayerAccountPools | No | No |
-> | Titles | No | No |
+> | playeraccountpools | Yes | Yes |
+> | titles | Yes | Yes |
+> | titles / segments | No | No |
+> | titles / titledatakeyvalues | No | No |
+> | titles / titleinternaldatakeyvalues | No | No |
## Microsoft.PolicyInsights
Jump to a resource provider namespace:
> | - | -- | -- | > | accounts | Yes | Yes | > | deletedAccounts | No | No |
+> | getDefaultAccount | No | No |
+> | removeDefaultAccount | No | No |
+> | setDefaultAccount | No | No |
## Microsoft.ProviderHub
Jump to a resource provider namespace:
> | - | -- | -- | > | accounts | Yes | Yes | > | accounts / kafkaConfigurations | No | No |
-> | deletedAccounts | No | No |
> | getDefaultAccount | No | No | > | removeDefaultAccount | No | No | > | setDefaultAccount | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | Workspaces | No | No |
+> | Workspaces | Yes | Yes |
## Microsoft.Quota
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | accounts | No | No |
-> | accounts / modeling | No | No |
-> | accounts / serviceEndpoints | No | No |
+> | accounts | Yes | Yes |
+> | accounts / modeling | Yes | Yes |
+> | accounts / serviceEndpoints | Yes | Yes |
## Microsoft.RecoveryServices
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | availabilityStatuses | No | No |
-> | childAvailabilityStatuses | No | No |
> | childResources | No | No | > | emergingissues | No | No | > | events | No | No |
Jump to a resource provider namespace:
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | deployments | Yes | No |
-> | deployments / operations | No | No |
> | deploymentScripts | Yes | Yes | > | deploymentScripts / logs | No | No |
-> | deploymentStacks | No | No |
> | deploymentStacks / snapshots | No | No | > | links | No | No |
-> | providers | No | No |
> | resourceGroups | Yes | No | > | subscriptions | Yes | No |
+> | tags | No | No |
> | templateSpecs | Yes | Yes | > | templateSpecs / versions | Yes | Yes | > | tenants | No | No |
Jump to a resource provider namespace:
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | applications | Yes | Yes |
-> | resources | Yes | Yes |
+> | resources | Yes | No |
> | saasresources | No | No |
-## Microsoft.Scheduler
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Supports tags | Tag in cost report |
-> | - | -- | -- |
-> | jobcollections | Yes | Yes |
- ## Microsoft.Scom > [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | managedInstances | No | No |
+> | managedInstances | Yes | Yes |
## Microsoft.ScVmm > [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | AvailabilitySets | No | No |
-> | clouds | No | No |
-> | VirtualMachines | No | No |
-> | VirtualMachineTemplates | No | No |
-> | VirtualNetworks | No | No |
-> | vmmservers | No | No |
+> | availabilitysets | Yes | Yes |
+> | Clouds | Yes | Yes |
+> | VirtualMachines | Yes | Yes |
+> | VirtualMachineTemplates | Yes | Yes |
+> | VirtualNetworks | Yes | Yes |
+> | vmmservers | Yes | Yes |
> | VMMServers / InventoryItems | No | No | ## Microsoft.Search
Jump to a resource provider namespace:
> | alertsSuppressionRules | No | No | > | allowedConnections | No | No | > | antiMalwareSettings | No | No |
-> | applicationWhitelistings | No | No |
> | assessmentMetadata | No | No | > | assessments | No | No | > | assessments / governanceAssignments | No | No |
Jump to a resource provider namespace:
> | securityStatuses | No | No | > | securityStatusesSummaries | No | No | > | serverVulnerabilityAssessments | No | No |
+> | serverVulnerabilityAssessmentsSettings | No | No |
> | settings | No | No | > | sqlVulnerabilityAssessments | No | No | > | standards | Yes | Yes |
Jump to a resource provider namespace:
> | topologies | No | No | > | workspaceSettings | No | No |
-## Microsoft.SecurityGraph
+## Microsoft.SecurityDetonation
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | diagnosticSettings | No | No |
-> | diagnosticSettingsCategories | No | No |
+> | chambers | Yes | Yes |
+
+## Microsoft.SecurityDevOps
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | gitHubConnectors | Yes | Yes |
+> | gitHubConnectors / gitHubRepos | No | No |
## Microsoft.SecurityInsights
Jump to a resource provider namespace:
> | bookmarks | No | No | > | cases | No | No | > | dataConnectors | No | No |
-> | dataConnectorsCheckRequirements | No | No |
> | enrichment | No | No | > | entities | No | No |
-> | entityQueries | No | No |
> | entityQueryTemplates | No | No |
+> | fileImports | No | No |
> | incidents | No | No | > | metadata | No | No | > | MitreCoverageRecords | No | No |
-> | officeConsents | No | No |
> | onboardingStates | No | No |
+> | securityMLAnalyticsSettings | No | No |
> | settings | No | No | > | sourceControls | No | No | > | threatIntelligence | No | No |
-> | watchlists | No | No |
## Microsoft.SerialConsole
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | applications | Yes | Yes |
> | clusters | Yes | Yes | > | clusters / applications | No | No |
-> | containerGroups | Yes | Yes |
-> | containerGroupSets | Yes | Yes |
> | edgeclusters | Yes | Yes | > | edgeclusters / applications | No | No | > | managedclusters | Yes | Yes |
Jump to a resource provider namespace:
> | managedclusters / applicationTypes | No | No | > | managedclusters / applicationTypes / versions | No | No | > | managedclusters / nodetypes | No | No |
-> | networks | Yes | Yes |
-> | secretstores | Yes | Yes |
-> | secretstores / certificates | No | No |
-> | secretstores / secrets | No | No |
-> | volumes | Yes | Yes |
-
-## Microsoft.ServiceFabricMesh
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Supports tags | Tag in cost report |
-> | - | -- | -- |
-> | applications | Yes | Yes |
-> | containerGroups | Yes | Yes |
-> | gateways | Yes | Yes |
-> | networks | Yes | Yes |
-> | secrets | Yes | Yes |
-> | volumes | Yes | Yes |
## Microsoft.ServiceLinker
Jump to a resource provider namespace:
> | dryruns | No | No | > | linkers | No | No |
-## Microsoft.Services
+## Microsoft.ServicesHub
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | providerRegistrations | No | No |
-> | providerRegistrations / resourceTypeRegistrations | No | No |
-> | rollouts | Yes | Yes |
+> | connectors | Yes | Yes |
+> | supportOfferingEntitlement | No | No |
+> | workspaces | No | No |
## Microsoft.SignalRService
Jump to a resource provider namespace:
> | accounts / groupPolicies | No | No | > | accounts / jobs | No | No | > | accounts / models | No | No |
+> | accounts / networks | No | No |
> | accounts / storageContainers | No | No | > | images | No | No | > | quotas | No | No |
Jump to a resource provider namespace:
> | applications | Yes | Yes | > | jitRequests | Yes | Yes | -
-## Microsoft.SQL
+## Microsoft.Sql
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | instancePools | Yes | Yes |
-> | longtermRetentionManagedInstance / longtermRetentionDatabase / longtermRetentionBackup | No | No |
-> | longtermRetentionServer / longtermRetentionDatabase / longtermRetentionBackup | No | No |
> | managedInstances | Yes | Yes | > | managedInstances / administrators | No | No |
-> | managedInstances / databases | No | No |
+> | managedInstances / databases | Yes | Yes |
> | managedInstances / databases / backupLongTermRetentionPolicies | No | No |
-> | managedInstances / databases / backupShortTermRetentionPolicies | No | No |
-> | managedInstances / databases / schemas / tables / columns / sensitivityLabels | No | No |
> | managedInstances / databases / vulnerabilityAssessments | No | No |
-> | managedInstances / databases / vulnerabilityAssessments / rules / baselines | No | No |
-> | managedInstances / encryptionProtector | No | No |
-> | managedInstances / keys | No | No |
-> | managedInstances / restorableDroppedDatabases / backupShortTermRetentionPolicies | No | No |
+> | managedInstances / dnsAliases | No | No |
+> | managedInstances / metricDefinitions | No | No |
+> | managedInstances / metrics | No | No |
+> | managedInstances / recoverableDatabases | No | No |
> | managedInstances / sqlAgent | No | No |
+> | managedInstances / startStopSchedules | No | No |
+> | managedInstances / tdeCertificates | No | No |
> | managedInstances / vulnerabilityAssessments | No | No | > | servers | Yes | Yes | > | servers / administrators | No | No |
+> | servers / advancedThreatProtectionSettings | No | No |
> | servers / advisors | No | No |
+> | servers / aggregatedDatabaseMetrics | No | No |
> | servers / auditingSettings | No | No |
+> | servers / automaticTuning | No | No |
> | servers / communicationLinks | No | No |
-> | servers / databases | Yes (see [note below](#sqlnote)) | Yes |
+> | servers / connectionPolicies | No | No |
+> | servers / databases | Yes | Yes |
+> | servers / databases / activate | No | No |
+> | servers / databases / activatedatabase | No | No |
+> | servers / databases / advancedThreatProtectionSettings | No | No |
> | servers / databases / advisors | No | No | > | servers / databases / auditingSettings | No | No |
+> | servers / databases / auditRecords | No | No |
+> | servers / databases / automaticTuning | No | No |
> | servers / databases / backupLongTermRetentionPolicies | No | No | > | servers / databases / backupShortTermRetentionPolicies | No | No |
+> | servers / databases / databaseState | No | No |
> | servers / databases / dataMaskingPolicies | No | No |
+> | servers / databases / dataMaskingPolicies / rules | No | No |
+> | servers / databases / deactivate | No | No |
+> | servers / databases / deactivatedatabase | No | No |
> | servers / databases / extensions | No | No |
+> | servers / databases / geoBackupPolicies | No | No |
+> | servers / databases / ledgerDigestUploads | No | No |
+> | servers / databases / metricDefinitions | No | No |
+> | servers / databases / metrics | No | No |
+> | servers / databases / recommendedSensitivityLabels | No | No |
> | servers / databases / securityAlertPolicies | No | No | > | servers / databases / syncGroups | No | No | > | servers / databases / syncGroups / syncMembers | No | No |
+> | servers / databases / topQueries | No | No |
+> | servers / databases / topQueries / queryText | No | No |
> | servers / databases / transparentDataEncryption | No | No |
+> | servers / databases / VulnerabilityAssessment | No | No |
+> | servers / databases / vulnerabilityAssessments | No | No |
+> | servers / databases / VulnerabilityAssessmentScans | No | No |
+> | servers / databases / VulnerabilityAssessmentSettings | No | No |
> | servers / databases / workloadGroups | No | No |
+> | servers / databaseSecurityPolicies | No | No |
+> | servers / devOpsAuditingSettings | No | No |
+> | servers / disasterRecoveryConfiguration | No | No |
+> | servers / dnsAliases | No | No |
+> | servers / elasticPoolEstimates | No | No |
> | servers / elasticpools | Yes | Yes |
+> | servers / elasticPools / advisors | No | No |
+> | servers / elasticpools / metricdefinitions | No | No |
+> | servers / elasticpools / metrics | No | No |
> | servers / encryptionProtector | No | No |
+> | servers / extendedAuditingSettings | No | No |
> | servers / failoverGroups | No | No |
-> | servers / firewallRules | No | No |
+> | servers / import | No | No |
+> | servers / jobAccounts | Yes | Yes |
> | servers / jobAgents | Yes | Yes | > | servers / jobAgents / jobs | No | No |
-> | servers / jobAgents / jobs / steps | No | No |
> | servers / jobAgents / jobs / executions | No | No |
+> | servers / jobAgents / jobs / steps | No | No |
> | servers / keys | No | No |
+> | servers / recommendedElasticPools | No | No |
+> | servers / recoverableDatabases | No | No |
> | servers / restorableDroppedDatabases | No | No |
-> | servers / serviceobjectives | No | No |
+> | servers / securityAlertPolicies | No | No |
+> | servers / serviceObjectives | No | No |
+> | servers / syncAgents | No | No |
> | servers / tdeCertificates | No | No |
+> | servers / usages | No | No |
> | servers / virtualNetworkRules | No | No |
-> | virtualClusters | No | No |
+> | servers / vulnerabilityAssessments | No | No |
+> | virtualClusters | Yes | Yes |
<a id="sqlnote"></a> > [!NOTE] > The Master database doesn't support tags, but other databases, including Azure Synapse Analytics databases, support tags. Azure Synapse Analytics databases must be in Active (not Paused) state. + ## Microsoft.SqlVirtualMachine > [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | SqlVirtualMachineGroups | Yes | Yes |
-> | SqlVirtualMachineGroups / AvailabilityGroupListeners | No | No |
> | SqlVirtualMachines | Yes | Yes | ## Microsoft.Storage
Jump to a resource provider namespace:
> | caches / storageTargets | No | No | > | usageModels | No | No |
-## Microsoft.StorageReplication
+## Microsoft.StoragePool
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | replicationGroups | No | No |
+> | diskPools | Yes | Yes |
+> | diskPools / iscsiTargets | No | No |
## Microsoft.StorageSync
Jump to a resource provider namespace:
> | - | -- | -- | > | clusters | Yes | Yes | > | clusters / privateEndpoints | No | No |
-> | streamingjobs | Yes (see note below) | Yes |
+> | streamingjobs | Yes | Yes |
> [!NOTE] > You can't add a tag when streamingjobs is running. Stop the resource to add a tag.
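For example, a sketch of the stop-tag-start sequence with the Azure CLI (this assumes the `stream-analytics` extension; parameter spellings can vary by extension version, and names are placeholders):

```azurecli
# Stop the job so that the tag can be applied
az stream-analytics job stop --name myStreamingJob --resource-group myResourceGroup

# Apply the tag, then restart the job
az resource tag --tags environment=production \
  --resource-group myResourceGroup --name myStreamingJob \
  --resource-type "Microsoft.StreamAnalytics/streamingjobs"
az stream-analytics job start --name myStreamingJob --resource-group myResourceGroup
```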
Jump to a resource provider namespace:
> | cancel | No | No | > | changeTenantRequest | No | No | > | changeTenantStatus | No | No |
-> | CreateSubscription | No | No |
> | enable | No | No | > | policies | No | No | > | rename | No | No | > | SubscriptionDefinitions | No | No |
-> | SubscriptionOperations | No | No |
> | subscriptions | No | No |
+## microsoft.support
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | lookUpResourceId | No | No |
+> | services | No | No |
+> | services / problemclassifications | No | No |
+> | supporttickets | No | No |
+ ## Microsoft.Synapse > [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | kustoOperations | No | No |
> | privateLinkHubs | Yes | Yes | > | workspaces | Yes | Yes | > | workspaces / bigDataPools | Yes | Yes |
Jump to a resource provider namespace:
> | workspaces / kustoPools / attacheddatabaseconfigurations | No | No | > | workspaces / kustoPools / databases | No | No | > | workspaces / kustoPools / databases / dataconnections | No | No |
-> | workspaces / operationStatuses | No | No |
> | workspaces / sqlDatabases | Yes | Yes | > | workspaces / sqlPools | Yes | Yes |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | testBaseAccounts | No | No |
+> | testBaseAccounts | Yes | Yes |
> | testBaseAccounts / customerEvents | No | No | > | testBaseAccounts / emailEvents | No | No | > | testBaseAccounts / flightingRings | No | No |
-> | testBaseAccounts / packages | No | No |
+> | testBaseAccounts / packages | Yes | Yes |
> | testBaseAccounts / packages / favoriteProcesses | No | No | > | testBaseAccounts / packages / osUpdates | No | No | > | testBaseAccounts / testSummaries | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | accounts | No | No |
+> | accounts | Yes | Yes |
## Microsoft.VirtualMachineImages
Jump to a resource provider namespace:
> | imageTemplates | Yes | Yes | > | imageTemplates / runOutputs | No | No |
+## microsoft.visualstudio
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | account | Yes | Yes |
+> | account / extension | Yes | Yes |
+> | account / project | Yes | Yes |
+ ## Microsoft.VMware > [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | arczones | No | No |
-> | resourcepools | No | No |
-> | vcenters | No | No |
+> | arczones | Yes | Yes |
+> | resourcepools | Yes | Yes |
+> | vcenters | Yes | Yes |
> | VCenters / InventoryItems | No | No |
-> | virtualmachines | No | No |
-> | virtualmachinetemplates | No | No |
-> | virtualnetworks | No | No |
+> | virtualmachines | Yes | Yes |
+> | virtualmachinetemplates | Yes | Yes |
+> | virtualnetworks | Yes | Yes |
## Microsoft.VMwareCloudSimple
Jump to a resource provider namespace:
> | plans | Yes | No | > | registeredSubscriptions | No | No |
-## Microsoft.WindowsDefenderATP
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Supports tags | Tag in cost report |
-> | - | -- | -- |
-> | diagnosticSettings | No | No |
-> | diagnosticSettingsCategories | No | No |
- ## Microsoft.Web > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | certificates | Yes | Yes | > | connectionGateways | Yes | Yes | > | connections | Yes | Yes |
+> | containerApps | Yes | Yes |
> | customApis | Yes | Yes |
+> | customhostnameSites | No | No |
> | deletedSites | No | No | > | functionAppStacks | No | No | > | generateGithubAccessTokenForAppserviceCLI | No | No |
Jump to a resource provider namespace:
> | serverFarms / firstPartyApps | No | No | > | serverFarms / firstPartyApps / keyVaultSettings | No | No | > | sites | Yes | Yes |
-> | sites / config | No | No |
> | sites / eventGridFilters | No | No | > | sites / hostNameBindings | No | No | > | sites / networkConfig | No | No |
Jump to a resource provider namespace:
> | sites / slots / networkConfig | No | No | > | sourceControls | No | No | > | staticSites | Yes | Yes |
+> | staticSites / builds | No | No |
+> | staticSites / builds / userProvidedFunctionApps | No | No |
+> | staticSites / userProvidedFunctionApps | No | No |
> | validate | No | No | > | verifyHostingEnvironmentVnet | No | No | > | webAppStacks | No | No |
+> | workerApps | Yes | Yes |
## Microsoft.WindowsESU
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | migrationAgents | No | No |
-> | workloads | No | No |
+> | migrationAgents | Yes | Yes |
+> | workloads | Yes | Yes |
> | workloads / instances | No | No | > | workloads / versions | No | No | > | workloads / versions / artifacts | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | monitors | No | No |
+> | monitors | Yes | Yes |
> | monitors / providerInstances | No | No |
-> | phpWorkloads | No | No |
+> | phpWorkloads | Yes | Yes |
> | phpWorkloads / wordpressInstances | No | No |
-> | sapVirtualInstances | No | No |
-> | sapVirtualInstances / applicationInstances | No | No |
-> | sapVirtualInstances / centralInstances | No | No |
-> | sapVirtualInstances / databaseInstances | No | No |
+> | sapVirtualInstances | Yes | Yes |
+> | sapVirtualInstances / applicationInstances | Yes | Yes |
+> | sapVirtualInstances / centralInstances | Yes | Yes |
+> | sapVirtualInstances / databaseInstances | Yes | Yes |
## Next steps
azure-resource-manager Deployment Complete Mode Deletion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-complete-mode-deletion.md
Title: Complete mode deletion description: Shows how resource types handle complete mode deletion in Azure Resource Manager templates. Previously updated : 04/19/2022 Last updated : 04/20/2022 # Deletion of Azure resources for complete mode deployments
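For reference, complete mode is selected when the deployment is submitted. A minimal sketch with the Azure CLI (resource group and template names are placeholders):

```azurecli
# Deploy in complete mode: resources in the group that are not in the template
# may be deleted, subject to the per-type behavior listed below
az deployment group create \
  --resource-group myResourceGroup \
  --mode Complete \
  --template-file azuredeploy.json
```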
The resources are listed by resource provider namespace. To match a resource pro
> | servers / queryTexts | No | > | servers / recoverableServers | No | > | servers / resetQueryPerformanceInsightData | No |
-> | servers / start | No |
-> | servers / stop | No |
> | servers / topQueryStatistics | No | > | servers / virtualNetworkRules | No | > | servers / waitStatistics | No |
The resources are listed by resource provider namespace. To match a resource pro
> | servers / queryTexts | No | > | servers / recoverableServers | No | > | servers / resetQueryPerformanceInsightData | No |
-> | servers / start | No |
-> | servers / stop | No |
> | servers / topQueryStatistics | No |
-> | servers / upgrade | No |
> | servers / virtualNetworkRules | No | > | servers / waitStatistics | No |
The resources are listed by resource provider namespace. To match a resource pro
> | - | -- | > | domains | Yes | > | domains / topics | No |
+> | eventSubscriptions | No |
+> | extensionTopics | No |
> | partnerConfigurations | Yes | > | partnerDestinations | Yes | > | partnerNamespaces | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
+> | applyUpdates | No |
+> | configurationAssignments | No |
> | maintenanceConfigurations | Yes | > | publicMaintenanceConfigurations | No |
+> | updates | No |
## Microsoft.ManagedIdentity
The resources are listed by resource provider namespace. To match a resource pro
> | networkExperimentProfiles | Yes | > | networkIntentPolicies | Yes | > | networkInterfaces | Yes |
+> | networkManagerConnections | No |
> | networkManagers | Yes | > | networkProfiles | Yes | > | networkSecurityGroups | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | builtInTemplateSpecs | No |
-> | builtInTemplateSpecs / versions | No |
-> | bulkDelete | No |
-> | calculateTemplateHash | No |
> | deployments | No | > | deploymentScripts | Yes | > | deploymentScripts / logs | No | > | deploymentStacks / snapshots | No | > | links | No |
-> | notifyResourceJobs | No |
-> | providers | No |
> | resourceGroups | No |
-> | resources | No |
> | subscriptions | No |
-> | subscriptions / providers | No |
-> | subscriptions / resourceGroups | No |
-> | subscriptions / resourcegroups / resources | No |
-> | subscriptions / resources | No |
-> | subscriptions / tagnames | No |
-> | subscriptions / tagNames / tagValues | No |
> | tags | No | > | templateSpecs | Yes | > | templateSpecs / versions | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | managedInstances / metrics | No | > | managedInstances / recoverableDatabases | No | > | managedInstances / sqlAgent | No |
-> | managedInstances / start | No |
> | managedInstances / startStopSchedules | No |
-> | managedInstances / stop | No |
> | managedInstances / tdeCertificates | No | > | managedInstances / vulnerabilityAssessments | No | > | servers | Yes |
azure-signalr Signalr Howto Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-diagnostic-logs.md
The following code is an example of an archive log JSON string:
To view resource logs, follow these steps: 1. Click `Logs` in your target Log Analytics.-
- ![Log Analytics menu item](./media/signalr-tutorial-diagnostic-logs/log-analytics-menu-item.png)
+ :::image type="content" alt-text="Log Analytics menu item" source="./media/signalr-tutorial-diagnostic-logs/log-analytics-menu-item.png" lightbox="./media/signalr-tutorial-diagnostic-logs/log-analytics-menu-item.png":::
2. Enter `SignalRServiceDiagnosticLogs` and select a time range to query resource logs. For advanced queries, see [Get started with Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-tutorial.md)
- ![Query log in Log Analytics](./media/signalr-tutorial-diagnostic-logs/query-log-in-log-analytics.png)
+ :::image type="content" alt-text="Query log in Log Analytics" source="./media/signalr-tutorial-diagnostic-logs/query-log-in-log-analytics.png" lightbox="./media/signalr-tutorial-diagnostic-logs/query-log-in-log-analytics.png":::
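The `SignalRServiceDiagnosticLogs` table can also be queried from the command line. A minimal sketch using the Azure CLI `log-analytics` extension (the workspace GUID is a placeholder):

```azurecli
# Pull the most recent SignalR resource log entries from the workspace
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query "SignalRServiceDiagnosticLogs | where TimeGenerated > ago(1h) | take 50"
```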
+
+To use a sample query for SignalR Service, follow these steps:
+1. Select `Logs` in your target Log Analytics workspace.
+2. Select `Queries` to open the query explorer.
+3. Select `Resource type` to group the sample queries by resource type.
+4. Select `Run` to run the query.
+ :::image type="content" alt-text="Sample query in Log Analytics" source="./media/signalr-tutorial-diagnostic-logs/log-analytics-sample-query.png" lightbox="./media/signalr-tutorial-diagnostic-logs/log-analytics-sample-query.png":::
+ Archive log columns include elements listed in the following table:
If you find that you cannot establish SignalR client connections to Azure Signal
### Get help We recommend you troubleshoot by yourself first. Most issues are caused by app server or network issues. Follow [troubleshooting guide with resource log](#troubleshooting-with-resource-logs) and [basic trouble shooting guide](https://github.com/Azure/azure-signalr/blob/dev/docs/tsg.md) to find the root cause.
-If the issue still can't be resolved, then consider open an issue in GitHub or create ticket in Azure Portal.
+If the issue still can't be resolved, consider opening an issue in GitHub or creating a ticket in the Azure portal.
Provide: 1. Time range about 30 minutes when the issue occurs 2. Azure SignalR Service's resource ID
azure-signalr Signalr Howto Troubleshoot Live Trace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-troubleshoot-live-trace.md
You can enable and disable the live trace feature with a single click. You can a
> [!NOTE] > Please note that the live traces will be counted as outbound messages.
+> Azure Active Directory access to the live trace tool is not supported. Enable `Access Key` in the `Keys` menu instead.
## Launch the live trace tool
azure-sql Service Tiers Sql Database Vcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tiers-sql-database-vcore.md
Previously updated : 04/13/2022 Last updated : 04/21/2022 # vCore purchasing model - Azure SQL Database
Compute tier options in the vCore model include the provisioned and [serverless]
Hardware configurations in the vCore model include Gen4, Gen5, M-series, Fsv2-series, and DC-series. Hardware configuration defines compute and memory limits and other characteristics that impact workload performance.
+Certain hardware configurations such as Gen5 may use more than one type of processor (CPU), as described in [Compute resources (CPU and memory)](#compute-resources-cpu-and-memory). While a given database or elastic pool tends to stay on the hardware with the same CPU type for a long time (commonly for multiple months), there are certain events that can cause a database or pool to be moved to hardware that uses a different CPU type. For example, a database or pool can be moved if it's scaled up or down to a different service objective, or if the current infrastructure in a datacenter is approaching its capacity limits, or if the currently used hardware is being decommissioned due to its end of life.
+
+For some workloads, a move to a different CPU type can change performance. SQL Database configures hardware with the goal of providing predictable workload performance even if the CPU type changes, keeping performance changes within a narrow band. However, across the wide spectrum of customer workloads running in SQL Database, and as new types of CPUs become available, it is possible to occasionally see more noticeable changes in performance if a database or pool moves to a different CPU type.
+
+Regardless of CPU type used, resource limits for a database or elastic pool remain the same as long as the database stays on the same service objective.
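To confirm which service objective a database is currently on, a quick check with the Azure CLI looks like this (a sketch; names are placeholders):

```azurecli
# Show the current service objective of a database
az sql db show \
  --resource-group myResourceGroup \
  --server myserver \
  --name mydb \
  --query "currentServiceObjectiveName"
```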
+ ### Gen4/Gen5 - Gen4/Gen5 hardware provides balanced compute and memory resources, and is suitable for most database workloads that do not have higher memory, higher vCore, or faster single vCore requirements as provided by Fsv2-series or M-series.
If you need DC-series in a currently unsupported region, [submit a support ticke
The following table compares compute resources in different hardware configurations and compute tiers:
-|Hardware configuration |Compute |Memory |
+|Hardware configuration |CPU |Memory |
|:|:|:| |Gen4 |- Intel&reg; E5-2673 v3 (Haswell) 2.4-GHz processors<br>- Provision up to 24 vCores (1 vCore = 1 physical core) |- 7 GB per vCore<br>- Provision up to 168 GB| |Gen5 |**Provisioned compute**<br>- Intel&reg; E5-2673 v4 (Broadwell) 2.3-GHz, Intel&reg; SP-8160 (Skylake)\*, and Intel&reg; 8272CL (Cascade Lake) 2.5 GHz\* processors<br>- Provision up to 80 vCores (1 vCore = 1 hyper-thread)<br><br>**Serverless compute**<br>- Intel&reg; E5-2673 v4 (Broadwell) 2.3-GHz and Intel&reg; SP-8160 (Skylake)* processors<br>- Auto-scale up to 40 vCores (1 vCore = 1 hyper-thread)|**Provisioned compute**<br>- 5.1 GB per vCore<br>- Provision up to 408 GB<br><br>**Serverless compute**<br>- Auto-scale up to 24 GB per vCore<br>- Auto-scale up to 120 GB max|
azure-web-pubsub Howto Troubleshoot Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-troubleshoot-resource-logs.md
Title: How to troubleshoot with Azure Web PubSub service resource logs description: Learn how to get and troubleshoot with resource logs--++ Previously updated : 11/08/2021 Last updated : 04/01/2022 # How to troubleshoot with resource logs
The Azure Web PubSub service live trace tool has ability to collect resource log
:::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-logs-with-live-trace-tool.png" alt-text="Screenshot of launching the live trace tool.":::
+> Azure Active Directory access to the live trace tool is not yet supported. Enable `Access Key` in the `Keys` menu instead.
+ ### Capture the resource logs The live trace tool provides some fundamental functionalities to help you capture the resource logs for troubleshooting.
Currently Azure Web PubSub supports integrate with [Azure Storage](../azure-moni
> [!NOTE] > The storage account should be the same region to Azure Web PubSub service.
-### Archive to a storage account
+### Archive to Azure Storage Account
Logs are stored in the storage account that is configured in the **Diagnostics settings** pane. A container named `insights-logs-<CATEGORY_NAME>` is created automatically to store resource logs. Inside the container, logs are stored in the file `resourceId=/SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/RESOURCEGROUPS/XXXX/PROVIDERS/MICROSOFT.SIGNALRSERVICE/SIGNALR/XXX/y=YYYY/m=MM/d=DD/h=HH/m=00/PT1H.json`. The path is a combination of the `resource ID` and the `Date Time`. The log files are split by `hour`, so the minutes segment is always `m=00`.
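To browse the archived files, something like the following works (a sketch; the storage account name is a placeholder, and the container name follows the `insights-logs-<CATEGORY_NAME>` convention described above):

```azurecli
# List archived resource log blobs for one category
az storage blob list \
  --account-name mystorageaccount \
  --container-name insights-logs-connectivitylogs \
  --prefix "resourceId=" \
  --output table
```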
The following code is an example of an archive log JSON string:
} ```
+### Archive to Azure Log Analytics
+
+Once you check `Send to Log Analytics` and select a target Log Analytics workspace, the logs are stored in that workspace. To view the resource logs, follow these steps:
+
+1. Select `Logs` in your target Log Analytics.
+
+ :::image type="content" alt-text="Log Analytics menu item" source="./media/howto-troubleshoot-diagnostic-logs/log-analytics-menu-item.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/log-analytics-menu-item.png":::
+
+2. Enter `WebPubSubConnectivity`, `WebPubSubMessaging`, or `WebPubSubHttpRequest` and select a time range to query [connectivity logs](#connectivity-logs), [messaging logs](#messaging-logs), or [HTTP request logs](#http-request-logs), respectively. For advanced queries, see [Get started with Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-tutorial.md)
+
+ :::image type="content" alt-text="Query log in Log Analytics" source="./media/howto-troubleshoot-diagnostic-logs/query-log-in-log-analytics.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/query-log-in-log-analytics.png":::
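These tables can also be queried from the command line. A sketch using the Azure CLI `log-analytics` extension (the workspace GUID is a placeholder):

```azurecli
# Pull recent Web PubSub connectivity log entries from the workspace
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query "WebPubSubConnectivity | where TimeGenerated > ago(1h) | take 50"
```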
+
+To use a sample query for the Azure Web PubSub service, follow these steps:
+1. Select `Logs` in your target Log Analytics workspace.
+2. Select `Queries` to open the query explorer.
+3. Select `Resource type` to group the sample queries by resource type.
+4. Select `Run` to run the query.
+ :::image type="content" alt-text="Sample query in Log Analytics" source="./media/howto-troubleshoot-diagnostic-logs/log-analytics-sample-query.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/log-analytics-sample-query.png":::
+
+Archive log columns include elements listed in the following table:
+
+Name | Description
+- | -
+TimeGenerated | Log event time
+Collection | Collection of the log event. Allowed values are: `Connection`, `Authorization` and `Throttling`
+OperationName | Operation name of the event
+Location | Location of your Azure Web PubSub service
+Level | Log event level
+CallerIpAddress | IP address of your server/client
+Message | Detailed message of log event
+UserId | Identity of the user
+ConnectionId | Identity of the connection
+ConnectionType | Type of the connection. Allowed values are: `Server` \| `Client`. `Server`: connection from server side; `Client`: connection from client side
+TransportType | Transport type of the connection. Allowed values are: `Websockets` \| `ServerSentEvents` \| `LongPolling`
+ ## Troubleshoot with the resource logs When you find connections growing or dropping unexpectedly, you can take advantage of resource logs to troubleshoot. Typical issues often involve unexpected changes in the number of connections, connections reaching connection limits, and authorization failures.
batch Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Batch description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/10/2022 Last updated : 04/20/2022 --++
chaos-studio Chaos Studio Tutorial Aks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aks-cli.md
Title: Create an experiment that uses an AKS Chaos Mesh fault using Azure Chaos
description: Create an experiment that uses an AKS Chaos Mesh fault with the Azure CLI Previously updated : 11/11/2021 Last updated : 04/21/2022
Before you can run Chaos Mesh faults in Chaos Studio, you need to install Chaos
helm repo add chaos-mesh https://charts.chaos-mesh.org helm repo update kubectl create ns chaos-testing
- helm install chaos-mesh chaos-mesh/chaos-mesh --namespace=chaos-testing --version 2.0.3 --set chaosDaemon.runtime=containerd --set chaosDaemon.socketPath=/run/containerd/containerd.sock
+ helm install chaos-mesh chaos-mesh/chaos-mesh --namespace=chaos-testing --set chaosDaemon.runtime=containerd --set chaosDaemon.socketPath=/run/containerd/containerd.sock
``` 2. Verify that the Chaos Mesh pods are installed by running the following command:
chaos-studio Chaos Studio Tutorial Aks Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aks-portal.md
Title: Create an experiment that uses an AKS Chaos Mesh fault using Azure Chaos
description: Create an experiment that uses an AKS Chaos Mesh fault with the Azure portal Previously updated : 11/01/2021 Last updated : 04/21/2022
az aks get-credentials -g $RESOURCE_GROUP -n $CLUSTER_NAME
helm repo add chaos-mesh https://charts.chaos-mesh.org helm repo update kubectl create ns chaos-testing
-helm install chaos-mesh chaos-mesh/chaos-mesh --namespace=chaos-testing --version 2.0.3 --set chaosDaemon.runtime=containerd --set chaosDaemon.socketPath=/run/containerd/containerd.sock
+helm install chaos-mesh chaos-mesh/chaos-mesh --namespace=chaos-testing --set chaosDaemon.runtime=containerd --set chaosDaemon.socketPath=/run/containerd/containerd.sock
``` 2. Verify that the Chaos Mesh pods are installed by running the following command:
With your AKS cluster now onboarded, you can create your experiment. A chaos exp
namespace: chaos-testing spec: action: pod-failure
- mode: one
- duration: '30s'
+ mode: all
+ duration: '600s'
selector: namespaces: - default
With your AKS cluster now onboarded, you can create your experiment. A chaos exp
```yaml action: pod-failure
- mode: one
- duration: '30s'
+ mode: all
+ duration: '600s'
selector: namespaces: - default
With your AKS cluster now onboarded, you can create your experiment. A chaos exp
4. Use a [YAML-to-JSON converter like this one](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minimize it (or use the command-line conversion shown after these steps). ```json
- {"action":"pod-failure","mode":"one","duration":"30s","selector":{"namespaces":["default"]}}
+ {"action":"pod-failure","mode":"all","duration":"600s","selector":{"namespaces":["default"]}}
```
- 5. Paste the minimized JSON into the **json** field in the portal.
--
+ 5. Paste the minimized JSON into the **jsonSpec** field in the portal.
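As an alternative to the web converter in step 4, the same conversion can be done locally. A sketch assuming Python 3 with the PyYAML package installed (`pod-failure.yaml` is a placeholder file name):

```bash
# Convert the Chaos Mesh YAML spec to minimized JSON
python3 -c 'import json, sys, yaml; print(json.dumps(yaml.safe_load(sys.stdin), separators=(",", ":")))' < pod-failure.yaml
```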
Click **Next: Target resources >**
cognitive-services Prebuilt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/prebuilt.md
We see that multiple answers are received as part of the API response. Each answ
## Prebuilt API limits
+### API call limits
If you need to use larger documents than the limit allows, you can break the text into smaller chunks before sending them to the API (see the sketch after the limits below). In this context, a document is a single string of text characters. These numbers represent the **per individual API call limits**:
These numbers represent the **per individual API call limits**:
* Maximum size of a single document: 5,120 characters. * Maximum three responses per document.
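A minimal way to produce such chunks from the shell, assuming GNU coreutils (note that `split -b` counts bytes, so leave headroom if the text contains multi-byte characters):

```bash
# Split a long document into chunks of at most 5,120 bytes each;
# output files are named chunk_00, chunk_01, and so on
split -b 5120 -d document.txt chunk_
```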
+### Language codes supported
+The following language codes are supported by the prebuilt API. These language codes are in accordance with the [ISO 639-1 codes standard](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes).
+
+Language code|Language
+-|-
+af|Afrikaans
+am|Amharic
+ar|Arabic
+as|Assamese
+az|Azerbaijani
+ba|Bashkir
+be|Belarusian
+bg|Bulgarian
+bn|Bengali
+ca|Catalan, Valencian
+ckb|Central Kurdish
+cs|Czech
+cy|Welsh
+da|Danish
+de|German
+el|Greek, Modern (1453–)
+en|English
+eo|Esperanto
+es|Spanish, Castilian
+et|Estonian
+eu|Basque
+fa|Persian
+fi|Finnish
+fr|French
+ga|Irish
+gl|Galician
+gu|Gujarati
+he|Hebrew
+hi|Hindi
+hr|Croatian
+hu|Hungarian
+hy|Armenian
+id|Indonesian
+is|Icelandic
+it|Italian
+ja|Japanese
+ka|Georgian
+kk|Kazakh
+km|Central Khmer
+kn|Kannada
+ko|Korean
+ky|Kirghiz, Kyrgyz
+la|Latin
+lo|Lao
+lt|Lithuanian
+lv|Latvian
+mk|Macedonian
+ml|Malayalam
+mn|Mongolian
+mr|Marathi
+ms|Malay
+mt|Maltese
+my|Burmese
+ne|Nepali
+nl|Dutch, Flemish
+nn|Norwegian Nynorsk
+no|Norwegian
+or|Oriya
+pa|Punjabi, Panjabi
+pl|Polish
+ps|Pashto, Pushto
+pt|Portuguese
+ro|Romanian, Moldavian, Moldovan
+ru|Russian
+sa|Sanskrit
+sd|Sindhi
+si|Sinhala, Sinhalese
+sk|Slovak
+sl|Slovenian
+sq|Albanian
+sr|Serbian
+sv|Swedish
+sw|Swahili
+ta|Tamil
+te|Telugu
+tg|Tajik
+th|Thai
+tl|Tagalog
+tr|Turkish
+tt|Tatar
+ug|Uighur, Uyghur
+uk|Ukrainian
+ur|Urdu
+uz|Uzbek
+vi|Vietnamese
+yi|Yiddish
+zh|Chinese
+ ## Prebuilt API reference
-Visit the [full prebuilt API samples](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/Language/stable/2021-10-01/examples/questionanswering/SuccessfulQueryText.json) documentation to understand the input and output parameters required for calling the API.
+Visit the [full prebuilt API samples](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/Language/stable/2021-10-01/examples/questionanswering/SuccessfulQueryText.json) documentation to understand the input and output parameters required for calling the API.
cognitive-services Adding Synonyms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/tutorials/adding-synonyms.md
As you can see, when `troubleshoot` was not added as a synonym, we got a low con
> [!NOTE]
> Synonyms are case insensitive. Synonyms also might not work as expected if you add stop words as synonyms. The list of stop words can be found here: [List of stop words](https://github.com/Azure-Samples/azure-search-sample-dat).
> For instance, if you add the abbreviation **IT** for Information technology, the system might not be able to recognize Information Technology because **IT** is a stop word and is filtered when a query is processed.
+> Synonyms do not allow these special characters: ',', '?', ':', ';', '\"', '\'', '(', ')', '{', '}', '[', ']', '-', '+', '.', '/', '!', '*', '-', '_', '@', '#'
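To show how synonyms are supplied, here's a sketch of an update-synonyms request body for the Question Answering REST API, using the `troubleshoot` example from this section (the payload shape is an assumption; confirm it against the API reference):

```json
{
  "value": [
    {
      "alterations": [
        "fix",
        "troubleshoot"
      ]
    }
  ]
}
```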
## Next steps
cognitive-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/how-to/use-containers.md
Previously updated : 11/29/2021 Last updated : 04/20/2022 ms.devlang: azurecli
This command:
- Runs the Text Analytics for health container from the container image
- Allocates 6 CPU cores and 12 gigabytes (GB) of memory
- Exposes TCP port 5000 and allocates a pseudo-TTY for the container
-- Accepts the end user license agreement (Eula) and responsible AI (RAI) terms
+- Accepts the end user license agreement (EULA) and responsible AI (RAI) terms
- Automatically removes the container after it exits. The container image is still available on the host computer.

### Demo UI to visualize output
curl -X POST 'http://<serverURL>:5000/text/analytics/v3.1/entities/health' --hea
### Install the container using Azure Web App for Containers
-Azure [Web App for Containers](https://azure.microsoft.com/services/app-service/containers/) is an Azure resource dedicated to running containers in the cloud. It brings out-of-the-box capabilities such as autoscaling, support of docker containers and docker compose, HTTPS support and much more.
+Azure [Web App for Containers](https://azure.microsoft.com/services/app-service/containers/) is an Azure resource dedicated to running containers in the cloud. It brings out-of-the-box capabilities such as autoscaling, support for docker containers and docker compose, HTTPS support and much more.
> [!NOTE]
> Using Azure Web App you will automatically get a domain in the form of `<appservice_name>.azurewebsites.net`
az webapp config appsettings set -g $resource_group_name -n $appservice_name --s
You can also use an Azure Container Instance (ACI) to make deployment easier. ACI is a resource that allows you to run Docker containers on-demand in a managed, serverless Azure environment.
-See [How to use Azure Container Instances](../../../containers/azure-container-instance-recipe.md) for steps on deploying an ACI resource using the Azure portal. You can also use the below PowerShell script using Azure CLI, which will create a ACI on your subscription using the container image. Wait for the script to complete (approximately 25-30 minutes) before submitting the first request. Due to the limit on the maximum number of CPUs per ACI resource, do not select this option if you expect to submit more than 5 large documents (approximately 5000 characters each) per request.
+See [How to use Azure Container Instances](../../../containers/azure-container-instance-recipe.md) for steps on deploying an ACI resource using the Azure portal. You can also use the below PowerShell script using Azure CLI, which will create an ACI on your subscription using the container image. Wait for the script to complete (approximately 25-30 minutes) before submitting the first request. Due to the limit on the maximum number of CPUs per ACI resource, do not select this option if you expect to submit more than 5 large documents (approximately 5000 characters each) per request.
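As a rough equivalent of such a script, a single Azure CLI call might look like the following sketch (the image tag, CPU/memory sizing, and variable names are assumptions, not the referenced script):

```bash
# Create an ACI instance running the Text Analytics for health container.
# $resource_group_name, $aci_name, $endpoint, and $key are placeholders.
az container create \
  --resource-group $resource_group_name \
  --name $aci_name \
  --image mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:latest \
  --cpu 4 \
  --memory 12 \
  --ports 5000 \
  --environment-variables Eula=accept rai_terms=accept Billing=$endpoint ApiKey=$key
```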
See the [ACI regional support](../../../../container-instances/container-instances-region-availability.md) article for availability information.

> [!NOTE]
Use the host, `http://localhost:5000`, for container APIs.
You can use Postman or the example cURL request below to submit a query to the container you deployed, replacing the `serverURL` variable with the appropriate value. Note the version of the API in the URL for the container is different than the hosted API.
-```bash
-curl -X POST 'http://<serverURL>:5000/text/analytics/v3.1/entities/health' --header 'Content-Type: application/json' --header 'accept: application/json' --data-binary @example.json
-```
-
-The following JSON is an example of a JSON file attached to the Text Analytics for health request's POST body:
-
-```json
-example.json
-
-{
- "documents": [
- {
- "language": "en",
- "id": "1",
- "text": "Patient reported itchy sores after swimming in the lake."
- },
- {
- "language": "en",
- "id": "2",
- "text": "Prescribed 50mg benadryl, taken twice daily."
- }
- ]
-}
-```
-
-### Container response body
-
-The following JSON is an example of the Text Analytics for health response body from the containerized synchronous call:
-
-```json
-{
- "documents": [
- {
- "id": "1",
- "entities": [
- {
- "offset": 25,
- "length": 5,
- "text": "100mg",
- "category": "Dosage",
- "confidenceScore": 1.0
- },
- {
- "offset": 31,
- "length": 10,
- "text": "remdesivir",
- "category": "MedicationName",
- "confidenceScore": 1.0,
- "name": "remdesivir",
- "links": [
- {
- "dataSource": "UMLS",
- "id": "C4726677"
- },
- {
- "dataSource": "DRUGBANK",
- "id": "DB14761"
- },
- {
- "dataSource": "GS",
- "id": "6192"
- },
- {
- "dataSource": "MEDCIN",
- "id": "398132"
- },
- {
- "dataSource": "MMSL",
- "id": "d09540"
- },
- {
- "dataSource": "MSH",
- "id": "C000606551"
- },
- {
- "dataSource": "MTHSPL",
- "id": "3QKI37EEHE"
- },
- {
- "dataSource": "NCI",
- "id": "C152185"
- },
- {
- "dataSource": "NCI_FDA",
- "id": "3QKI37EEHE"
- },
- {
- "dataSource": "NDDF",
- "id": "018308"
- },
- {
- "dataSource": "RXNORM",
- "id": "2284718"
- },
- {
- "dataSource": "SNOMEDCT_US",
- "id": "870592005"
- },
- {
- "dataSource": "VANDF",
- "id": "4039395"
- }
- ]
- },
- {
- "offset": 42,
- "length": 13,
- "text": "intravenously",
- "category": "MedicationRoute",
- "confidenceScore": 1.0
- },
- {
- "offset": 73,
- "length": 7,
- "text": "120 min",
- "category": "Time",
- "confidenceScore": 0.94
- }
- ],
- "relations": [
- {
- "relationType": "DosageOfMedication",
- "entities": [
- {
- "ref": "#/documents/0/entities/0",
- "role": "Dosage"
- },
- {
- "ref": "#/documents/0/entities/1",
- "role": "Medication"
- }
- ]
- },
- {
- "relationType": "RouteOfMedication",
- "entities": [
- {
- "ref": "#/documents/0/entities/1",
- "role": "Medication"
- },
- {
- "ref": "#/documents/0/entities/2",
- "role": "Route"
- }
- ]
- },
- {
- "relationType": "TimeOfMedication",
- "entities": [
- {
- "ref": "#/documents/0/entities/1",
- "role": "Medication"
- },
- {
- "ref": "#/documents/0/entities/3",
- "role": "Time"
- }
- ]
- }
- ],
- "warnings": []
- }
- ],
- "errors": [],
- "modelVersion": "2021-03-01"
-}
-```
## Run the container with client library support
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/quickstart.md
Previously updated : 11/02/2021 Last updated : 04/20/2022 ms.devlang: csharp, java, javascript, python
zone_pivot_groups: programming-languages-text-analytics
Use this article to get started with Text Analytics for health using the client library and REST API. Follow these steps to try out example code for mining text:
+> [!IMPORTANT]
+> Fast Healthcare Interoperability Resources (FHIR) structuring is available for preview using the Language REST API. The client libraries are not currently supported.
+
::: zone pivot="programming-language-csharp"
[!INCLUDE [C# quickstart](includes/quickstarts/csharp-sdk.md)]
Use this article to get started with Text Analytics for health using the client
::: zone pivot="rest-api" ::: zone-end
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Previously updated : 03/15/2022 Last updated : 04/20/2022
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.
+## April 2022
+
+* Fast Healthcare Interoperability Resources (FHIR) support is available in the [Language REST API preview](text-analytics-for-health/quickstart.md?pivots=rest-api&tabs=language) for Text Analytics for health.
++
## March 2022

* Expanded language support for:
communication-services Pre Call Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/pre-call-diagnostics.md
The Pre-Call API enables developers to programmatically validate a client's re
## Accessing Pre-Call APIs
+>[!IMPORTANT]
+>Pre-Call diagnostics are available starting on the version [1.5.2-alpha.20220415.1](https://www.npmjs.com/package/@azure/communication-calling/v/1.5.2-alpha.20220415.1) of the Calling SDK. Make sure to use that version when trying the instructions below.
+
To access the Pre-Call API, you will need to initialize a `callClient` and provision an Azure Communication Services access token. There you can access the `PreCallDiagnostics` feature and the `startTest` method.

```javascript
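// The article's snippet is truncated here; this is a sketch of the pattern the
// paragraph above describes (the access token string is a placeholder you
// provision yourself).
import { CallClient, Features } from "@azure/communication-calling";
import { AzureCommunicationTokenCredential } from "@azure/communication-common";

const runPreCallDiagnostics = async () => {
  const callClient = new CallClient();
  const tokenCredential = new AzureCommunicationTokenCredential("<ACCESS_TOKEN>");

  // startTest runs the checks and resolves with the diagnostics results.
  const preCallDiagnosticsResult = await callClient
    .feature(Features.PreCallDiagnostics)
    .startTest(tokenCredential);

  return preCallDiagnosticsResult;
};
```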
communication-services Chat Hero Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/chat-hero-sample.md
> [This sample is available **on GitHub**.](https://github.com/Azure-Samples/communication-services-web-chat-hero)
-The Azure Communication Services **Group Chat Hero Sample** demonstrates how the Communication Services Chat Web SDK can be used to build a group calling experience.
+The Azure Communication Services **Group Chat Hero Sample** demonstrates how the Communication Services Chat Web SDK can be used to build a group chat experience.
In this sample quickstart, we'll learn how the sample works before running it on your local machine. We'll then deploy the sample to Azure using your own Azure Communication Services resources.
connectors Connectors Create Api Azureblobstorage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-azureblobstorage.md
ms.suite: integration Previously updated : 02/18/2022 Last updated : 04/18/2022 tags: connectors
You can connect to Blob Storage from both **Logic App (Consumption)** and **Logi
> [!IMPORTANT]
> A logic app workflow can't directly access a storage account behind a firewall if they're both in the same region.
-> As a workaround, your logic app and storage account can be in different regions. For more information about enabling access from Azure Logic Apps to storage accounts behind firewalls, review the [Access storage accounts behind firewalls](#access-storage-accounts-behind-firewalls) section later in this topic.
+> As a workaround, your logic app and storage account can be in different regions. For more information about enabling
+> access from Azure Logic Apps to storage accounts behind firewalls, review the [Access storage accounts behind firewalls](#access-storage-accounts-behind-firewalls) section later in this topic.
## Prerequisites
If you have problems connecting to your storage account, review [how to access s
## Access storage accounts behind firewalls
-You can add network security to an Azure storage account by [restricting access with a firewall and firewall rules](../storage/common/storage-network-security.md). However, this setup creates a challenge for Azure and other Microsoft services that need access to the storage account. Local communication in the data center abstracts the internal IP addresses, so you can't set up firewall rules with IP restrictions.
+You can add network security to an Azure storage account by [restricting access with a firewall and firewall rules](../storage/common/storage-network-security.md). However, this setup creates a challenge for Azure and other Microsoft services that need access to the storage account. Local communication in the data center abstracts the internal IP addresses, so just permitting traffic through IP addresses might not be enough to successfully allow communication across the firewall. Based on which Azure Blob Storage connector you use, the following options are available:
-To access storage accounts behind firewalls using the Blob Storage connector:
+- To access storage accounts behind firewalls using the Azure Blob Storage managed connector in Consumption, Standard, and ISE-based logic apps, review the following documentation:
-- [Access storage accounts in other regions](#access-storage-accounts-in-other-regions)
-- [Access storage accounts through a trusted virtual network](#access-storage-accounts-through-trusted-virtual-network)
+ - [Access storage accounts with managed identities](#access-blob-storage-with-managed-identities)
-Other solutions for accessing storage accounts behind firewalls:
+ - [Access storage accounts in other regions](#access-storage-accounts-in-other-regions)
-- [Access storage accounts as a trusted service with managed identities](#access-blob-storage-with-managed-identities)
-- [Access storage accounts through Azure API Management](#access-storage-accounts-through-azure-api-management)
+- To access storage accounts behind firewalls using the ISE-versioned Azure Blob Storage connector that's only available in an ISE-based logic app, review [Access storage accounts through trusted virtual network](#access-storage-accounts-through-trusted-virtual-network).
+
+- To access storage accounts behind firewalls using the *built-in* Azure Blob Storage connector that's only available in Standard logic apps, review [Access storage accounts through VNet integration](#access-storage-accounts-through-vnet-integration).
### Access storage accounts in other regions
-Logic app workflows can't directly access storage accounts behind firewalls when they're both in the same region. As a workaround, put your logic app resources in a different region than your storage account. Then, give access to the [outbound IP addresses for the managed connectors in your region](/connectors/common/outbound-ip-addresses#azure-logic-apps).
+If you don't use managed identity authentication, logic app workflows can't directly access storage accounts behind firewalls when both the logic app resource and storage account exist in the same region. As a workaround, put your logic app resource in a different region than your storage account. Then, give access to the [outbound IP addresses for the managed connectors in your region](/connectors/common/outbound-ip-addresses#azure-logic-apps).
> [!NOTE]
> This solution doesn't apply to the Azure Table Storage connector and Azure Queue Storage connector.
To add your outbound IP addresses to the storage account firewall, follow these
### Access storage accounts through trusted virtual network
-You can put the storage account in an Azure virtual network that you manage, and then add that virtual network to the trusted virtual networks list. To give your logic app access to the storage account through a [trusted virtual network](../virtual-network/virtual-networks-overview.md), you need to deploy that logic app to an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md), which can connect to resources in a virtual network. You can then add the subnets in that ISE to the trusted list. Azure Storage connectors, such as the Blob Storage connector, can directly access the storage container. This setup is the same experience as using the service endpoints from an ISE.
+- Your logic app and storage account exist in the same region.
+
+ You can put your storage account in an Azure virtual network by creating a private endpoint, and then add that virtual network to the trusted virtual networks list. To give your logic app access to the storage account through a [trusted virtual network](../virtual-network/virtual-networks-overview.md), you need to deploy that logic app to an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md), which can connect to resources in a virtual network. You can then add the subnets in that ISE to the trusted list. ISE-based storage connectors, such as the ISE-versioned Azure Blob Storage connector, can directly access the storage container. This setup is the same experience as using the service endpoints from an ISE.
+
+- Your logic app and storage account exist in different regions.
+
+ You don't have to create a private endpoint. You can just permit traffic through the ISE outbound IPs on the storage account.
+
+### Access storage accounts through VNet integration
-### Access storage accounts through Azure API Management
+- Your logic app and storage account exist in the same region.
-If you use a dedicated tier for [API Management](../api-management/api-management-key-concepts.md), you can front the Storage API by using API Management and permitting the latter's IP addresses through the firewall. Basically, add the Azure virtual network that's used by API Management to the storage account's firewall setting. You can then use either the API Management action or the HTTP action to call the Azure Storage APIs. However, if you choose this option, you have to handle the authentication process yourself. For more info, see [Simple enterprise integration architecture](/azure/architecture/reference-architectures/enterprise-integration/basic-enterprise-integration).
+ You can put the storage account in an Azure virtual network by creating a private endpoint, and then add that virtual network to the trusted virtual networks list. To give your logic app access to the storage account, you have to [Set up outbound traffic using VNet integration](../logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md#set-up-outbound) to enable connecting to resources in a virtual network. You can then add the VNet to the storage account's trusted virtual networks list.
-## Access Blob Storage with managed identities
+- Your logic app and storage account exist in different regions.
-If you want to access Blob Storage without using this connector, you can use [managed identities for authentication](../active-directory/managed-identities-azure-resources/overview.md) instead. You can create an exception that gives Microsoft trusted services, such as a managed identity, access to your storage account through a firewall.
+ You don't have to create a private endpoint. You can just permit traffic through the ISE outbound IPs on the storage account.
+
+### Access Blob Storage with managed identities
+
+To connect to Azure Blob Storage in any region, you can use [managed identities for authentication](../active-directory/managed-identities-azure-resources/overview.md). You can create an exception that gives Microsoft trusted services, such as a managed identity, access to your storage account through a firewall.
To use managed identities in your logic app to access Blob Storage, follow these steps:
To use managed identities in your logic app to access Blob Storage, follow these
> [!NOTE]
> Limitations for this solution:
>
-> - You can *only* use the HTTP trigger or action in your workflow.
> - You must set up a managed identity to authenticate your storage account connection.
-> - You can't use built-in Blob Storage operations if you authenticate with a managed identity.
-> - For logic apps in a single-tenant environment, only the system-assigned managed identity is
-> available and supported, not the user-assigned managed identity.
+>
+> - For Standard logic apps in the single-tenant Azure Logic Apps environment, only the system-assigned
+> managed identity is available and supported, not the user-assigned managed identity.
-### Configure storage account access
+#### Configure storage account access
To set up the exception and managed identity support, first configure appropriate access to your storage account:
To set up the exception and managed identity support, first configure appropriat
<a name="create-role-assignment-logic-app"></a>
-### Create role assignment for logic app
+#### Create role assignment for logic app
Next, [enable managed identity support](../logic-apps/create-managed-service-identity.md) on your logic app resource.
The following steps are the same for Consumption logic apps in multi-tenant envi
<a name="enable-managed-identity-support"></a>
-### Enable managed identity support on logic app
-
-Next, add an [HTTP trigger or action](connectors-native-http.md) in your workflow. Make sure to [set the authentication type to use the managed identity](../logic-apps/create-managed-service-identity.md#authenticate-access-with-managed-identity).
-
-The following steps are the same for Consumption logic apps in multi-tenant environments and Standard logic apps in single-tenant environments.
-
-1. Open your logic app workflow in the designer.
-
-1. Based on your scenario, add an **HTTP** trigger or action to your workflow. Set up the required parameter values.
-
- 1. Select a **Method** for your request. This example uses the HTTP **PUT** method.
-
- 1. Enter the **URI** for your blob. The path resembles `https://<storage-container-name>/<folder-name>/{name}`. Provide your container name and folder name instead, but keep the `{name}` literal string.
-
- 1. Under **Headers**, add the following items:
-
- - The blob type header `x-ms-blob-type` with the value `BlockBlob`.
-
- - The API version header `x-ms-version` with the appropriate value.
+#### Enable managed identity support on logic app
- For more information, see [Authenticate access with managed identity](../logic-apps/create-managed-service-identity.md#authenticate-access-with-managed-identity) and [Versioning for Azure Storage services](/rest/api/storageservices/versioning-for-the-azure-storage-services#specifying-service-versions-in-requests).
+Next, complete the following steps:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/managed-identity-connect.png" alt-text="Screenshot showing the workflow designer and HTTP trigger or action with the required 'PUT' parameters.":::
+1. If you have a blank workflow, add an Azure Blob Storage connector trigger. Otherwise, add an Azure Blob Storage connector action. Make sure that you create a new connection for the trigger or action, rather than use an existing connection.
-1. From the **Add a new parameter** list, select **Authentication** to [configure the managed identity](../logic-apps/create-managed-service-identity.md#authenticate-access-with-managed-identity).
+1. Make sure that you [set the authentication type to use the managed identity](../logic-apps/create-managed-service-identity.md#authenticate-access-with-managed-identity).
- 1. Under **Authentication**, for the **Authentication type** property, select **Managed identity**.
+1. After you configure the trigger or action, you can save the workflow and test the trigger or action.
- 1. For the **Managed identity** property, select **System-assigned managed identity**.
+## Troubleshoot problems with accessing storage accounts
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/managed-identity-authenticate.png" alt-text="Screenshot showing the workflow designer and HTTP action with the managed identity authentication parameter settings.":::
+- **"This request is not authorized to perform this operation."**
-1. When you're done, on the designer toolbar, select **Save**.
+ The following error is a commonly reported problem that happens when your logic app and storage account exist in the same region. However, options are available to resolve this limitation as described in the section, [Access storage accounts behind firewalls](#access-storage-accounts-behind-firewalls).
-Now, you can call the [Blob service REST API](/rest/api/storageservices/blob-service-rest-api) to run any necessary storage operations.
+ ```json
+ {
+ "status": 403,
+ "message": "This request is not authorized to perform this operation.\\r\\nclientRequestId: a3da2269-7120-44b4-9fe5-ede7a9b0fbb8",
+ "error": {
+ "message": "This request is not authorized to perform this operation."
+ },
+ "source": "azureblob-ase.azconn-ase.p.azurewebsites.net"
+ }
+ ```
## Next steps
container-apps Scale App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/scale-app.md
Scaling rules are defined in `resources.properties.template.scale` section of th
| Scale property | Description | Default value | Min value | Max value |
|---|---|---|---|---|
-| `minReplicas` | Minimum number of replicas running for your container app. | 0 | 1 | 25 |
-| `maxReplicas` | Maximum number of replicas running for your container app. | n/a | 1 | 25 |
+| `minReplicas` | Minimum number of replicas running for your container app. | 0 | 1 | 10 |
+| `maxReplicas` | Maximum number of replicas running for your container app. | n/a | 1 | 10 |
- If your container app scales to zero, then you aren't billed.
- Individual scale rules are defined in the `rules` array.
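Putting these properties together, the `scale` section of a container app template can be sketched as follows (the HTTP rule shown is illustrative):

```json
{
  "template": {
    "scale": {
      "minReplicas": 1,
      "maxReplicas": 10,
      "rules": [
        {
          "name": "http-scale-rule",
          "http": {
            "metadata": {
              "concurrentRequests": "100"
            }
          }
        }
      ]
    }
  }
}
```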
cosmos-db Defender For Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/defender-for-cosmos-db.md
Use the following PowerShell cmdlets:
# [ARM template](#tab/arm-template)

Use an Azure Resource Manager (ARM) template to set up Azure Cosmos DB with Azure Defender protection enabled. For more information, see
-[Create a CosmosDB Account with Advanced Threat Protection](https://azure.microsoft.com/resources/templates/cosmosdb-advanced-threat-protection-create-account/).
+[Create a CosmosDB Account with Advanced Threat Protection](https://azure.microsoft.com/resources/templates/microsoft-defender-cosmosdb-create-account/).
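As a rough sketch of what such a template does, the threat protection setting rides alongside the Cosmos DB account resource (the resource type path and API version here are assumptions; treat the linked template as authoritative):

```json
{
  "type": "Microsoft.DocumentDB/databaseAccounts/providers/advancedThreatProtectionSettings",
  "apiVersion": "2019-01-01",
  "name": "[concat(parameters('accountName'), '/Microsoft.Security/current')]",
  "dependsOn": [
    "[resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('accountName'))]"
  ],
  "properties": {
    "isEnabled": true
  }
}
```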
# [Azure Policy](#tab/azure-policy)
cost-management-billing Mca Setup Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-setup-account.md
Before you start the setup, we recommend you do the following actions:
- **Verify your access to complete the setup**
  - Only users with certain administrative permissions can complete the setup. Check if you have the [access required to complete the setup](#access-required-to-complete-the-setup).
- **Understand changes to your billing hierarchy**
- - You new billing account is organized differently than your Enterprise Agreement enrollment. [Understand changes to your billing hierarchy in the new account](#understand-changes-to-your-billing-hierarchy).
+ - Your new billing account is organized differently than your Enterprise Agreement enrollment. [Understand changes to your billing hierarchy in the new account](#understand-changes-to-your-billing-hierarchy).
- **Understand changes to your billing administrators' access**
  - Administrators from your Enterprise Agreement enrollment get access to the billing scopes in the new account. [Understand changes to their access](#changes-to-billing-administrator-access).
- **View Enterprise Agreement features that are replaced by the new account**
You can use the following options to start the migration experience for your EA
If you have both the enterprise administrator and billing account owner roles, you see the following page in the Azure portal. You can continue setting up your EA enrollments and Microsoft Customer Agreement billing account for transition. If you don't have the enterprise administrator role for the enterprise agreement or the billing account owner role for the Microsoft Customer Agreement, then use the following information to get the access that you need to complete setup.
If you don't have the enterprise administrator role for the enterprise agreement
You see the following page in the Azure portal if you have a billing account owner role but you're not an enterprise administrator. You have two options:
If you're an enterprise administrator but you don't have a billing account, you'
If you believe that you have billing account owner access to the correct Microsoft Customer Agreement and you see the following message, make sure that you are in the correct tenant for your organization. You might need to change directories. You have two options:
Open the migration that you were presented previously, or open the link that you
The following image shows an example of the **Prepare your enterprise agreement enrollments for transition** window. Next, select the source enrollment to transition. Then select the billing account. If validation passes without any problems, similar to the following screen, select **Continue** to proceed.

**Error conditions**
-If you have the Enterprise Administrator (read-only) role, you'll see the following error that prevents the transition. You must have the Enterprise Administrator role before before you can transition your enrollment.
+If you have the Enterprise Administrator (read-only) role, you'll see the following error that prevents the transition. You must have the Enterprise Administrator role before you can transition your enrollment.
`Select another enrollment. You do not hve Enterprise Administrator write permission to the enrollment.`
If your new billing profile doesn't have the new plan enabled, you'll see the fo
Your new billing account simplifies billing for your organization while providing you enhanced billing and cost management capabilities. The following diagram explains how billing is organized in the new billing account.
-![Image of ea-mca-post-transition-hierarchy](./media/mca-setup-account/mca-post-transition-hierarchy.png)
+![Image of the Enterprise Agreement to Microsoft Customer Agreement post-transition hierarchy.](./media/microsoft-customer-agreement-setup-account/microsoft-customer-agreement-post-transition-hierarchy.png)
1. You use the billing account to manage billing for your Microsoft customer agreement. Enterprise administrators become owners of the billing account. To learn more about billing account, see [understand billing account](../understand/mca-overview.md#your-billing-account).
2. You use the billing profile to manage billing for your organization, similar to your Enterprise Agreement enrollment. Enterprise administrators become owners of the billing profile. To learn more about billing profiles, see [understand billing profiles](../understand/mca-overview.md#billing-profiles).
To complete the setup, you need access to both the new billing account and the E
3. Select **Start transition** in the last step of the setup. Once you select start transition:
- ![Screenshot that shows the setup wizard](./media/mca-setup-account/ea-mca-set-up-wizard.png)
+ ![Screenshot that shows the setup wizard](./media/microsoft-customer-agreement-setup-account/ea-microsoft-customer-agreement-set-up-wizard.png)
- A billing hierarchy corresponding to your Enterprise Agreement hierarchy is created in the new billing account. For more information, see [understand changes to your billing hierarchy](#understand-changes-to-your-billing-hierarchy).
- Administrators from your Enterprise Agreement enrollment are given access to the new billing account so that they continue to manage billing for your organization.
To complete the setup, you need access to both the new billing account and the E
4. You can monitor the status of the transition on the **Transition status** page.
- ![Screenshot that shows the transition status](./media/mca-setup-account/ea-mca-set-up-status.png)
+ ![Screenshot that shows the transition status](./media/microsoft-customer-agreement-setup-account/ea-microsoft-customer-agreement-set-up-status.png)
## Validate billing account setup
To complete the setup, you need access to both the new billing account and the E
2. Search for **Cost Management + Billing**.
- ![Screenshot that shows Azure portal search](./media/mca-setup-account/search-cmb.png)
+ ![Screenshot that shows Azure portal search](./media/microsoft-customer-agreement-setup-account/search-cmb.png)
3. Select the billing account. The billing account will be of type **Microsoft Customer Agreement**.

4. Select **Azure subscriptions** from the left side.
- ![Screenshot that shows list of subscriptions](./media/mca-setup-account/mca-subscriptions-post-transition.png)
+ ![Screenshot that shows list of subscriptions](./media/microsoft-customer-agreement-setup-account/microsoft-customer-agreement-subscriptions-post-transition.png)
Azure subscriptions that are transitioned from your Enterprise Agreement enrollment to the new billing account are displayed on the Azure subscriptions page. If you believe any subscription is missing, transition the billing of the subscription manually in the Azure portal. For more information, see [get billing ownership of Azure subscriptions from other users](mca-request-billing-ownership.md).
Azure reservations in your Enterprise Agreement enrollment will be moved to your
2. Search for **Cost Management + Billing**.
- ![Screenshot that shows Azure portal search](./media/mca-setup-account/search-cmb.png)
+ ![Screenshot that shows Azure portal search](./media/microsoft-customer-agreement-setup-account/search-cmb.png)
3. Select the billing account for your **Microsoft Customer Agreement**.

4. Select **Access control (IAM)** from the left side.
- ![Screenshot that shows access of enterprise administrators listed as billing account owners post transition.](./media/mca-setup-account/mca-ea-admins-ba-access-post-transition.png)
+ ![Screenshot that shows access of enterprise administrators listed as billing account owners post transition.](./media/microsoft-customer-agreement-setup-account/microsoft-customer-agreement-ea-admins-ba-access-post-transition.png)
Enterprise administrators are listed as billing account owners while the enterprise administrators with read-only permissions are listed as billing account readers. If you believe the access for any enterprise administrators is missing, you can give them access in the Azure portal. For more information, see [manage billing roles in the Azure portal](understand-mca-roles.md#manage-billing-roles-in-the-azure-portal).
Enterprise administrators are listed as billing account owners while the enterpr
2. Search for **Cost Management + Billing**.
- ![Screenshot that shows Azure portal search](./media/mca-setup-account/search-cmb.png)
+ ![Screenshot that shows Azure portal search](./media/microsoft-customer-agreement-setup-account/search-cmb.png)
3. Select the billing profile created for your enrollment. Depending on your access, you may need to select a billing account. From the billing account, select Billing profiles and then the billing profile.

4. Select **Access control (IAM)** from the left side.
- ![Screenshot that shows access of enterprise administrators listed as billing profile owners post transition.](./media/mca-setup-account/mca-ea-admins-bp-access-post-transition.png)
+ ![Screenshot that shows access of enterprise administrators listed as billing profile owners post transition.](./media/microsoft-customer-agreement-setup-account/microsoft-customer-agreement-ea-admins-bp-access-post-transition.png)
Enterprise administrators are listed as billing profile owners while the enterprise administrators with read-only permissions are listed as billing profile readers. If you believe the access for any enterprise administrators is missing, you can give them access in the Azure portal. For more information, see [manage billing roles in the Azure portal](understand-mca-roles.md#manage-billing-roles-in-the-azure-portal).
Enterprise administrators are listed as billing profile owners while the enterpr
2. Search for **Cost Management + Billing**.
- ![Screenshot that shows Azure portal search](./media/mca-setup-account/search-cmb.png).
+ ![Screenshot that shows Azure portal search](./media/microsoft-customer-agreement-setup-account/search-cmb.png)
3. Select an invoice section. Invoice sections have the same name as their respective departments in Enterprise Agreement enrollments. Depending on your access, you may need to select a billing account. From the billing account, select **Billing profiles** and then select **Invoice sections**. From the invoice sections list, select an invoice section.
- ![Screenshot that shows list of invoice section post transition](./media/mca-setup-account/mca-invoice-sections-post-transition.png)
+ ![Screenshot that shows list of invoice section post transition](./media/microsoft-customer-agreement-setup-account/microsoft-customer-agreement-invoice-sections-post-transition.png)
4. Select **Access control (IAM)** from the left side.
- ![Screenshot that shows access of department and account admins access post transition](./media/mca-setup-account/mca-department-account-admins-access-post-transition.png)
+ ![Screenshot that shows access of department and account admins access post transition](./media/microsoft-customer-agreement-setup-account/microsoft-customer-agreement-department-account-admins-access-post-transition.png)
Enterprise administrators and department administrators are listed as invoice section owners or invoice section readers while account owners in the department are listed as Azure subscription creators. Repeat the step for all invoice sections to check access for all departments in your Enterprise Agreement enrollment. Account owners that weren't part of any department will get permission on an invoice section named **Default invoice section**. If you believe the access for any administrators is missing, you can give them access in the Azure portal. For more information, see [manage billing roles in the Azure portal](understand-mca-roles.md#manage-billing-roles-in-the-azure-portal).
If you need help, [contact support](https://portal.azure.com/?#blade/Microsoft_A
- [Get started with your new billing account](../understand/mca-overview.md)
- [Complete Enterprise Agreement tasks in your billing account for a Microsoft Customer Agreement](mca-enterprise-operations.md)
-- [Manage access to your billing account](understand-mca-roles.md)
+- [Manage access to your billing account](understand-mca-roles.md)
data-factory Continuous Integration Delivery Resource Manager Custom Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-resource-manager-custom-parameters.md
Previously updated : 09/24/2021 Last updated : 04/20/2022
If your development instance has an associated Git repository, you can override
* Use the custom parameter file and remove properties that don't need parameterization, i.e., properties that can keep a default value and hence decrease the parameter count.
* Refactor logic in the data flow to reduce parameters; for example, if pipeline parameters all have the same value, you can just use global parameters instead.
- * Split one data factory into multiple data flows.
+ * Split one data factory into multiple data factories.
To override the default Resource Manager parameter configuration, go to the **Manage** hub and select **ARM template** in the "Source control" section. Under the **ARM parameter configuration** section, select the **Edit** icon in "Edit parameter configuration" to open the Resource Manager parameter configuration code editor.
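For example, a minimal parameter configuration that keeps pipeline parameters at their current values as defaults, and therefore out of the generated parameter list, can be sketched as follows (illustrative; verify against the editor's default template):

```json
{
  "Microsoft.DataFactory/factories/pipelines": {
    "properties": {
      "parameters": {
        "*": {
          "defaultValue": "="
        }
      }
    }
  }
}
```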
data-factory Data Factory Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-private-link.md
Enabling the private link service for each of the preceding communication channe
- The command communications between the self-hosted integration runtime and the Azure data factory service can be performed securely in a private network environment. The traffic between the self-hosted integration runtime and the Azure data factory service goes through private link.
- **Not currently supported**:
  - Interactive authoring that uses a self-hosted integration runtime, such as test connection, browse folder list and table list, get schema, and preview data, goes through Private Link.
- - The new version of the self-hosted integration runtime which can be automatically downloaded from Microsoft Download Center if you enable Auto-Update , is not supported at this time .
+ - The new version of the self-hosted integration runtime that can be automatically downloaded from Microsoft Download Center if you enable Auto-Update, isn't supported at this time.
> [!NOTE]
> For functionality that's not currently supported, you still need to configure the previously mentioned domain and port in the virtual network or your corporate firewall.
Enabling the private link service for each of the preceding communication channe
This section will detail how to configure the private endpoint for communication between self-hosted integration runtime and Azure data factory.

**Step 1: Create a private endpoint and set up a private link for Azure data factory.**
-The private endpoint is created in your virtual network for the communication between self-hosted integration runtime and Azure data factory service. Please follow the details step in [Set up a private endpoint link for Azure Data Factory](#set-up-a-private-endpoint-link-for-azure-data-factory)
+The private endpoint is created in your virtual network for the communication between self-hosted integration runtime and Azure data factory service. Follow the detailed steps in [Set up a private endpoint link for Azure Data Factory](#set-up-a-private-endpoint-link-for-azure-data-factory).
**Step 2: Make sure the DNS configuration is correct.**
-Please follow the instructions [DNS changes for private endpoints](#dns-changes-for-private-endpoints) to check or configure your DNS settings.
+Follow the instructions in [DNS changes for private endpoints](#dns-changes-for-private-endpoints) to check or configure your DNS settings.
**Step 3: Put FQDNs of Azure Relay and download center into the allow list of your firewall.**
-If your self-hosted integration runtime is installed on the virtual machine in your virtual network, please allow outbound traffic to below FQDNs in the NSG of your virtual network.
+If your self-hosted integration runtime is installed on a virtual machine in your virtual network, allow outbound traffic to the following FQDNs in the NSG of your virtual network.
If your self-hosted integration runtime is installed on a machine in your on-premises environment, allow outbound traffic to the following FQDNs in the firewall of your on-premises environment and the NSG of your virtual network.
If your self-hosted integration runtime is installed on the machine in your on-p
> [!NOTE]
> If you don't allow the above outbound traffic in the firewall and NSG, the self-hosted integration runtime is shown with a limited status. But you can still use it to execute activities. Only interactive authoring and auto-update don't work.
+> [!NOTE]
+> If one data factory (shared) has a self-hosted integration runtime that is shared with other data factories (linked), you only need to create a private endpoint for the shared data factory. The other linked data factories can leverage this private link for the communications between the self-hosted integration runtime and the Azure data factory service.
+
## DNS changes for private endpoints

When you create a private endpoint, the DNS CNAME resource record for the data factory is updated to an alias in a subdomain with the prefix 'privatelink'. By default, we also create a [private DNS zone](../dns/private-dns-overview.md), corresponding to the 'privatelink' subdomain, with the DNS A resource records for the private endpoints.
The DNS resource records for DataFactoryA, when resolved in the virtual network
| DataFactoryA.{region}.datafactory.azure.net | CNAME | DataFactoryA.{region}.privatelink.datafactory.azure.net |
| DataFactoryA.{region}.privatelink.datafactory.azure.net | A | < private endpoint IP address > |
-If you are using a custom DNS server on your network, clients must be able to resolve the FQDN for the data factory endpoint to the private endpoint IP address. You should configure your DNS server to delegate your private link subdomain to the private DNS zone for the virtual network, or configure the A records for ' DataFactoryA.{region}.datafactory.azure.net' with the private endpoint IP address.
+If you're using a custom DNS server on your network, clients must be able to resolve the FQDN for the data factory endpoint to the private endpoint IP address. You should configure your DNS server to delegate your private link subdomain to the private DNS zone for the virtual network, or configure the A records for 'DataFactoryA.{region}.datafactory.azure.net' with the private endpoint IP address.
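To check which address a client actually resolves, a quick lookup against the example name above works (run it from inside the virtual network to see the private IP):

```bash
nslookup DataFactoryA.{region}.datafactory.azure.net
```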
- [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server)
- [DNS configuration for private endpoints](../private-link/private-endpoint-overview.md#dns-configuration)
If you are using a custom DNS server on your network, clients must be able to re
## Set up a private endpoint link for Azure Data Factory
-In this section you will set up a private endpoint link for Azure Data Factory.
+In this section, you'll set up a private endpoint link for Azure Data Factory.
You can choose whether to connect your Self-Hosted Integration Runtime (SHIR) to Azure Data Factory via public endpoint or private endpoint during the data factory creation step, shown here:
You can change the selection anytime after creation from the data factory portal
A private endpoint requires a virtual network and subnet for the link. In this example, a virtual machine within the subnet will be used to run the Self-Hosted Integration Runtime (SHIR), connecting via the private endpoint link.

### Create the virtual network
-If you do not have an existing virtual network to use with your private endpoint link, you must create a one, and assign a subnet.
+If you don't have an existing virtual network to use with your private endpoint link, you must create one, and assign a subnet.
1. Sign into the Azure portal at https://portal.azure.com.
2. On the upper-left side of the screen, select **Create a resource > Networking > Virtual network** or search for **Virtual network** in the search box.
Finally, you must create the private endpoint in your data factory.
## Restrict access for data factory resources using private link

If you want to restrict access for data factory resources in your subscriptions by private link, follow [Use portal to create private link for managing Azure resources](../azure-resource-manager/management/create-private-link-access-portal.md?source=docs)

+
## Known issue
-You are unable to access each other PaaS Resources when both sides are exposed to private Link and private endpoint. This is a known limitation of private link and private endpoint.
-For example, if A is using a private link to access the portal of data factory A in virtual network A. When data factory A doesnΓÇÖt block public access, B can access the portal of data factory A in virtual network B via public. But when customer B creates a private endpoint against data factory B in virtual network B, then he canΓÇÖt access data factory A via public in virtual network B anymore.
+You can't access each other's PaaS resources when both sides are exposed to private link and private endpoint. This is a known limitation of private link and private endpoint.
+For example, customer A uses a private link to access the portal of data factory A in virtual network A. When data factory A doesn't block public access, customer B can access the portal of data factory A in virtual network B via the public network. But when customer B creates a private endpoint against data factory B in virtual network B, then customer B can't access data factory A via the public network in virtual network B anymore.
## Next steps
data-factory Managed Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/managed-virtual-network-private-endpoint.md
Last updated 04/01/2022
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-This article will explain managed virtual network and Managed Private endpoints in Azure Data Factory.
+This article will explain managed virtual network and Managed private endpoints in Azure Data Factory.
## Managed virtual network
-When you create an Azure Integration Runtime (IR) within Azure Data Factory managed virtual network (VNET), the integration runtime will be provisioned with the managed virtual network and will leverage private endpoints to securely connect to supported data stores.
+When you create an Azure integration runtime within Azure Data Factory managed virtual network, the integration runtime will be provisioned with the managed virtual network and will use private endpoints to securely connect to supported data stores.
-Creating an Azure IR within managed virtual network ensures that data integration process is isolated and secure.
+Creating an Azure integration runtime within managed virtual network ensures that data integration process is isolated and secure.
Benefits of using managed virtual network:

- With a managed virtual network, you can offload the burden of managing the virtual network to Azure Data Factory. You don't need to create a subnet for Azure Integration Runtime that could eventually use many private IPs from your virtual network and would require prior network infrastructure planning.
-- It does not require deep Azure networking knowledge to do data integrations securely. Instead getting started with secure ETL is much simplified for data engineers.
-- Managed virtual network along with Managed private endpoints protects against data exfiltration.
+- It doesn't require deep Azure networking knowledge to do data integrations securely. Instead, getting started with secure ETL is much simplified for data engineers.
+- Managed virtual network along with managed private endpoints protects against data exfiltration.
> [!IMPORTANT]
> Currently, the managed virtual network is only supported in the same region as Azure Data Factory region.
When you use a private link, traffic between your data stores and managed virtua
Private endpoint uses a private IP address in the managed virtual network to effectively bring the service into it. Private endpoints are mapped to a specific resource in Azure and not the entire service. Customers can limit connectivity to a specific resource approved by their organization. Learn more about [private links and private endpoints](../private-link/index.yml).

> [!NOTE]
-> It's recommended that you create Managed private endpoints to connect to all your Azure data sources.
+> It's recommended that you create managed private endpoints to connect to all your Azure data sources.
> [!NOTE]
> Make sure resource provider Microsoft.Network is registered to your subscription.

> [!WARNING]
-> If a PaaS data store (Blob, ADLS Gen2, Azure Synapse Analytics) has a private endpoint already created against it, and even if it allows access from all networks, ADF would only be able to access it using a managed private endpoint. If a private endpoint does not already exist, you must create one in such scenarios.
+> If a PaaS data store (Blob, Azure Data Lake Storage Gen2, Azure Synapse Analytics) has a private endpoint already created against it, and even if it allows access from all networks, Azure Data Factory would only be able to access it using a managed private endpoint. If a private endpoint does not already exist, you must create one in such scenarios.
A private endpoint connection is created in a "Pending" state when you create a managed private endpoint in Azure Data Factory. An approval workflow is initiated. The private link resource owner is responsible to approve or reject the connection.

:::image type="content" source="./media/tutorial-copy-data-portal-private/manage-private-endpoint.png" alt-text="Manage private endpoint":::
-If the owner approves the connection, the private link is established. Otherwise, the private link won't be established. In either case, the Managed private endpoint will be updated with the status of the connection.
+If the owner approves the connection, the private link is established. Otherwise, the private link won't be established. In either case, the managed private endpoint will be updated with the status of the connection.
:::image type="content" source="./media/tutorial-copy-data-portal-private/approve-private-endpoint.png" alt-text="Approve Managed private endpoint":::
-Only a Managed private endpoint in an approved state can send traffic to a given private link resource.
+Only a managed private endpoint in an approved state can send traffic to a given private link resource.
## Interactive authoring
-Interactive authoring capabilities is used for functionalities like test connection, browse folder list and table list, get schema, and preview data. You can enable interactive authoring when creating or editing an Azure Integration Runtime which is in ADF-managed virtual network. The backend service will pre-allocate compute for interactive authoring functionalities. Otherwise, the compute will be allocated every time any interactive operation is performed which will take more time. The Time To Live (TTL) for interactive authoring is 60 minutes, which means it will automatically become disabled after 60 minutes of the last interactive authoring operation.
+Interactive authoring capabilities are used for functionalities like test connection, browse folder list and table list, get schema, and preview data. You can enable interactive authoring when creating or editing an Azure integration runtime, which is in Azure Data Factory managed virtual network. The backend service will pre-allocate compute for interactive authoring functionalities. Otherwise, the compute will be allocated every time any interactive operation is performed, which will take more time. The Time-To-Live (TTL) for interactive authoring is 60 minutes, which means it will automatically become disabled 60 minutes after the last interactive authoring operation.
:::image type="content" source="./media/managed-vnet/interactive-authoring.png" alt-text="Interactive authoring"::: ## Activity execution time using managed virtual network
-By design, Azure integration runtime in managed virtual network takes longer queue time than global Azure integration runtime as we are not reserving one compute node per data factory, so there is a warm up for each activity to start, and it occurs primarily on virtual network join rather than Azure integration runtime. For non-copy activities including pipeline activity and external activity, there is a 60 minutes Time To Live (TTL) when you trigger them at the first time. Within TTL, the queue time is shorter because the node is already warmed up.
+By design, Azure integration runtime in managed virtual network takes longer queue time than global Azure integration runtime as we aren't reserving one compute node per data factory, so there's a warm-up for each activity to start, and it occurs primarily on virtual network join rather than Azure integration runtime. For non-copy activities including pipeline activity and external activity, there's a 60 minutes Time-To-Live (TTL) when you trigger them for the first time. Within TTL, the queue time is shorter because the node is already warmed up.
> [!NOTE]
> Copy activity doesn't have TTL support yet.
New-AzResource -ApiVersion "${apiVersion}" -ResourceId "${integrationRuntimeReso
## Limitations and known issues
-### Supported data sources
-The following data sources have native Private Endpoint support and can be connected through private link from ADF managed virtual network.
+### Supported data sources and services
+The following data sources and services have native private endpoint support and can be connected through private link from ADF managed virtual network.
- Azure Blob Storage (not including Storage account V1)
+- Azure Functions (Premium plan)
- Azure Cognitive Search
- Azure Cosmos DB MongoDB API
- Azure Cosmos DB SQL API
The following data sources have native Private Endpoint support and can be conne
- Azure Table Storage (not including Storage account V1)

> [!Note]
-> You still can access all data sources that are supported by Data Factory through public network.
+> You can still access all data sources that are supported by Azure Data Factory through the public network.
> [!NOTE]
-> Because Azure SQL Managed Instance native Private Endpoint in public preview, you can access it from managed virtual network using Private Linked Service and Load Balancer. Please see [How to access SQL Managed Instance from Data Factory Managed VNET using Private Endpoint](tutorial-managed-virtual-network-sql-managed-instance.md).
+> Because the Azure SQL Managed Instance native private endpoint is in private preview, you can access it from a managed virtual network by using a private link service and a load balancer. See [How to access SQL Managed Instance from Azure Data Factory managed virtual network using private endpoint](tutorial-managed-virtual-network-sql-managed-instance.md).
### On-premises data sources
-To access on-premises data sources from managed virtual network using Private Endpoint, please see this tutorial [How to access on-premises SQL Server from Data Factory Managed VNET using Private Endpoint](tutorial-managed-virtual-network-on-premise-sql-server.md).
-
-### Azure Data Factory managed virtual network is available in the following Azure regions
-Generally, managed virtual network is available to all Azure Data Factory regions, except:
-- South India
+To access on-premises data sources from a managed virtual network by using a private endpoint, see the tutorial [How to access on-premises SQL Server from Azure Data Factory managed virtual network using private endpoint](tutorial-managed-virtual-network-on-premise-sql-server.md).
### Outbound communications through public endpoint from ADF managed virtual network

- All ports are opened for outbound communications.
-### Linked Service creation of Azure Key Vault
-- When you create a Linked Service for Azure Key Vault, there is no Azure Integration Runtime reference. So you can't create Private Endpoint during Linked Service creation of Azure Key Vault. But when you create Linked Service for data stores which references Azure Key Vault Linked Service and this Linked Service references Azure Integration Runtime with managed virtual network enabled, then you are able to create a Private Endpoint for the Azure Key Vault Linked Service during the creation. -- **Test connection** operation for Linked Service of Azure Key Vault only validates the URL format, but doesn't do any network operation.-- The column **Using private endpoint** is always shown as blank even if you create Private Endpoint for Azure Key Vault.
+### Linked service creation of Azure Key Vault
+- When you create a linked service for Azure Key Vault, there's no Azure integration runtime reference, so you can't create a private endpoint during creation of the Azure Key Vault linked service. However, when you create a linked service for a data store that references the Azure Key Vault linked service, and that linked service references an Azure integration runtime with managed virtual network enabled, you can create a private endpoint for the Azure Key Vault linked service during the creation.
+- The **Test connection** operation for an Azure Key Vault linked service only validates the URL format; it doesn't do any network operation.
+- The column **Using private endpoint** is always shown as blank even if you create a private endpoint for Azure Key Vault.
-### Linked Service creation of Azure HDI
-- The column **Using private endpoint** is always shown as blank even if you create Private Endpoint for HDI using private link service and load balancer with port forwarding.
+### Linked service creation of Azure HDI
+- The column **Using private endpoint** is always shown as blank even if you create a private endpoint for HDI by using a private link service and a load balancer with port forwarding.
:::image type="content" source="./media/managed-vnet/akv-pe.png" alt-text="Private Endpoint for AKV":::
data-factory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md
+ Last updated 01/21/2022
This page is updated monthly, so revisit it regularly.
<tr><td><b>Service Category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
<tr><td><b>Data Movement</b></td><td>Get metadata driven data ingestion pipelines on ADF Copy Data Tool within 10 minutes (Public Preview)</td><td>With this, you can build large-scale data copy pipelines with metadata-driven approach on copy data tool (Public Preview) within 10 minutes.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/get-metadata-driven-data-ingestion-pipelines-on-adf-within-10/ba-p/2528219">Learn more</a></td></tr>
<tr><td><b>Data Flow</b></td><td>New map functions added in data flow transformation functions</td><td>A new set of data flow transformation functions has been added to enable data engineers to easily generate, read, and update map data types and complex map structures.<br><a href="data-flow-map-functions.md">Learn more</a></td></tr>
-<tr><td><b>Integration Runtime</b></td><td>5 new regions available in Azure Data Factory Managed VNET (Public Preview)</td><td>These 5 new regions(China East2, China North2, US Gov Arizona, US Gov Texas, US Gov Virginia) are available in Azure Data Factory managed virtual network (Public Preview).<br><a href="managed-virtual-network-private-endpoint.md#azure-data-factory-managed-virtual-network-is-available-in-the-following-azure-regions">Learn more</a></td></tr>
+<tr><td><b>Integration Runtime</b></td><td>5 new regions available in Azure Data Factory Managed VNET (Public Preview)</td><td>These 5 new regions (China East2, China North2, US Gov Arizona, US Gov Texas, US Gov Virginia) are available in Azure Data Factory managed virtual network (Public Preview).<br></td></tr>
<tr><td rowspan=2><b>Developer Productivity</b></td><td>ADF homepage improvements</td><td>The Data Factory home page has been redesigned with better contrast and reflow capabilities. Additionally, a few sections have been introduced on the homepage to help you improve productivity in your data integration journey.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/the-new-and-refreshing-data-factory-home-page/ba-p/2515076">Learn more</a></td></tr>
<tr><td>New landing page for Azure Data Factory Studio</td><td>A new landing page is available for the Data Factory blade in the Azure portal.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/the-new-and-refreshing-data-factory-home-page/ba-p/2515076">Learn more</a></td></tr>
</table>
data-share Subscribe To Data Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/subscribe-to-data-share.md
Start by preparing your environment for the Azure CLI:
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)]
-Run the [az datashare consumer invitation list](/cli/azure/datashare/consumer/invitation#az-datashare-consumer-invitation-list) command to see your current invitations:
+Run the [az datashare consumer invitation list](/cli/azure/datashare/invitation#az-datashare-invitation-list) command to see your current invitations:
```azurecli az datashare consumer invitation list --subscription 11111111-1111-1111-1111-111111111111
Copy your invitation ID for use in the next section.
### [Azure CLI](#tab/azure-cli)
-Use the [az datashare consumer share-subscription create](/cli/azure/datashare/consumer/share-subscription#az-datashare-consumer-share-subscription-create) command to create the Data Share.
+Use the [az datashare consumer share-subscription create](/cli/azure/datashare/share-subscription#az-datashare-share-subscription-create) command to create the Data Share.
```azurecli az datashare consumer share-subscription create --resource-group share-rg \
Follow the steps below to configure where you want to receive data.
Use these commands to configure where you want to receive data.
-1. Run the [az datashare consumer share-subscription list-source-dataset](/cli/azure/datashare/consumer/share-subscription#az-datashare-consumer-share-subscription-list-source-dataset) command to get the data set ID:
+1. Run the `az datashare consumer share-subscription list-source-dataset` command to get the data set ID:
```azurecli az datashare consumer share-subscription list-source-dataset \
Use these commands to configure where you want to receive data.
\"storage_account_name\":\"datashareconsumersa\",\"kind\":\"BlobFolder\",\"prefix\":\"consumer\"}' ```
-1. Use the [az datashare consumer dataset-mapping create](/cli/azure/datashare/consumer/dataset-mapping#az-datashare-consumer-dataset-mapping-create) command to create the dataset mapping:
+1. Use the [az datashare consumer dataset-mapping create](/cli/azure/datashare/data-set-mapping#az-datashare-data-set-mapping-create) command to create the dataset mapping:
```azurecli az datashare consumer dataset-mapping create --resource-group "share-rg" \
Use these commands to configure where you want to receive data.
--subscription 11111111-1111-1111-1111-111111111111 ```
-1. Run the [az datashare consumer share-subscription synchronization start](/cli/azure/datashare/consumer/share-subscription/synchronization#az-datashare-consumer-share-subscription-synchronization-start) command to start dataset synchronization.
+1. Run the `az datashare consumer share-subscription synchronization start` command to start dataset synchronization.
```azurecli az datashare consumer share-subscription synchronization start \
Use these commands to configure where you want to receive data.
--subscription 11111111-1111-1111-1111-111111111111 ```
- Run the [az datashare consumer share-subscription synchronization list](/cli/azure/datashare/consumer/share-subscription/synchronization#az-datashare-consumer-share-subscription-synchronization-list) command to see a list of your synchronizations:
+ Run the `az datashare consumer share-subscription synchronization list` command to see a list of your synchronizations:
```azurecli az datashare consumer share-subscription synchronization list \
Use these commands to configure where you want to receive data.
--subscription 11111111-1111-1111-1111-111111111111 ```
- Use the [az datashare consumer share-subscription list-source-share-synchronization-setting](/cli/azure/datashare/consumer/share-subscription#az-datashare-consumer-share-subscription-list-source-share-synchronization-setting) command to see synchronization settings set on your share.
+ Use the [az datashare consumer share-subscription list-source-share-synchronization-setting](/cli/azure/datashare/share-subscription#az-datashare-share-subscription-list-source-share-synchronization-setting) command to see synchronization settings set on your share.
```azurecli az datashare consumer share-subscription list-source-share-synchronization-setting \
These steps only apply to snapshot-based sharing.
### [Azure CLI](#tab/azure-cli)
-Run the [az datashare consumer trigger create](/cli/azure/datashare/consumer/trigger#az-datashare-consumer-trigger-create) command to trigger a snapshot:
+Run the [az datashare consumer trigger create](/cli/azure/datashare/trigger#az-datashare-trigger-create) command to trigger a snapshot:
```azurecli az datashare consumer trigger create --resource-group "share-rg" \
databox Data Box Deploy Export Picked Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-export-picked-up.md
Previously updated : 01/25/2022 Last updated : 04/21/2022 # Customer intent: As an IT admin, I need to be able to return Data Box to upload on-premises data from my server onto Azure.
Follow the guidelines for the region you're shipping from if you're using Micros
[!INCLUDE [data-box-shipping-in-uae](../../includes/data-box-shipping-in-uae.md)]
+## [Norway](#tab/in-norway)
+++
### Self-managed shipping

[!INCLUDE [data-box-shipping-self-managed](../../includes/data-box-shipping-self-managed.md)]

--

## Erasure of data from Data Box

Once the device reaches the Azure datacenter, the Data Box erases the data on its disks as per the [NIST SP 800-88 Revision 1 guidelines](https://csrc.nist.gov/News/2014/Released-SP-800-88-Revision-1,-Guidelines-for-Medi).
ddos-protection Test Through Simulations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/test-through-simulations.md
na Previously updated : 09/08/2020 Last updated : 04/21/2022
-# Test through simulations
+# Test with simulation partners
It's a good practice to test your assumptions about how your services will respond to an attack by conducting periodic simulations. During testing, validate that your services or applications continue to function as expected and there's no disruption to the user experience. Identify gaps from both a technology and process standpoint and incorporate them in the DDoS response strategy. We recommend that you perform such tests in staging environments or during non-peak hours to minimize the impact to the production environment.
-We have partnered with [BreakingPoint Cloud](https://www.ixiacom.com/products/breakingpoint-cloud), a self-service traffic generator, to build an interface where Azure customers can generate traffic against DDoS Protection-enabled public endpoints for simulations. You can use the simulation to:
-
+Simulations help you:
- Validate how Azure DDoS Protection helps protect your Azure resources from DDoS attacks.
- Optimize your incident response process while under DDoS attack.
- Document DDoS compliance.
- Train your network security teams.
+## Azure DDoS simulation testing policy
+You may only simulate attacks using our approved testing partners:
+- [Red Button](https://www.red-button.net/): work with a dedicated team of experts to simulate real-world DDoS attack scenarios in a controlled environment.
+- [BreakingPoint Cloud](https://www.ixiacom.com/products/breakingpoint-cloud): a self-service traffic generator where you can generate traffic against DDoS Protection-enabled public endpoints for simulations.
+
+Our testing partners' simulation environments are built within Azure. You can only simulate against Azure-hosted public IP addresses that belong to an Azure subscription of your own, which will be validated by Azure Active Directory (Azure AD) before testing. Additionally, these target public IP addresses must be protected under Azure DDoS Protection.
++
> [!NOTE]
-> BreakingPoint Cloud is only available for the Public cloud.
+> BreakingPoint Cloud and Red Button are only available for the Public cloud.
## Prerequisites

- Before you can complete the steps in this tutorial, you must first create an [Azure DDoS Standard protection plan](manage-ddos-protection.md) with protected public IP addresses.
-- You must first create an account with [BreakingPoint Cloud](https://www.ixiacom.com/products/breakingpoint-cloud).
+- For BreakingPoint Cloud, you must first [create an account](https://www.ixiacom.com/products/breakingpoint-cloud).
-## Configure a DDoS test attack
+## BreakingPoint Cloud
+### Configure a DDoS test attack
1. Enter or select the following values, then select **Start test**:
It should now appear like this:
![DDoS Attack Simulation Example: BreakingPoint Cloud](./media/ddos-attack-simulation/ddos-attack-simulation-example-1.png)
-## Monitor and validate
+### Monitor and validate
1. Log in to https://portal.azure.com and go to your subscription.
1. Select the Public IP address you tested the attack on.
Once the resource is under attack, you should see that the value changes from **
This [API script](https://aka.ms/ddosbreakingpoint) can be used to automate DDoS testing by running once or using cron to schedule regular tests. This is useful to validate that your logging is configured properly and that detection and response procedures are effective. The scripts require a Linux OS (tested with Ubuntu 18.04 LTS) and Python 3. Install prerequisites and API client using the included script or by using the documentation on the [BreakingPoint Cloud](https://www.ixiacom.com/products/breakingpoint-cloud) website.
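
For example, a minimal sketch of scheduling the script with cron, assuming hypothetical script and log paths:

```bash
# Hypothetical paths; adjust to wherever the BreakingPoint API script is installed.
# Run the simulation every Monday at 02:00 and append the output to a log file.
(crontab -l 2>/dev/null; echo "0 2 * * 1 /usr/bin/python3 /opt/ddos/breakingpoint_test.py >> /var/log/ddos-simulation.log 2>&1") | crontab -
```
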
+## Red Button
+
+Red Button's [DDoS Testing](https://www.red-button.net/ddos-testing/) service suite includes three stages:
+
+1. **Planning session**: Red Button experts meet with your team to understand your network architecture, assemble technical details, and define clear goals and testing schedules. This includes planning the DDoS test scope and targets, attack vectors, and attack rates. The joint planning effort is detailed in a test plan document.
+2. **Controlled DDoS attack**: Based on the defined goals, the Red Button team launches a combination of multi-vector DDoS attacks. The test typically lasts between three and six hours. Attacks are securely executed using dedicated servers and are controlled and monitored using Red Button's management console.
+3. **Summary and recommendations**: The Red Button team provides you with a written DDoS Test Report outlining the effectiveness of DDoS mitigation. The report includes an executive summary of the test results, a complete log of the simulation, a list of vulnerabilities within your infrastructure, and recommendations on how to correct them.
+
+Here's an example of a [DDoS Test Report](https://www.red-button.net/wp-content/uploads/2021/06/DDoS-Test-Report-Example-with-Analysis.pdf) from Red Button:
+
+![DDoS Test Report Example](./media/ddos-attack-simulation/red-button-test-report-example.png)
+
+In addition, Red Button offers two other service suites, [DDoS 360](https://www.red-button.net/prevent-ddos-attacks-with-ddos360/) and [DDoS Incident Response](https://www.red-button.net/ddos-incident-response/), that can complement the DDoS Testing service suite.
+
## Next steps

- Learn how to [view and configure DDoS protection telemetry](telemetry.md).
defender-for-cloud Quickstart Enable Defender For Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-enable-defender-for-cosmos.md
You can enable Microsoft Defender for Cloud on a specific Azure Cosmos DB accoun
### [ARM template](#tab/arm-template)
-Use an Azure Resource Manager template to deploy an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled. For more information, see [Create an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled](https://azure.microsoft.com/resources/templates/cosmosdb-advanced-threat-protection-create-account/).
+Use an Azure Resource Manager template to deploy an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled. For more information, see [Create an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled](https://azure.microsoft.com/resources/templates/?term=cosmosdb-advanced-threat-protection-create-account).
digital-twins Concepts Apis Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-apis-sdks.md
To use the data plane APIs:
- You can see detailed information and usage examples by continuing to the [.NET (C#) SDK (data plane)](#net-c-sdk-data-plane) section of this article.
* You can use the Java SDK. To use the Java SDK...
  - You can view and install the package from Maven: [`com.azure:azure-digitaltwins-core`](https://search.maven.org/artifact/com.azure/azure-digitaltwins-core/1.0.0/jar)
- - You can view the [SDK reference documentation](/java/api/overview/azure/digitaltwins/client?view=azure-java-stable&preserve-view=true)
+ - You can view the [SDK reference documentation](/java/api/overview/azure/digitaltwins)
  - You can find the SDK source in GitHub: [Azure IoT Digital Twins client library for Java](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/digitaltwins/azure-digitaltwins-core)
* You can use the JavaScript SDK. To use the JavaScript SDK...
  - You can view and install the package from npm: [Azure Digital Twins Core client library for JavaScript](https://www.npmjs.com/package/@azure/digital-twins-core).
dms Migrate Azure Mysql Consistent Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migrate-azure-mysql-consistent-backup.md
+
+ Title: MySQL to Azure Database for MySQL Data Migration - MySQL Consistent Backup (Preview)
+description: Learn how to use the Azure Database for MySQL Data Migration - MySQL Consistent Backup for transaction consistency even without making the source server read-only
++++++++ Last updated : 04/19/2022+++
+# MySQL to Azure Database for MySQL Data Migration - MySQL Consistent Backup (Preview)
+
+MySQL Consistent Backup is a new feature that allows users to take a consistent backup of a MySQL server without losing data integrity at the source because of ongoing CRUD (Create, Read, Update, and Delete) operations. With this feature, transactional consistency is achieved without the need to set the source server to read-only mode.
+
+## Current implementation
+
+In the current implementation, users can enable the **Make Source Server Read Only** option during offline migration. This maintains the data integrity of the target database as the source is migrated by preventing Write/Delete operations on the source server during migration. When you make the source server read only as part of the migration process, the selection applies to all the source server's databases, regardless of whether they are selected for migration.
++
+## Disadvantages
+
+Making the source server read only prevents users from modifying the data, rendering the database unavailable for any update operations. However, if this option is not enabled, there is a possibility for data updates to occur during migration. As a result, migrated data could be inconsistent because the database snapshots would be read at different points in time.
+
+## Consistent Backup
+
+Consistent Backup allows data migration to proceed and maintains data consistency regardless of whether the source server is set as read only. To enable Consistent Backup, select the **[Preview] Enable Transactional Consistency** option.
++
+With **Enable Transactional Consistency** selected, you can maintain data consistency at the target even as traffic continues to the source server.
+
+### The Undo log
+
+The undo log makes repeatable reads possible and helps generate the snapshot that is required for the migration. The undo log is a collection of undo log records associated with a single read-write transaction. An undo log record contains information about how to undo the latest change by a transaction to a clustered index record. If another transaction needs to see the original data as part of a consistent read operation, the unmodified data is retrieved from undo log records. Transactional consistency is achieved through an isolation level of repeatable reads along with the undo log. On executing the Select query (for migration), the source server inspects the current contents of a table, compares them to the undo log, and rolls back all changes made since the point in time the migration started, before returning the results to the client. The undo log is enabled by default and isn't user controlled; it doesn't offer any APIs for control by the user.
+
+### How Consistent Backup works
+
+When you initiate a migration, the service flushes all tables on the source server with a **read** lock to obtain the point-in-time snapshot. This is done because a global lock is more reliable than attempting to lock individual databases or tables. As a result, even if you aren't migrating all databases in a server, they're locked as part of setting up the migration process. The migration service initiates a repeatable read and combines the current table state with the contents of the undo log for the snapshot. The **snapshot** is generated after obtaining the server-wide lock and spawning several connections for the migration. After the creation of all connections that will be used for the migration, the locks on the tables are released.
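
As an illustration only (the service's internal implementation may differ), the following sketch runs the same kind of statement sequence on a single mysql client connection; the host, credentials, and table name are placeholders:

```bash
mysql -h <source-host> -u <admin-user> -p -e "
FLUSH TABLES WITH READ LOCK;                 -- server-wide point-in-time lock
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION WITH CONSISTENT SNAPSHOT;  -- pin the snapshot (InnoDB only)
UNLOCK TABLES;                               -- release the locks; the snapshot remains
SELECT COUNT(*) FROM mydb.mytable;           -- reads still see the pinned snapshot
COMMIT;"
```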
+
+The migration threads are used to perform the migration with repeatable read enabled for all transactions, and the source server hides all new changes from the offline migration. Selecting a specific database in the Azure Database Migration Service (DMS) portal UI during the migration displays the migration status of all its tables, completed or in progress. If there are connection issues, the status of the database changes to **Retrying**, and error information is displayed if the migration fails.
+
+Repeatable reads are enabled to keep the undo logs accessible during the migration, which increases the storage required on the source because of long-running connections. The longer a migration runs, the more table changes occur and the more extensive the undo log's history of changes becomes. This also makes the migration run more slowly, because retrieving the unmodified data from a longer undo log takes more time, and it can increase the compute requirements and load on the source server.
+
+### The binary log
+
+The [binary log (or binlog)](https://dev.mysql.com/doc/refman/8.0/en/binary-log.html) is an artifact that is reported to the user after the offline migration is complete. Because the binlog position could change after the server is unlocked, the migration service records the initial binlog position as it spawns threads for the migration during the read lock. While the migration service attempts to obtain the locks and set up the migration, the binlog position displays the status **Waiting for data movement to start...**.
++
+The binlog keeps a record of all the CRUD operations in the source server. The DMS portal UI shows the binary log filename and position aligned to the time the locks were obtained on the source for the consistent snapshot; this doesn't affect the outcome of the offline migration. The binlog position is updated on the UI as soon as it's available, so the user doesn't have to wait for the migration to conclude.
++
+This binlog position can be used in conjunction with [Data-in replication](../mysql/concepts-data-in-replication.md) or third-party tools (such as Striim or Attunity) that provide for replaying binlog changes to a different server, if required.
+
+The binary log is deleted periodically, so the user must take necessary precautions if Change Data Capture (CDC) is used later to migrate the post-migration updates at the source. Configure the **binlog_expire_logs_seconds** parameter on the source server to ensure that binlogs aren't purged before the replica commits the changes. If non-zero, binary logs are purged after **binlog_expire_logs_seconds** seconds. After a successful cutover, you can reset the value. Users need to use the changes in the binlog to carry out an online migration. Users can take advantage of DMS to provide the initial seeding of the data and then stitch that together with the CDC solution of their choice to implement a minimal-downtime migration.
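
For example, a sketch of checking and extending binlog retention with the mysql client; the connection values and the seven-day retention are placeholder assumptions:

```bash
# Check the current retention, then keep binlogs for 7 days (604800 seconds).
# SET GLOBAL requires a user with sufficient privileges on the source server.
mysql -h <source-host> -u <admin-user> -p \
  -e "SHOW VARIABLES LIKE 'binlog_expire_logs_seconds'; SET GLOBAL binlog_expire_logs_seconds = 604800;"
```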
+
+## Prerequisites
+
+To complete the migration successfully with Consistent Backup enabled:
+
+- Ensure that the user who is attempting to flush tables with a **read lock** has the RELOAD or FLUSH permission.
+
+- Use the mysql client tool to determine whether **log_bin** is enabled on the source server, because the binary log isn't always turned on by default. To check, run the command **SHOW VARIABLES LIKE 'log_bin';** (a sketch follows this list).
+
+> [!NOTE]
+> With Azure Database for MySQL Single Server, which supports up to 4 TB, this parameter isn't enabled by default. However, if you promote a read replica for the source server and then delete the read replica, the parameter will be set to ON.
+
+- Configure the **binlog_expire_logs_seconds** parameter on the source server to ensure that binlog files aren't purged before the replica commits the changes. After a successful cutover, you can reset the value.
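
A minimal sketch of the **log_bin** check from the first prerequisite, with placeholder connection values:

```bash
mysql -h <source-host> -u <admin-user> -p -e "SHOW VARIABLES LIKE 'log_bin';"
# The Value column must show ON before you start the migration.
```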
+
+## Known issues and limitations
+
+The known issues and limitations associated with Consistent Backup fall broadly into two categories: locks and retries.
+
+> [!NOTE]
+> The migration service runs the **START TRANSACTION WITH CONSISTENT SNAPSHOT** query for all the worker threads to get the server snapshot. But this is supported only by InnoDB. More info [here](https://dev.mysql.com/doc/refman/8.0/en/commit.html).
+
+### Locks
+
+Typically, it is a straightforward process to obtain a lock, which should take between a few seconds and a couple of minutes to complete. In a few scenarios, however, attempts to obtain a lock on the source server can fail.
+
+- The presence of long-running queries could result in unnecessary downtime, because DMS could lock a subset of the tables and then time out waiting for the last few to become available. Before starting the migration, check for any long-running operations by running the **SHOW PROCESSLIST** command (a sketch follows this list).
+
+- If the source server is experiencing many write updates at the time a lock is requested, the lock can't be readily obtained and the request could fail after the lock-wait timeout. This can happen, for example, when batch processing tasks are in progress on the tables; while they run, the request for a lock may be denied. As mentioned earlier, the lock requested is a single global-level lock for the entire server, so even if only a single table or database is being processed, the lock request has to wait for the ongoing task to conclude.
+
+- Another limitation relates to migrating from an RDS source server. RDS doesn't support flush tables with a read lock, so a **LOCK TABLES** query is run on the selected tables under the hood. Because the tables are locked individually, the locking process can be less reliable, and locks can take longer to acquire.
+
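A quick sketch of the pre-migration check mentioned above, with placeholder connection values:

```bash
# Inspect the Time column (in seconds) for long-running statements that could
# block the global read lock.
mysql -h <source-host> -u <admin-user> -p -e "SHOW PROCESSLIST;"
```
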
+### Retries
+
+The migration handles transient connection issues, and additional connections are typically provisioned upfront for this purpose. The service looks at the migration settings, particularly the number of parallel read operations on the source, applies a factor (typically ~1.5), and creates that many connections up front. This way, the service makes sure operations can keep running in parallel. At any point in time, if there's a connection loss or the service is unable to obtain a lock, the service uses the surplus connections provisioned to retry the migration. If all the provisioned connections are exhausted, resulting in the loss of the point-in-time sync, the migration must be restarted from the beginning. In cases of both success and failure, all cleanup actions are undertaken by this offline migration service, and the user doesn't have to perform any explicit cleanup actions.
+
+## Next steps
+
+- Learn more about [Data-in Replication](../mysql/concepts-data-in-replication.md)
+
+- [Tutorial: Migrate MySQL to Azure Database for MySQL offline using DMS](tutorial-mysql-azure-mysql-offline-portal.md)
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-faqs.md
Your existing circuit will continue advertising the prefixes for Microsoft 365.
* Microsoft peering of ExpressRoute circuits that are configured on or after August 1, 2017 will not have any prefixes advertised until a route filter is attached to the circuit. You will see no prefixes by default.
+### If I have multiple virtual networks (VNets) connected to the same ExpressRoute circuit, can I use ExpressRoute for VNet-to-VNet connectivity?
+VNet-to-VNet connectivity over ExpressRoute is not recommended. Instead, configure [virtual network peering](../virtual-network/virtual-network-peering-overview.md).
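
As a sketch, the command below creates a peering between two VNets with the Azure CLI; the names and resource group are hypothetical, and a second peering in the reverse direction is required for two-way traffic:

```azurecli
az network vnet peering create \
  --name vnet1-to-vnet2 \
  --resource-group my-rg \
  --vnet-name vnet1 \
  --remote-vnet vnet2 \
  --allow-vnet-access
```
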
+
## <a name="expressRouteDirect"></a>ExpressRoute Direct

[!INCLUDE [ExpressRoute Direct](../../includes/expressroute-direct-faq-include.md)]
frontdoor Front Door Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-diagnostics.md
Front Door currently provides diagnostic logs. Diagnostic logs provide individua
| TimeTaken | The length of time from first byte of request into Front Door to last byte of response out, in seconds. |
| TrackingReference | The unique reference string that identifies a request served by Front Door, also sent as X-Azure-Ref header to the client. Required for searching details in the access logs for a specific request. |
| UserAgent | The browser type that the client used. |
-| ErrorInfo | This field contains the specific type of error for further troubleshooting. </br> Possible values include: </br> **NoError**: Indicates no error was found. </br> **CertificateError**: Generic SSL certificate error.</br> **CertificateNameCheckFailed**: The host name in the SSL certificate is invalid or doesn't match. </br> **ClientDisconnected**: Request failure because of client network connection. </br> **UnspecifiedClientError**: Generic client error. </br> **InvalidRequest**: Invalid request. It might occur because of malformed header, body, and URL. </br> **DNSFailure**: DNS Failure. </br> **DNSNameNotResolved**: The server name or address couldn't be resolved. </br> **OriginConnectionAborted**: The connection with the origin was stopped abruptly. </br> **OriginConnectionError**: Generic origin connection error. </br> **OriginConnectionRefused**: The connection with the origin wasn't able to established. </br> **OriginError**: Generic origin error. </br> **OriginInvalidResponse**: Origin returned an invalid or unrecognized response. </br> **OriginTimeout**: The timeout period for origin request expired. </br> **ResponseHeaderTooBig**: The origin returned too large of a response header. </br> **RestrictedIP**: The request was blocked because of restricted IP. </br> **SSLHandshakeError**: Unable to establish connection with origin because of SSL hand shake failure. </br> **UnspecifiedError**: An error occurred that didnΓÇÖt fit in any of the errors in the table. |
+| ErrorInfo | This field contains the specific type of error for further troubleshooting. </br> Possible values include: </br> **NoError**: Indicates no error was found. </br> **CertificateError**: Generic SSL certificate error.</br> **CertificateNameCheckFailed**: The host name in the SSL certificate is invalid or doesn't match. </br> **ClientDisconnected**: Request failure because of client network connection. </br> **UnspecifiedClientError**: Generic client error. </br> **InvalidRequest**: Invalid request. It might occur because of malformed header, body, and URL. </br> **DNSFailure**: DNS Failure. </br> **DNSNameNotResolved**: The server name or address couldn't be resolved. </br> **OriginConnectionAborted**: The connection with the origin was stopped abruptly. </br> **OriginConnectionError**: Generic origin connection error. </br> **OriginConnectionRefused**: The connection with the origin couldn't be established. </br> **OriginError**: Generic origin error. </br> **OriginInvalidResponse**: Origin returned an invalid or unrecognized response. </br> **OriginTimeout**: The timeout period for the origin request expired. </br> **ResponseHeaderTooBig**: The origin returned too large of a response header. </br> **RestrictedIP**: The request was blocked because of restricted IP. </br> **SSLHandshakeError**: Unable to establish connection with origin because of SSL handshake failure. </br> **UnspecifiedError**: An error occurred that didn't fit in any of the errors in the table. </br> **SSLMismatchedSNI**: The request was invalid because the HTTP message header didn't match the value presented in the TLS SNI extension during SSL/TLS connection setup.|
### Sent to origin shield deprecation

The raw log property **isSentToOriginShield** has been deprecated and replaced by a new field **isReceivedFromClient**. Use the new field if you're already using the deprecated field.
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 03/08/2022 Last updated : 04/21/2022 ++ # Azure Policy built-in policy definitions
hdinsight Cluster Reboot Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/cluster-reboot-vm.md
description: Learn how to reboot unresponsive VMs for Azure HDInsight clusters.
Previously updated : 06/22/2020 Last updated : 04/21/2022 # Reboot VMs for HDInsight clusters
When a node is rebooting, the cluster might become unhealthy, and jobs might slo
- The process table on the VM has many entries where the process has completed, but it's listed with "Terminated state." > [!NOTE]
-> Rebooting VMs is not supported for **HBase** and **Kafka** clusters because rebooting might cause data to be lost.
+> If you must reboot a worker node or ZooKeeper node in an HBase or Kafka cluster, be cautious: the reboot may cause stability issues for some time, depending on cluster sizing and workload pressure. Rebooting a worker node can cause unnecessary region or topic partition movements, and even a ZooKeeper node reboot can cause instability in the ZooKeeper cluster, which may in turn cause a region server or Kafka broker to go down.
+> Ideally, whenever possible, stop the HBase or Kafka service before the reboot to minimize the impact on new data written to the cluster.
## Use PowerShell to reboot VMs
Two steps are required to use the node reboot operation: list nodes and restart
```powershell
Restart-AzHDInsightHost -ClusterName myclustername -Name wn0-myclus, wn1-myclus
```
+> [!NOTE]
+> Rebooting nodes for HBase and Kafka cluster types using PowerShell is not supported.
## Use a REST API to reboot VMs
The actual names of the nodes that you want to reboot are specified in a JSON ar
]
```
+> [!NOTE]
+> Rebooting nodes for HBase and Kafka cluster types using REST API is not supported.
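
For other cluster types, a hedged sketch of calling the restart operation with `az rest`; the subscription, resource group, cluster name, and `api-version` values are placeholder assumptions to verify against the current HDInsight REST reference:

```azurecli
az rest --method post \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.HDInsight/clusters/myclustername/restartHosts?api-version=2018-06-01-preview" \
  --body '["wn0-myclus", "wn1-myclus"]'
```
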
+
## Next steps

* [Restart-AzHDInsightHost](/powershell/module/az.hdinsight/restart-azhdinsighthost)
hdinsight Hdinsight Hadoop Use Data Lake Storage Gen2 Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2-portal.md
-+ Last updated 09/07/2021
For more information on how managed identities work in Azure HDInsight, see [Man
## Create a storage account to use with Data Lake Storage Gen2
-Create an storage account to use with Azure Data Lake Storage Gen2.
+Create a storage account to use with Azure Data Lake Storage Gen2.
1. Sign in to the [Azure portal](https://portal.azure.com).
1. In the upper-left, select **Create a resource**.
For more information on other options during storage account creation, see [Quic
Assign the managed identity to the **Storage Blob Data Owner** role on the storage account.

1. In the [Azure portal](https://portal.azure.com), go to your storage account.
-1. Select your storage account, then select **Access control (IAM)** to display the access control settings for the account. Select the **Role assignments** tab to see the list of role assignments.
- :::image type="content" source="./media/hdinsight-hadoop-use-data-lake-storage-gen2/portal-access-control.png" alt-text="Screenshot showing storage access control settings":::
+1. Select **Access control (IAM)**.
-1. Select the **+ Add role assignment** button to add a new role.
-1. In the **Add role assignment** window, select the **Storage Blob Data Owner** role. Then, select the subscription that has the managed identity and storage account. Next, search to locate the user-assigned managed identity that you created previously. Finally, select the managed identity, and it will be listed under **Selected members**.
+1. Select **Add > Add role assignment**.
- :::image type="content" source="./media/hdinsight-hadoop-use-data-lake-storage-gen2/add-rbac-role3-window.png" alt-text="Screenshot showing how to assign an Azure role":::
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot showing Access control (IAM) page with Add role assignment menu open.":::
+
+1. On the **Role** tab, select **Storage Blob Data Owner**.
+
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-role-generic.png" alt-text="Screenshot showing Add role assignment page with Role tab selected.":::
+
+1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
+
+1. Select your subscription, select **User-assigned managed identity**, and then select your user-assigned managed identity.
+
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+
+ The user-assigned identity that you selected is now listed under the selected role.
+
   For more information about role assignments, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-1. Select **Save**. The user-assigned identity that you selected is now listed under the selected role.
1. After this initial setup is complete, you can create a cluster through the portal. The cluster must be in the same Azure region as the storage account. In the **Storage** tab of the cluster creation menu, select the following options:

   * For **Primary storage type**, select **Azure Data Lake Storage Gen2**.
iot-develop Quickstart Devkit Espressif Esp32 Freertos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-espressif-esp32-freertos.md
Select the **About** tab on the device page.
:::image type="content" source="media/quickstart-devkit-espressif-esp32/esp-device-info.png" alt-text="Screenshot of device information in IoT Central.":::
+> [!TIP]
+> To customize these views, edit the [device template](../iot-central/core/howto-edit-device-template.md).
+ ## Clean up resources If you no longer need the Azure resources created in this tutorial, you can delete them from the IoT Central portal. Optionally, if you continue to another article in this Getting Started content, you can keep the resources you've already created and reuse them.
iot-develop Quickstart Devkit Microchip Atsame54 Xpro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-microchip-atsame54-xpro.md
Select the **About** tab from the device page.
:::image type="content" source="media/quickstart-devkit-microchip-atsame54-xpro/iot-central-device-about-iar.png" alt-text="Screenshot of device information in IoT Central"::: :::zone-end
+> [!TIP]
+> To customize these views, edit the [device template](../iot-central/core/howto-edit-device-template.md).
+ ## Troubleshoot and debug If you experience issues building the device code, flashing the device, or connecting, see [Troubleshooting](troubleshoot-embedded-device-quickstarts.md).
iot-develop Quickstart Devkit Mxchip Az3166 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-mxchip-az3166.md
Select the **About** tab from the device page.
:::image type="content" source="media/quickstart-devkit-mxchip-az3166/iot-central-device-about.png" alt-text="Screenshot of device information in IoT Central":::
+> [!TIP]
+> To customize these views, edit the [device template](../iot-central/core/howto-edit-device-template.md).
+ ## Troubleshoot and debug If you experience issues building the device code, flashing the device, or connecting, see [Troubleshooting](troubleshoot-embedded-device-quickstarts.md).
iot-develop Quickstart Devkit Nxp Mimxrt1050 Evkb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-nxp-mimxrt1050-evkb.md
Select the **About** tab from the device page.
:::image type="content" source="media/quickstart-devkit-nxp-mimxrt1050-evkb/iot-central-device-about.png" alt-text="Screenshot of device information in IoT Central":::
+> [!TIP]
+> To customize these views, edit the [device template](../iot-central/core/howto-edit-device-template.md).
+ ## Troubleshoot and debug If you experience issues building the device code, flashing the device, or connecting, see [Troubleshooting](troubleshoot-embedded-device-quickstarts.md).
iot-develop Quickstart Devkit Nxp Mimxrt1060 Evk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-nxp-mimxrt1060-evk.md
Select the **About** tab from the device page.
:::image type="content" source="media/quickstart-devkit-nxp-mimxrt1060-evk/iot-central-device-about-iar.png" alt-text="Screenshot of NXP device information in IoT Central."::: :::zone-end
+> [!TIP]
+> To customize these views, edit the [device template](../iot-central/core/howto-edit-device-template.md).
+ ## Troubleshoot and debug If you experience issues building the device code, flashing the device, or connecting, see [Troubleshooting](troubleshoot-embedded-device-quickstarts.md).
iot-develop Quickstart Devkit Renesas Rx65n 2Mb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-renesas-rx65n-2mb.md
Select the **About** tab from the device page.
:::image type="content" source="media/quickstart-devkit-renesas-rx65n-2mb/iot-central-device-about.png" alt-text="Screenshot of device information in IoT Central":::
+> [!TIP]
+> To customize these views, edit the [device template](../iot-central/core/howto-edit-device-template.md).
+ ## Troubleshoot If you experience issues building the device code, flashing the device, or connecting, see [Troubleshooting](troubleshoot-embedded-device-quickstarts.md).
iot-develop Quickstart Devkit Renesas Rx65n Cloud Kit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-renesas-rx65n-cloud-kit.md
Select the **About** tab from the device page.
:::image type="content" source="media/quickstart-devkit-renesas-rx65n-cloud-kit/iot-central-device-about.png" alt-text="Screenshot of device information in IoT Central":::
+> [!TIP]
+> To customize these views, edit the [device template](../iot-central/core/howto-edit-device-template.md).
+ ## Troubleshoot If you experience issues building the device code, flashing the device, or connecting, see [Troubleshooting](troubleshoot-embedded-device-quickstarts.md).
iot-develop Quickstart Devkit Stm B L475e Freertos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l475e-freertos.md
Select the **About** tab from the device page.
:::image type="content" source="media/quickstart-devkit-stm-b-l475e-freertos/iot-central-device-about.png" alt-text="Screenshot of device information in IoT Central":::
+> [!TIP]
+> To customize these views, edit the [device template](../iot-central/core/howto-edit-device-template.md).
+ ## Troubleshoot and debug If you experience issues when you build the device code, flash the device, or connect, see [Troubleshooting](troubleshoot-embedded-device-quickstarts.md).
iot-develop Quickstart Devkit Stm B L475e https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l475e.md
Select the **About** tab from the device page.
:::image type="content" source="media/quickstart-devkit-stm-b-l475e/iot-central-device-about.png" alt-text="Screenshot of device information in IoT Central":::
+> [!TIP]
+> To customize these views, edit the [device template](../iot-central/core/howto-edit-device-template.md).
+ ## Troubleshoot and debug If you experience issues building the device code, flashing the device, or connecting, see [Troubleshooting](troubleshoot-embedded-device-quickstarts.md).
iot-develop Quickstart Devkit Stm B L4s5i https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l4s5i.md
Select the **About** tab from the device page.
:::zone-end :::zone pivot="iot-toolset-cmake"+
+> [!TIP]
+> To customize these views, edit the [device template](../iot-central/core/howto-edit-device-template.md).
+ ## Verify the device status To view the device status in IoT Central portal:
iot-dps Concepts Control Access Dps Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-control-access-dps-azure-ad.md
+
+ Title: Access control and security for DPS by using Azure Active Directory | Microsoft Docs
+description: Concepts - how to control access to Azure IoT Hub Device Provisioning Service (DPS) for back-end apps. Includes information about Azure Active Directory and RBAC.
++++ Last updated : 02/07/2022++
+# Control access to Azure IoT Hub Device Provisioning Service (DPS) by using Azure Active Directory (preview)
+
+You can use Azure Active Directory (Azure AD) to authenticate requests to Azure IoT Hub Device Provisioning Service (DPS) APIs, like creating enrollment groups and reading registration states. You can also use Azure role-based access control (Azure RBAC) to authorize those same service APIs. By using these technologies together, you can grant permissions to access Azure IoT Hub Device Provisioning Service (DPS) APIs to an Azure AD security principal. This security principal could be a user, group, or application service principal.
+
+Authenticating access by using Azure AD and controlling permissions by using Azure RBAC provides improved security and ease of use over [security tokens](how-to-control-access.md). To minimize potential security issues inherent in security tokens, we recommend that you use Azure AD with your Azure IoT Hub Device Provisioning Service (DPS) whenever possible.
+
+> [!NOTE]
+> Authentication with Azure AD isn't supported for the Azure IoT Hub Device Provisioning Service (DPS) *device APIs* (like register device or device registration status lookup). Use [symmetric keys](concepts-symmetric-key-attestation.md), [X.509](concepts-x509-attestation.md) or [TPM](concepts-tpm-attestation.md) to authenticate devices to Azure IoT Hub Device Provisioning Service (DPS).
+
+## Authentication and authorization
+
+When an Azure AD security principal requests access to an Azure IoT Hub Device Provisioning Service (DPS) API, the principal's identity is first *authenticated*. For authentication, the request needs to contain an OAuth 2.0 access token at runtime. The resource name for requesting the token is `https://iothubs.azure.net`. If the application runs in an Azure resource like an Azure VM, Azure Functions app, or Azure App Service app, it can be represented as a [managed identity](../active-directory/managed-identities-azure-resources/how-managed-identities-work-vm.md).
+
+After the Azure AD principal is authenticated, the next step is *authorization*. In this step, Azure IoT Hub Device Provisioning Service (DPS) uses the Azure AD role assignment service to determine what permissions the principal has. If the principal's permissions match the requested resource or API, Azure IoT Hub Device Provisioning Service (DPS) authorizes the request. So this step requires one or more Azure roles to be assigned to the security principal. Azure IoT Hub Device Provisioning Service (DPS) provides some built-in roles that have common groups of permissions.
+
+## Manage access to Azure IoT Hub Device Provisioning Service (DPS) by using Azure RBAC role assignment
+
+With Azure AD and RBAC, Azure IoT Hub Device Provisioning Service (DPS) requires the principal requesting the API to have the appropriate level of permission for authorization. To give the principal the permission, give it a role assignment.
+
+- If the principal is a user, group, or application service principal, follow the guidance in [Assign Azure roles by using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+- If the principal is a managed identity, follow the guidance in [Assign a managed identity access to a resource by using the Azure portal](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md).
+
+To ensure least privilege, always assign the appropriate role at the lowest possible [resource scope](#resource-scope), which is probably the Azure IoT Hub Device Provisioning Service (DPS) scope.
+
+Azure IoT Hub Device Provisioning Service (DPS) provides the following Azure built-in roles for authorizing access to Azure IoT Hub DPS APIs by using Azure AD and RBAC:
+
+| Role | Description |
+| - | -- |
+| Device Provisioning Service Data Contributor | Allows for full access to Device Provisioning Service data-plane operations. |
+| Device Provisioning Service Data Reader | Allows for full read access to Device Provisioning Service data-plane properties. |
++
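For example, a sketch of assigning the built-in data contributor role at DPS scope with the Azure CLI; the user, resource group, and DPS instance names are placeholders:

```azurecli
az role assignment create \
  --role "Device Provisioning Service Data Contributor" \
  --assignee "user@contoso.com" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.Devices/provisioningServices/my-dps"
```
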
+You can also define custom roles to use with Azure IoT Hub Device Provisioning Service (DPS) by combining the [permissions](#permissions-for-azure-iot-hub-device-provisioning-service-dps-apis) that you need. For more information, see [Create custom roles for Azure role-based access control](../role-based-access-control/custom-roles.md).
+
+### Resource scope
+
+Before you assign an Azure RBAC role to a security principal, determine the scope of access that the security principal should have. It's always best to grant only the narrowest possible scope. Azure RBAC roles defined at a broader scope are inherited by the resources beneath them.
+
+This list describes the levels at which you can scope access to Azure IoT Hub Device Provisioning Service (DPS), starting with the narrowest scope:
+
+- **The Azure IoT Hub Device Provisioning Service (DPS).** At this scope, a role assignment applies to the Azure IoT Hub Device Provisioning Service (DPS). Role assignment at smaller scopes, like enrollment group or individual enrollment, isn't supported.
+- **The resource group.** At this scope, a role assignment applies to all DPS instances in the resource group.
+- **The subscription.** At this scope, a role assignment applies to all DPS instances in all resource groups in the subscription.
+- **A management group.** At this scope, a role assignment applies to all DPS instances in all resource groups in all subscriptions in the management group.
+
+## Permissions for Azure IoT Hub Device Provisioning Service (DPS) APIs
+
+The following table describes the permissions available for Azure IoT Hub Device Provisioning Service (DPS) API operations. To enable a client to call a particular operation, ensure that the client's assigned RBAC role offers sufficient permissions for the operation.
+
+| RBAC action | Description |
+|-|-|
+| `Microsoft.Devices/provisioningServices/attestationmechanism/details/action` | Fetch attestation mechanism details |
+| `Microsoft.Devices/provisioningServices/enrollmentGroups/read` | Read enrollment groups |
+| `Microsoft.Devices/provisioningServices/enrollmentGroups/write` | Write enrollment groups |
+| `Microsoft.Devices/provisioningServices/enrollmentGroups/delete` | Delete enrollment groups |
+| `Microsoft.Devices/provisioningServices/enrollments/read` | Read enrollments |
+| `Microsoft.Devices/provisioningServices/enrollments/write` | Write enrollments |
+| `Microsoft.Devices/provisioningServices/enrollments/delete` | Delete enrollments |
+| `Microsoft.Devices/provisioningServices/registrationStates/read` | Read registration states |
+| `Microsoft.Devices/provisioningServices/registrationStates/delete` | Delete registration states |
++
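As a sketch, a custom read-only role assembled from the actions in the preceding table; the role name and scope are placeholders, and whether these actions belong under `DataActions` can be confirmed by inspecting the built-in roles with `az role definition list`:

```azurecli
az role definition create --role-definition '{
  "Name": "DPS Enrollment Reader (sample)",
  "Description": "Read-only access to enrollments, enrollment groups, and registration states.",
  "Actions": [],
  "DataActions": [
    "Microsoft.Devices/provisioningServices/enrollmentGroups/read",
    "Microsoft.Devices/provisioningServices/enrollments/read",
    "Microsoft.Devices/provisioningServices/registrationStates/read"
  ],
  "AssignableScopes": ["/subscriptions/<subscription-id>"]
}'
```
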
+## Azure IoT extension for Azure CLI
+
+Most commands against Azure IoT Hub Device Provisioning Service (DPS) support Azure AD authentication. You can control the type of authentication used to run commands by using the `--auth-type` parameter, which accepts `key` or `login` values. The `key` value is the default.
+
+- When `--auth-type` has the `key` value, the CLI automatically discovers a suitable policy when it interacts with Azure IoT Hub Device Provisioning Service (DPS).
+
+- When `--auth-type` has the `login` value, an access token for the principal signed in to the Azure CLI is used for the operation.
+
+- The following commands currently support `--auth-type` (an example follows the list):
+ - `az iot dps enrollment`
+ - `az iot dps enrollment-group`
+ - `az iot dps registration`
+
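
For example, a sketch of one of the commands above with Azure AD authentication; the DPS and resource group names are placeholders:

```azurecli
az iot dps enrollment list \
  --dps-name my-dps \
  --resource-group my-rg \
  --auth-type login
```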
+For more information, see the [Azure IoT extension for Azure CLI release page](https://github.com/Azure/azure-iot-cli-extension/releases/tag/v0.13.0).
+
+## SDKs and samples
+
+- [Azure IoT SDKs for Node.js Provisioning Service](https://aka.ms/IoTDPSNodeJSSDKRBAC)
+ - [Sample](https://aka.ms/IoTDPSNodeJSSDKRBACSample)
+- [Azure IoT SDK for Java Preview Release](https://aka.ms/IoTDPSJavaSDKRBAC)
+  - [Sample](https://aka.ms/IoTDPSJavaSDKRBACSample)
+- [Microsoft Azure IoT SDKs for .NET Preview Release](https://aka.ms/IoTDPScsharpSDKRBAC)
+ - [Sample](https://aka.ms/IoTDPScsharpSDKRBACSample)
+
+## Azure AD access from the Azure portal
+
+>[!NOTE]
+>Azure AD access from the Azure portal is currently not available during preview.
+
+## Next steps
+
+- For more information on the advantages of using Azure AD in your application, see [Integrating with Azure Active Directory](../active-directory/develop/active-directory-how-to-integrate.md).
+- For more information on requesting access tokens from Azure AD for users and service principals, see [Authentication scenarios for Azure AD](../active-directory/develop/authentication-vs-authorization.md).
iot-dps Concepts Control Access Dps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-control-access-dps.md
+
+ Title: Access control and security for Azure IoT Hub Device Provisioning Service | Microsoft Docs
+description: Overview of how to control access to Azure IoT Hub Device Provisioning Service (DPS). Includes links to in-depth articles on Azure Active Directory integration (Public Preview) and SAS options.
++++ Last updated : 04/20/2022+++
+# Control access to Azure IoT Hub Device Provisioning Service (DPS)
+
+This article describes the available options for securing your Azure IoT Hub Device Provisioning Service (DPS). The provisioning service uses *authentication* and *permissions* to grant access to each endpoint. Permissions allow the authentication process to limit access to a service instance based on functionality.
+
+There are two ways to control access to Azure IoT Hub DPS:
+
+- **Shared access signatures** let you group permissions and grant them to applications using access keys and signed security tokens. To learn more, see [Control access to Azure IoT Hub DPS with Shared Access Signatures and security tokens](how-to-control-access.md).
+- **Azure Active Directory (Azure AD) integration (Public Preview)** for service APIs. Azure provides identity-based authentication with Azure Active Directory and fine-grained authorization with Azure role-based access control (Azure RBAC). Azure AD and RBAC integration is supported for Azure IoT Hub DPS service APIs only. To learn more, see [Control access to Azure IoT Hub DPS with Azure Active Directory (Public Preview)](concepts-control-access-dps-azure-ad.md).
+++
+## Next steps
+
+- [Control access to Azure IoT Hub DPS with Shared Access Signatures and security tokens](how-to-control-access.md)
+- [Control access to Azure IoT Hub DPS with Azure Active Directory (Public Preview)](concepts-control-access-dps-azure-ad.md)
iot-dps How To Control Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-control-access.md
Title: Security endpoints in Microsoft Azure IoT Device Provisioning Service
-description: Concepts - how to control access to IoT Device Provisioning Service (DPS) for backend apps. Includes information about security tokens.
+ Title: Access control and security for DPS by using shared access signatures | Microsoft Docs
+description: Concepts - how to control access to Azure IoT Hub Device Provisioning Service (DPS) for backend apps. Includes information about security tokens.
-# Control access to Azure IoT Hub Device Provisioning Service
+# Control access to Azure IoT Hub Device Provisioning Service (DPS) with shared access signatures and security tokens
-This article describes the available options for securing your IoT Device Provisioning service. The provisioning service uses *authentication* and *permissions* to grant access to each endpoint. Permissions allow the authentication process to limit access to a service instance based on functionality.
+This article describes the available options for securing your Azure IoT Hub Device Provisioning Service (DPS). The provisioning service uses *authentication* and *permissions* to grant access to each endpoint. Permissions allow the authentication process to limit access to a service instance based on functionality.
This article discusses:
The result, which would grant access to read all enrollment records, would be:
`SharedAccessSignature sr=mydps.azure-devices-provisioning.net&sig=JdyscqTpXdEJs49elIUCcohw2DlFDR3zfH5KqGJo4r4%3D&se=1456973447&skn=enrollmentread`
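As a minimal Python sketch of this token construction (the policy key is a placeholder, and the helper name is illustrative):

```python
from base64 import b64decode, b64encode
from hashlib import sha256
from hmac import HMAC
from time import time
from urllib.parse import quote_plus, urlencode

def generate_sas_token(uri, key, policy_name, expiry_in_seconds=3600):
    """Build a DPS shared access signature from a base64-encoded policy key."""
    ttl = int(time()) + expiry_in_seconds
    # Sign the URL-encoded resource URI plus the expiry time
    sign_key = f"{quote_plus(uri)}\n{ttl}"
    signature = b64encode(
        HMAC(b64decode(key), sign_key.encode("utf-8"), sha256).digest()
    ).decode("utf-8")
    return "SharedAccessSignature " + urlencode(
        {"sr": uri, "sig": signature, "se": str(ttl), "skn": policy_name}
    )

# The key below is a placeholder, not a real secret
print(generate_sas_token(
    "mydps.azure-devices-provisioning.net", "<base64-policy-key>", "enrollmentread"))
```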
-## Device Provisioning Service permissions
+## SDKs and samples
+
+- [Azure IoT SDK for Java Preview Release](https://aka.ms/IoTDPSJavaSDKRBAC)
+ - [Sample](https://aka.ms/IoTDPSJavaSDKSASSample)
+- [Microsoft Azure IoT SDKs for .NET Preview Release](https://aka.ms/IoTDPScsharpSDKRBAC)
+ - [Sample](https://aka.ms/IoTDPSscharpSDKSASSample)
+
+## Reference topics
+
+The following reference topics provide more information about controlling access to your IoT Device Provisioning Service.
+
+### Device Provisioning Service permissions
The following table lists the permissions you can use to control access to your IoT Device Provisioning Service.
iot-edge How To Configure Iot Edge For Linux On Windows Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-iot-edge-for-linux-on-windows-networking.md
[!INCLUDE [iot-edge-version-all-supported](../../includes/iot-edge-version-all-supported.md)]
-This article will provide help you decide which networking option is best for your scenario and get insights into IoT Edge for Linux on Windows (EFLOW) configuration requirements.
+This article will help you decide which networking option is best for your scenario and provide insights into IoT Edge for Linux on Windows (EFLOW) configuration requirements.
To connect the IoT Edge for Linux on Windows (EFLOW) virtual machine over a network to your host, to other virtual machines on your Windows host, and to other devices/locations on an external network, the virtual machine networking must be configured accordingly. The easiest way to establish basic networking on Windows Client SKUs is by using the **default switch**, which is already created when enabling the Windows Hyper-V feature. However, on Windows Server SKU devices, networking is a bit more complicated because there's no **default switch** available. For more information about virtual switch creation for Windows Server, see [Create virtual switch for Linux on Windows](./how-to-create-virtual-switch.md).
-For more information about EFLOW networking concepts, see [IoT Edge for Linux on Windows networking](./nested-virtualization.md).
+For more information about EFLOW networking concepts, see [IoT Edge for Linux on Windows networking](./iot-edge-for-linux-on-windows-networking.md).
## Configure VM virtual switch
-The first step before deploying the EFLOW virtual machine, is to determine which type of virtual switch you'll use. For more information about EFLOW supported virtual switches, see [EFLOW virtual switch choices](./iot-edge-for-linux-on-windows-networking.md). Once you determine the type of virtual switch that you want to use, make sure to create the virtual switch correctly. For more information about virtual switch creation, see [Create a virtual switch for Hyper-V virtual machines](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-switch-for-hyper-v-virtual-machines).
+The first step before deploying the EFLOW virtual machine is to determine which type of virtual switch you'll use. For more information about EFLOW supported virtual switches, see [EFLOW virtual switch choices](./iot-edge-for-linux-on-windows-networking.md). Once you determine the type of virtual switch that you want to use, make sure to create the virtual switch correctly. For more information about virtual switch creation, see [Create a virtual switch for Hyper-V virtual machines](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-switch-for-hyper-v-virtual-machines).
>[!NOTE] > If you're using a Windows client and you want to use the **default switch**, then no switch creation is needed, and the `-vSwitchType` and `-vSwitchName` parameters aren't needed.
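As a hedged example (the switch name is a placeholder), deploying EFLOW against a previously created external switch might look like:

```powershell
# Deploy the EFLOW VM attached to an existing external virtual switch
Deploy-Eflow -vSwitchType "External" -vSwitchName "ExternalSwitch"
```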
microsoft.com. 0 IN A 40.76.4.15
Read more about [Azure IoT Edge for Linux on Windows Security](./iot-edge-for-linux-on-windows-security.md).
-Stay up-to-date with the latest [IoT Edge for Linux on Windows updates](./iot-edge-for-linux-on-windows.md)
+Stay up-to-date with the latest [IoT Edge for Linux on Windows updates](./iot-edge-for-linux-on-windows-updates.md).
iot-edge Iot Edge For Linux On Windows Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-security.md
In the EFLOW Continuous Release (CR) version, we introduced a change in the tran
Read more about [Windows IoT security premises](/windows/iot/iot-enterprise/os-features/security)
-Stay up-to-date with the latest [IoT Edge for Linux on Windows updates](./iot-edge-for-linux-on-windows.md)
+Stay up-to-date with the latest [IoT Edge for Linux on Windows updates](./iot-edge-for-linux-on-windows-updates.md).
iot-edge Iot Edge For Linux On Windows Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-support.md
Azure IoT Edge for Linux on Windows supports the following architectures:
Azure IoT Edge for Linux on Windows can run in Windows virtual machines. Using a virtual machine as an IoT Edge device is common when customers want to augment existing infrastructure with edge intelligence. In order to run the EFLOW virtual machine inside a Windows VM, the host VM must support nested virtualization. There are two forms of nested virtualization compatible with Azure IoT Edge for Linux on Windows. Users can choose to deploy through a local VM or Azure VM. For more information, see [EFLOW Nested virtualization](./nested-virtualization.md).
-### VMware virtual machine
-
-Azure IoT Edge for Linux on Windows supports running inside a Windows virtual machine running on top of [VMware ESXi](https://www.vmware.com/products/esxi-and-esx.html) product family. Specific networking and virtualization configurations are needed to support this scenario. For more information about VMware configuration, see [EFLOW Nested virtualization](./nested-virtualization.md).
- ## Releases
A Windows device with the following minimum requirements:
* Virtualization support * On Windows 10, enable Hyper-V. For more information, see [Install Hyper-V on Windows 10](/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v). * On Windows Server, install the Hyper-V role and create a default network switch. For more information, see [Nested virtualization for Azure IoT Edge for Linux on Windows](./nested-virtualization.md).
- * On a virtual machine, configure nested virtualization. For more information, see [nested virtualization](./nested-virtualization.md).
+ * On a virtual machine, configure nested virtualization. For more information, see [nested virtualization](./nested-virtualization.md).
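+
+For reference, the Hyper-V prerequisites above can typically be enabled from an elevated PowerShell session, as in this sketch:
+
+```powershell
+# Windows 10/11 client: enable the Hyper-V optional feature
+Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
+
+# Windows Server: install the Hyper-V role and management tools
+Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
+```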
+
+## Next steps
+
+Read more about [IoT Edge for Linux on Windows security premises](./iot-edge-for-linux-on-windows-security.md).
+
+Stay up-to-date with the latest [IoT Edge for Linux on Windows updates](./iot-edge-for-linux-on-windows-updates.md).
iot-edge Iot Edge For Linux On Windows Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-updates.md
As explained before, IoT Edge for Linux on Windows updates are serviced using M
1. **CSP Policies** - By using the **Update/AllowMUUpdateService** CSP Policy - For more information about Microsoft Updates CSP policy, see [Policy CSP - MU Update](/windows/client-management/mdm/policy-csp-update#update-allowmuupdateservice).
-1. **Manually manage Microsoft Updates** - For more information about how to Opt-In to Microsoft Updates, see [Opt-In to Microsoft Update](/windows/win32/wua_sdk/opt-in-to-microsoft-update):
+1. **Manually manage Microsoft Updates** - For more information about how to Opt-In to Microsoft Updates, see [Opt-In to Microsoft Update](/windows/win32/wua_sdk/opt-in-to-microsoft-update).
<!-- 1.1 --> :::moniker range="iotedge-2018-06"
IoT Edge for Linux on Windows doesn't support migrations between the different r
## Next steps
-View the latest [Azure IoT Edge for Linux on Windows releases](https://github.com/Azure/iotedge-eflow/releases).
+View the latest [IoT Edge for Linux on Windows releases](https://github.com/Azure/iotedge-eflow/releases).
-Stay up-to-date with recent updates and announcements in the [Internet of Things blog](https://azure.microsoft.com/blog/topics/internet-of-things/)
+Read more about [IoT Edge for Linux on Windows security premises](./iot-edge-for-linux-on-windows-security.md).
iot-edge Nested Virtualization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/nested-virtualization.md
This is the baseline approach for any Windows VM that hosts Azure IoT Edge for L
If you're using Windows Server, make sure you [install the Hyper-V role](/windows-server/virtualization/hyper-v/get-started/install-the-hyper-v-role-on-windows-server).
-## Deployment on Windows VM on VMware
-
-VMware ESXi [6.7](https://docs.vmware.com/en/VMware-vSphere/6.7/vsphere-esxi-67-installation-setup-guide.pdf) and [7.0](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-vcenter-server-70-release-notes.html) versions support nested virtualization needed for hosting Azure IoT Edge for Linux on Windows on top of a Windows virtual machine.
-
-To set up an Azure IoT Edge for Linux on Windows on a VMware ESXi Windows Server virtual machine, use the following steps:
-
-1. Create a Windows virtual machine on the VMware ESXi host. For more information about VMware VM deployment, see [VMware - Deploying Virtual Machines](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-39D19B2B-A11C-42AE-AC80-DDA8682AB42C.html).
-
-1. Turn off the virtual machine created in previous step.
-
-1. Select the Windows virtual machine and then **Edit settings**.
-
-1. Search for _Hardware virtualization_ and turn on _Expose hardware assisted virtualization to the guest OS_.
-
-1. Select **Save** and start the virtual machine.
-
-1. Install Hyper-V hypervisor. If you're using Windows client, make sure you [Install Hyper-V on Windows 10](/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v). If you're using Windows Server, make sure you [install the Hyper-V role](/windows-server/virtualization/hyper-v/get-started/install-the-hyper-v-role-on-windows-server).
-
-> [!NOTE]
-> For VMware Windows virtual machines, if you plan to use an **external virtual switch** for the EFLOW virtual machine networking, make sure you enable _Promiscious mode_. For more information, see [Configuring promiscuous mode on a virtual switch or portgroup](https://kb.vmware.com/s/article/1004099). Failing to do so will result in EFLOW installation errors.
-- ## Deployment on Azure VMs Azure IoT Edge for Linux on Windows isn't compatible on an Azure VM running the Server SKU unless a script is executed that brings up a default switch. For more information on how to bring up a default switch, see [Create virtual switch for Linux on Windows](how-to-create-virtual-switch.md).
iot-hub Iot Hub Amqp Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-amqp-support.md
Previously updated : 04/30/2019 Last updated : 04/21/2022
while True:
You can also send telemetry messages from a device by using AMQP. The device can optionally provide a dictionary of application properties, or various message properties, such as message ID.
+To route messages based on the message body, you must set the `content_type` property to `application/json;charset=utf-8`. To learn more about routing messages based on either message properties or the message body, see the [IoT Hub message routing query syntax documentation](iot-hub-devguide-routing-query-syntax.md).
+ The following code snippet uses the [uAMQP library in Python](https://github.com/Azure/azure-uamqp-python) to send device-to-cloud messages from a device. ```python
msg_props = uamqp.message.MessageProperties()
msg_props.message_id = str(uuid.uuid4()) msg_props.creation_time = None msg_props.correlation_id = None
-msg_props.content_type = None
+msg_props.content_type = 'application/json;charset=utf-8'
msg_props.reply_to_group_id = None msg_props.subject = None msg_props.user_id = None
iot-hub Iot Hub Devguide Device Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-device-twins.md
Previously updated : 09/29/2020 Last updated : 01/31/2022
Tags, desired properties, and reported properties are JSON objects with the foll
* **Keys**: All keys in JSON objects are UTF-8 encoded, case-sensitive, and up to 1 KB in length. Allowed characters exclude Unicode control characters (segments C0 and C1), and `.`, `$`, and SP.
+ > [!NOTE]
+ > IoT Hub queries used in [Message Routing](./iot-hub-devguide-routing-query-syntax.md) don't support whitespace or any of the following characters as part of a key name: `()<>@,;:\"/?={}`.
+ * **Values**: All values in JSON objects can be of the following JSON types: boolean, number, string, object. Arrays are also supported. * Integers can have a minimum value of -4503599627370496 and a maximum value of 4503599627370495.
iot-hub Iot Hub Devguide Messages Construct https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-construct.md
Previously updated : 11/19/2021 Last updated : 2/7/2022 # Create and read IoT Hub messages
-To support seamless interoperability across protocols, IoT Hub defines a common message format for all device-facing protocols. This message format is used for both [device-to-cloud routing](iot-hub-devguide-messages-d2c.md) and [cloud-to-device](iot-hub-devguide-messages-c2d.md) messages.
+To support seamless interoperability across protocols, IoT Hub defines a common set of messaging features that are available in all device-facing protocols. These features can be used in both [device-to-cloud message routing](iot-hub-devguide-messages-d2c.md) and [cloud-to-device messages](iot-hub-devguide-messages-c2d.md).
[!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-partial.md)]
An IoT Hub message consists of:
* A set of *application properties*. A dictionary of string properties that the application can define and access, without needing to deserialize the message body. IoT Hub never modifies these properties.
-* An opaque binary body.
+* A message body, which can be any type of data.
+
+Each device protocol implements setting properties in different ways. For details, see the [MQTT](./iot-hub-mqtt-support.md) and [AMQP](./iot-hub-amqp-support.md) developer guides.
Property names and values can only contain ASCII alphanumeric characters, plus ``{'!', '#', '$', '%, '&', ''', '*', '+', '-', '.', '^', '_', '`', '|', '~'}`` when you send device-to-cloud messages using the HTTPS protocol or send cloud-to-device messages.
Device-to-cloud messaging with IoT Hub has the following characteristics:
For more information about how to encode and decode messages sent using different protocols, see [Azure IoT SDKs](iot-hub-devguide-sdks.md).
+> [!NOTE]
+> Each IoT Hub protocol provides a message content type property which is respected when routing data to custom endpoints. To have your data properly handled at the destination (for example, JSON being treated as a parsable string instead of Base64 encoded binary data), you must provide the appropriate content type and charset for the message.
+>
+
+To use your message body in an IoT Hub routing query, you must provide a valid JSON object for the message and set the content type property of the message to `application/json;charset=utf-8`.
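+
+In practice, a device might set both the body and the content type as in this minimal sketch using the Python device SDK (assuming the `azure-iot-device` package is installed; the connection string is a placeholder):
+
+```python
+import json
+
+from azure.iot.device import IoTHubDeviceClient, Message
+
+client = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")
+
+payload = {"timestamp": "2022-02-08T20:10:46Z", "tag_name": "spindle_speed", "tag_value": 100}
+msg = Message(json.dumps(payload))
+# Both values are required for the body to be queryable by routing rules
+msg.content_type = "application/json"
+msg.content_encoding = "utf-8"
+client.send_message(msg)
+```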
+
+A valid, routable message body may look like the following:
+
+```json
+{
+ "timestamp": "2022-02-08T20:10:46Z",
+ "tag_name": "spindle_speed",
+ "tag_value": 100
+}
+```
+ ## System Properties of **D2C** IoT Hub messages | Property | Description |User Settable?|Keyword for </br>routing query|
iot-hub Iot Hub Devguide Routing Query Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-routing-query-syntax.md
$body.Weather.Temperature = 50 AND $twin.properties.desired.telemetryConfig.send
$twin.tags.deploymentLocation.floor = 1 ```
-Routing query on body or device twin with a period in the payload or property name is not supported.
+## Limitations
+
+Routing queries don't support using whitespace or any of the following characters in property names, the message body path, or the device/module twin path: `()<>@,;:\"/?={}`.
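+
+For example, a query can reference a body property with an unreserved name (the property names here are hypothetical):
+
+```
+$body.machineTemperature > 50
+```
+
+whereas a property named `machine temperature` (containing a space) or `machine(temp)` can't be referenced in a routing query at all.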
+ ## Next steps
iot-hub Iot Hub Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-managed-identity.md
Last updated 09/02/2021 + # IoT Hub support for managed identities
Managed identities can be used for egress connectivity from IoT Hub to other Azu
## Configure message routing with managed identities
-In this section, we use the [message routing](iot-hub-devguide-messages-d2c.md) to an event hub custom endpoint as an example. The example applies to other routing custom endpoints.
+In this section, we use the [message routing](iot-hub-devguide-messages-d2c.md) to an event hub custom endpoint as an example. The example applies to other routing custom endpoints.
-1. First we need to go to your event hub in Azure portal, to assign the managed identity the right access. In your event hub, navigate to the **Access control (IAM)** tab and click **Add** then **Add a role assignment**. If you don't have permissions to assign roles, the Add role assignment option will be disabled.
+1. Go to your event hub in the Azure portal to assign the managed identity the right access.
-2. Select **Event Hubs Data Sender as role**.
+1. Select **Access control (IAM)**.
+
+1. Select **Add > Add role assignment**.
+
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot showing Access control (IAM) page with Add role assignment menu open.":::
+
+1. On the **Role** tab, select **Azure Event Hubs Data Sender**.
> [!NOTE]
- > For storage account, select **Storage Blob Data Contributor** ([*not* Contributor or Storage Account Contributor](../storage/blobs/assign-azure-role-data-access.md)) as **role**. For service bus, select **Service bus Data Sender** as **role**.
+ > For a storage account, select **Storage Blob Data Contributor** ([*not* Contributor or Storage Account Contributor](../storage/blobs/assign-azure-role-data-access.md)) as the role. For a service bus, select **Azure Service Bus Data Sender**.
-3. For user-assigned, choose **User-assigned managed identity** under **Assign access to**. Select your subscription and your user-assigned managed identity in the drop-down list. Click the **Save** button.
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-role-generic.png" alt-text="Screenshot showing Add role assignment page with Role tab selected.":::
- :::image type="content" source="./media/iot-hub-managed-identity/eventhub-iam-user-assigned.png" alt-text="Screenshot that shows message routing with user assigned.":::
+1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
-4. For system-assigned, under **Assign access to** choose **User, group, or service principal** and select your IoT Hub's resource name in the drop-down list. Click **Save**.
+1. For user-assigned managed identities, select your subscription, select **User-assigned managed identity**, and then select your user-assigned managed identity.
- :::image type="content" source="./media/iot-hub-managed-identity/eventhub-iam-system-assigned.png" alt-text="Screenshot that shows message routing with system assigned.":::
+1. For system-assigned managed identities, select your subscription, select **All system-assigned managed identities**, and then select your IoT Hub's resource name.
- If you need to restrict the connectivity to your custom endpoint through a VNet, you need to turn on the trusted Microsoft first party exception, to give your IoT hub access to the specific endpoint. For example, if you're adding an event hub custom endpoint, navigate to the **Firewalls and virtual networks** tab in your event hub and enable **Allow access from selected networks** option. Under the **Exceptions** list, check the box for **Allow trusted Microsoft services to access event hubs**. Click the **Save** button. This also applies to storage account and service bus. Learn more about [IoT Hub support for virtual networks](./virtual-network-support.md).
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+
+ For more information about role assignments, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+
+1. If you need to restrict connectivity to your custom endpoint through a VNet, turn on the trusted Microsoft first-party exception to give your IoT hub access to the specific endpoint. For example, if you're adding an event hub custom endpoint, navigate to the **Firewalls and virtual networks** tab in your event hub and enable the **Allow access from selected networks** option. Under the **Exceptions** list, check the box for **Allow trusted Microsoft services to access event hubs**, and then select **Save**. This also applies to storage accounts and service bus namespaces. Learn more about [IoT Hub support for virtual networks](./virtual-network-support.md).
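+
+Equivalently, here's a hedged Azure CLI sketch of the same role assignment for a system-assigned identity (all names and IDs are placeholders):
+
+```azurecli
+# Look up the hub's system-assigned identity, then grant it send rights
+principalId=$(az iot hub show -n MyIotHub --query identity.principalId -o tsv)
+az role assignment create \
+    --role "Azure Event Hubs Data Sender" \
+    --assignee-object-id $principalId \
+    --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.EventHub/namespaces/<namespace>/eventhubs/<event-hub>"
+```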
> [!NOTE] > You need to complete above steps to assign the managed identity the right access before adding the event hub as a custom endpoint in IoT Hub. Please wait a few minutes for the role assignment to propagate.
In this section, we use the [message routing](iot-hub-devguide-messages-d2c.md)
## Configure file upload with managed identities
-IoT Hub's [file upload](iot-hub-devguide-file-upload.md) feature allows devices to upload files to a customer-owned storage account. To allow the file upload to function, IoT Hub needs to have connectivity to the storage account. Similar to message routing, you can pick the preferred authentication type and managed identity for IoT Hub egress connectivity to your Azure Storage account.
+IoT Hub's [file upload](iot-hub-devguide-file-upload.md) feature allows devices to upload files to a customer-owned storage account. To allow the file upload to function, IoT Hub needs to have connectivity to the storage account. Similar to message routing, you can pick the preferred authentication type and managed identity for IoT Hub egress connectivity to your Azure Storage account.
+
+1. In the Azure portal, navigate to your storage account.
+
+1. Select **Access control (IAM)**.
+
+1. Select **Add > Add role assignment**.
+
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot showing Access control (IAM) page with Add role assignment menu open.":::
+
+1. On the **Role** tab, select **Storage Blob Data Contributor**. (Don't select **Contributor** or **Storage Account Contributor**.)
+
+1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
+
+1. For user-assigned managed identities, select your subscription, select **User-assigned managed identity**, and then select your user-assigned managed identity.
-1. In the Azure portal, navigate to your storage account's **Access control (IAM)** tab and click **Add** under the **Add a role assignment** section.
-2. Select **Storage Blob Data Contributor** (not Contributor or Storage Account Contributor) as role.
-3. For user-assigned, choose **User-assigned managed identity** under Assign access to. Select your subscription and your user-assigned managed identity in the drop-down list. Click the **Save** button.
-4. For system-assigned, under **Assign access to** choose **User, group, or service principal** and select your IoT hub's resource name in the drop-down list. Click **Save**.
+1. For system-assigned managed identities, select your subscription, select **All system-assigned managed identities**, and then select your IoT Hub's resource name.
+
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+
+ For more information about role assignments, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
If you need to restrict the connectivity to your storage account through a VNet, you need to turn on the trusted Microsoft first party exception, to give your IoT hub access to the storage account. On your storage account resource page, navigate to the **Firewalls and virtual networks** tab and enable **Allow access from selected networks** option. Under the **Exceptions** list, check the box for **Allow trusted Microsoft services to access this storage account**. Click the **Save** button. Learn more about [IoT Hub support for virtual networks](./virtual-network-support.md).
IoT Hub's [file upload](iot-hub-devguide-file-upload.md) feature allows devices
## Configure bulk device import/export with managed identities
-IoT Hub supports the functionality to [import/export devices](iot-hub-bulk-identity-mgmt.md)' information in bulk from/to a customer-provided storage blob. This functionality requires connectivity from IoT Hub to the storage account.
+IoT Hub supports [importing and exporting device](iot-hub-bulk-identity-mgmt.md) information in bulk from/to a customer-provided storage blob. This functionality requires connectivity from IoT Hub to the storage account.
+
+1. In the Azure portal, navigate to your storage account.
+
+1. Select **Access control (IAM)**.
+
+1. Select **Add > Add role assignment**.
+
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot showing Access control (IAM) page with Add role assignment menu open.":::
+
+1. On the **Role** tab, select **Storage Blob Data Contributor**. (Don't select **Contributor** or **Storage Account Contributor**.)
+
+1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
+
+1. For user-assigned managed identities, select your subscription, select **User-assigned managed identity**, and then select your user-assigned managed identity.
+
+1. For system-assigned managed identities, select your subscription, select **All system-assigned managed identities**, and then select your IoT Hub's resource name.
+
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
-1. In the Azure portal, navigate to your storage account's **Access control (IAM)** tab and click **Add** under the **Add a role assignment** section.
-2. Select **Storage Blob Data Contributor** (not Contributor or Storage Account Contributor) as role.
-3. For user-assigned, choose **User-assigned managed identity** under Assign access to. Select your subscription and your user-assigned managed identity in the drop-down list. Click the **Save** button.
-4. For system-assigned, under **Assign access to** choose **User, group, or service principal** and select your IoT hub's resource name in the drop-down list. Click **Save**.
+ For more information about role assignments, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
### Using REST API or SDK for import and export jobs
iot-hub Iot Hub Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-mqtt-support.md
In order to ensure a client/IoT Hub connection stays alive, both the service and
|Java | 230 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-java/blob/main/device/iot-device-client/src/main/java/com/microsoft/azure/sdk/iot/device/ClientOptions.java#L64) | |C | 240 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/Iothub_sdk_options.md#mqtt-transport) | |C# | 300 seconds* | [Yes](/dotnet/api/microsoft.azure.devices.client.transport.mqtt.mqtttransportsettings.keepaliveinseconds) |
-|Python | 60 seconds | No |
+|Python | 60 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-python/blob/main/azure-iot-device/azure/iot/device/iothub/abstract_clients.py#L339) |
> *The C# SDK defines the default value of the MQTT KeepAliveInSeconds property as 300 seconds but in reality the SDK sends a ping request 4 times per keep-alive duration set. This means the SDK sends a keep-alive ping every 75 seconds.
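For example, here's a minimal sketch of overriding the default keep-alive in the Python SDK v2 (the connection string is a placeholder):

```python
from azure.iot.device import IoTHubDeviceClient

# keep_alive is in seconds; 120 overrides the SDK's 60-second default
client = IoTHubDeviceClient.create_from_connection_string(
    "<device-connection-string>", keep_alive=120
)
```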
iot-hub Monitor Iot Hub Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/monitor-iot-hub-reference.md
For metrics with a **Unit** value of **Count**, only total (sum) aggregation is
|Metric Display Name|Metric|Unit|Aggregation Type|Description|Dimensions| |||||||
-| Routing Delivery Attempts (preview) |RoutingDeliveries | Count | Total |This is the routing delivery metric. Use the dimensions to identify the delivery status for a specific endpoint or for a specific routing source.| Result,<br>RoutingSource,<br>EndpointType,<br>FailureReasonCategory,<br>EndpointName<br>*For more information, see [Metric dimensions](#metric-dimensions)*. |
+| Routing Deliveries (preview) |RoutingDeliveries | Count | Total |This is the routing delivery metric. Use the dimensions to identify the delivery status for a specific endpoint or for a specific routing source.| Result,<br>RoutingSource,<br>EndpointType,<br>FailureReasonCategory,<br>EndpointName<br>*For more information, see [Metric dimensions](#metric-dimensions)*. |
| Routing Delivery Data Size In Bytes (preview)|RoutingDataSizeInBytesDelivered| Bytes | Total |The total number of bytes routed by IoT Hub to custom endpoint and built-in endpoint. Use the dimensions to identify data size routed to a specific endpoint or for a specific routing source.| RoutingSource,<br>EndpointType<br>EndpointName<br>*For more information, see [Metric dimensions](#metric-dimensions)*.| | Routing Latency (preview) |RoutingDeliveryLatency| Milliseconds | Average |This is the routing delivery latency metric. Use the dimensions to identify the latency for a specific endpoint or for a specific routing source.| RoutingSource,<br>EndpointType,<br>EndpointName<br>*For more information, see [Metric dimensions](#metric-dimensions)*.| |Routing: blobs delivered to storage|d2c.endpoints.egress.storage.blobs|Count|Total|The number of times IoT Hub routing delivered blobs to storage endpoints.|None|
iot-hub Troubleshoot Message Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/troubleshoot-message-routing.md
To troubleshoot this issue, analyze the following.
#### The routing metrics for this endpoint
-All the [IoT Hub metrics related to routing](monitor-iot-hub-reference.md#routing-metrics) are prefixed with *Routing*. You can combine information from multiple metrics to identify root cause for issues. For example, use metric **Routing Delivery Attempts** to identify the number of messages that were delivered to an endpoint or dropped when they didn't match queries on any of the routes and fallback route was disabled. Check the **Routing Latency** metric to observe whether latency for message delivery is steady or increasing. A growing latency can indicate a problem with a specific endpoint and we recommend checking [the health of the endpoint](#the-health-of-the-endpoint). These routing metrics also have [dimensions](monitor-iot-hub-reference.md#metric-dimensions) that provide details on the metric like the endpoint type, specific endpoint name and a reason why the message was not delivered.
+All the [IoT Hub metrics related to routing](monitor-iot-hub-reference.md#routing-metrics) are prefixed with *Routing*. You can combine information from multiple metrics to identify the root cause of issues. For example, use the **Routing Deliveries** metric to identify the number of messages that were delivered to an endpoint or dropped because they didn't match the queries on any of the routes and the fallback route was disabled. Check the **Routing Latency** metric to observe whether latency for message delivery is steady or increasing. Growing latency can indicate a problem with a specific endpoint, and we recommend checking [the health of the endpoint](#the-health-of-the-endpoint). These routing metrics also have [dimensions](monitor-iot-hub-reference.md#metric-dimensions) that provide details, like the endpoint type, the specific endpoint name, and the reason why a message wasn't delivered.
#### The resource logs for any operational issues
lighthouse Remove Delegation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/remove-delegation.md
Get-AzManagedServicesAssignment -Scope "/subscriptions/{delegatedSubscriptionId}
# Delete the registration assignment
-Remove-AzManagedServicesAssignment -ResourceId "/subscriptions/{delegatedSubscriptionId}/providers/Microsoft.ManagedServices/registrationAssignments/{assignmentGuid}"
+Remove-AzManagedServicesAssignment -Name "<Assignmentname>" -Scope "/subscriptions/{delegatedSubscriptionId}"
``` ### Azure CLI
logic-apps Workflow Definition Language Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/workflow-definition-language-functions-reference.md
To work with strings, you can use these string functions and also some [collecti
| [length](../logic-apps/workflow-definition-language-functions-reference.md#length) | Return the number of items in a string or array. | | [nthIndexOf](../logic-apps/workflow-definition-language-functions-reference.md#nthIndexOf) | Return the starting position or index value where the *n*th occurrence of a substring appears in a string. | | [replace](../logic-apps/workflow-definition-language-functions-reference.md#replace) | Replace a substring with the specified string, and return the updated string. |
-| [slice](../logic-apps/workflow-definition-language-functions-reference.md#slice) | Return a substring by specifying the starting and ending position or value. |
+| [slice](../logic-apps/workflow-definition-language-functions-reference.md#slice) | Return a substring by specifying the starting and ending position or value. See also [substring](../logic-apps/workflow-definition-language-functions-reference.md#substring). |
| [split](../logic-apps/workflow-definition-language-functions-reference.md#split) | Return an array that contains substrings, separated by commas, from a larger string based on a specified delimiter character in the original string. | | [startsWith](../logic-apps/workflow-definition-language-functions-reference.md#startswith) | Check whether a string starts with a specific substring. |
-| [substring](../logic-apps/workflow-definition-language-functions-reference.md#substring) | Return characters from a string, starting from the specified position. |
+| [substring](../logic-apps/workflow-definition-language-functions-reference.md#substring) | Return characters from a string, starting from the specified position. See also [slice](../logic-apps/workflow-definition-language-functions-reference.md#slice). |
| [toLower](../logic-apps/workflow-definition-language-functions-reference.md#toLower) | Return a string in lowercase format. | | [toUpper](../logic-apps/workflow-definition-language-functions-reference.md#toUpper) | Return a string in uppercase format. | | [trim](../logic-apps/workflow-definition-language-functions-reference.md#trim) | Remove leading and trailing whitespace from a string, and return the updated string. |
And returns this array with the remaining items: `[1,2,3]`
### slice Return a substring by specifying the starting and ending position or value.
+See also [substring()](#substring).
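+
+For a quick contrast (illustrative values), both of the following return `"world"`; `slice` takes an ending *position* while `substring` takes a *length*:
+
+```
+slice('hello world', 6, 11)
+substring('hello world', 6, 5)
+```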
``` slice('<text>', <startIndex>, <endIndex>?)
And returns this result: `10`
### substring Return characters from a string, starting from the specified position, or index. Index values start with the number 0.
+See also [slice()](#slice).
``` substring('<text>', <startIndex>, <length>)
machine-learning Dsvm Ubuntu Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro.md
Title: 'Quickstart: Create an Ubuntu Data Science Virtual Machine'
description: Configure and create a Data Science Virtual Machine for Linux (Ubuntu) to do analytics and machine learning. --++ Last updated 03/10/2020
# Quickstart: Set up the Data Science Virtual Machine for Linux (Ubuntu)
-Get up and running with the Ubuntu 18.04 Data Science Virtual Machine.
+Get up and running with the Ubuntu 18.04 and Ubuntu 20.04 Data Science Virtual Machines.
## Prerequisites
-To create an Ubuntu 18.04 Data Science Virtual Machine, you must have an Azure subscription. [Try Azure for free](https://azure.com/free).
+To create an Ubuntu 18.04 or Ubuntu 20.04 Data Science Virtual Machine, you must have an Azure subscription. [Try Azure for free](https://azure.com/free).
>[!NOTE] >Azure free accounts don't support GPU enabled virtual machine SKUs. ## Create your Data Science Virtual Machine for Linux
-Here are the steps to create an instance of the Data Science Virtual Machine Ubuntu 18.04:
+Here are the steps to create an instance of the Data Science Virtual Machine for Ubuntu 18.04 or Ubuntu 20.04:
1. Go to the [Azure portal](https://portal.azure.com). You might be prompted to sign in to your Azure account if you're not already signed in.
-1. Find the virtual machine listing by typing in "data science virtual machine" and selecting "Data Science Virtual Machine- Ubuntu 18.04"
+1. Find the virtual machine listing by typing in "data science virtual machine" and selecting "Data Science Virtual Machine - Ubuntu 18.04" or "Data Science Virtual Machine - Ubuntu 20.04".
1. On the next window, select **Create**.
machine-learning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/overview.md
keywords: data science tools, data science virtual machine, tools for data scien
--++ Last updated 04/02/2020
The DSVM is available on:
+ Windows Server 2019 + Ubuntu 18.04 LTS++ Ubuntu 20.04 LTS ## Comparison with Azure Machine Learning
The key differences between these two product offerings are detailed below:
|Built-in Collaboration | No | Yes | |Pre-installed Tools | Jupyter(lab), RStudio Server, VSCode,<br> Visual Studio, PyCharm, Juno,<br>Power BI Desktop, SSMS, <br>Microsoft Office 365, Apache Drill | Jupyter(lab)<br> RStudio Server |
-## Sample Use Cases
+## Sample use cases
Below we illustrate some common use cases for DSVM customers.
Learn more with these articles:
+ Linux: + [Set up a Linux DSVM (Ubuntu)](dsvm-ubuntu-intro.md)
- + [Data science on a Linux DSVM](linux-dsvm-walkthrough.md)
+ + [Data science on a Linux DSVM](linux-dsvm-walkthrough.md)
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
Title: What's new on the Data Science Virtual Machine description: Release notes for the Azure Data Science Virtual Machine-+ -+ Last updated 12/14/2021
Azure portal users will always find the latest image available for provisioning
See the [list of known issues](reference-known-issues.md) to learn about known bugs and workarounds.
+## April 14, 2022
+New DSVM offering for [Data Science VM – Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview) is currently live in the marketplace.
+
+Version: 22.04.05
+ ## April 04, 2022
-New Image for [Data Science VM ΓÇô Ubuntu 18.04](https://azuremarketplace.microsoft.com/en-US/marketplace/apps/microsoft-dsvm.ubuntu-1804?tab=overview)
+New Image for [Data Science VM – Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804?tab=overview)
Version: 22.04.01
Version: 21.12.03
Windows 2019 DSVM will now be supported under publisher: microsoft-dsvm, offer ID: dsvm-win-2019, plan ID/SKU ID: winserver-2019
-Users using Azure Resource Manager (ARM) template / virtual machine scale set (VMSS) to deploy the Windows DSVM machines, should configure the SKU with `winserver-2019` instead of `server-2019`, since we'll continue to ship updates to Windows DSVM images on the new SKU from March, 2022.
+Users who deploy the Windows DSVM machines by using an Azure Resource Manager (ARM) template or a virtual machine scale set should configure the SKU as `winserver-2019` instead of `server-2019`, because we'll continue to ship updates to Windows DSVM images on the new SKU from March 2022.
## December 3, 2021
machine-learning Tools Included https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/tools-included.md
The Data Science Virtual Machine comes with the most useful data-science tools p
## Build deep learning and machine learning solutions
-| Tool | Windows Server 2019 DSVM | Ubuntu 18.04 DSVM | Usage notes |
-|--|:-:|:-:|:-:|
-| [CUDA, cuDNN, NVIDIA Driver](https://developer.nvidia.com/cuda-toolkit) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | [CUDA, cuDNN, NVIDIA Driver on the DSVM](./dsvm-tools-deep-learning-frameworks.md#cuda-cudnn-nvidia-driver) |
-| [Horovod](https://github.com/horovod/horovod) | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span></br> | [Horovod on the DSVM](./dsvm-tools-deep-learning-frameworks.md#horovod) |
-| [NVidia System Management Interface (nvidia-smi)](https://developer.nvidia.com/nvidia-system-management-interface) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | [nvidia-smi on the DSVM](./dsvm-tools-deep-learning-frameworks.md#nvidia-system-management-interface-nvidia-smi) |
-| [PyTorch](https://pytorch.org) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | [PyTorch on the DSVM](./dsvm-tools-deep-learning-frameworks.md#pytorch) |
-| [TensorFlow](https://www.tensorflow.org) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | [TensorFlow on the DSVM](./dsvm-tools-deep-learning-frameworks.md#tensorflow) |
-| Integration with [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) (Python) | <span class='green-check'>&#9989;</span></br> (Python SDK, samples) | <span class='green-check'>&#9989;</span></br> (Python SDK,CLI, samples) | [Azure ML SDK](./dsvm-tools-data-science.md#azure-machine-learning-sdk-for-python) |
-| [XGBoost](https://github.com/dmlc/xgboost) | <span class='green-check'>&#9989;</span></br> (CUDA support) | <span class='green-check'>&#9989;</span></br> (CUDA support) | [XGBoost on the DSVM](./dsvm-tools-data-science.md#xgboost) |
-| [Vowpal Wabbit](https://github.com/JohnLangford/vowpal_wabbit) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span></br> | [Vowpal Wabbit on the DSVM](./dsvm-tools-data-science.md#vowpal-wabbit) |
-| [Weka](https://www.cs.waikato.ac.nz/ml/weka/) | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | |
-| LightGBM | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span></br> (GPU, MPI support) | |
-| H2O | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
-| CatBoost | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
-| Intel MKL | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
-| OpenCV | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
-| Dlib | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
-| Docker | <span class='green-check'>&#9989;</span> <br/> (Windows containers only) | <span class='green-check'>&#9989;</span> | |
-| Nccl | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
-| Rattle | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | |
-| ONNX Runtime | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
+| Tool | Windows Server 2019 DSVM | Ubuntu 18.04 DSVM | Ubuntu 20.04 DSVM | Usage notes |
+|--|:-:|:-:|:-:|:-:|
+| [CUDA, cuDNN, NVIDIA Driver](https://developer.nvidia.com/cuda-toolkit) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | [CUDA, cuDNN, NVIDIA Driver on the DSVM](./dsvm-tools-deep-learning-frameworks.md#cuda-cudnn-nvidia-driver) |
+| [Horovod](https://github.com/horovod/horovod) | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span></br> | [Horovod on the DSVM](./dsvm-tools-deep-learning-frameworks.md#horovod) |
+| [NVidia System Management Interface (nvidia-smi)](https://developer.nvidia.com/nvidia-system-management-interface) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | [nvidia-smi on the DSVM](./dsvm-tools-deep-learning-frameworks.md#nvidia-system-management-interface-nvidia-smi) |
+| [PyTorch](https://pytorch.org) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | [PyTorch on the DSVM](./dsvm-tools-deep-learning-frameworks.md#pytorch) |
+| [TensorFlow](https://www.tensorflow.org) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span></br> | [TensorFlow on the DSVM](./dsvm-tools-deep-learning-frameworks.md#tensorflow) |
+| Integration with [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) (Python) | <span class='green-check'>&#9989;</span></br> (Python SDK, samples) | <span class='green-check'>&#9989;</span></br> (Python SDK,CLI, samples) | <span class='green-check'>&#9989;</span></br> (Python SDK,CLI, samples) | [Azure ML SDK](./dsvm-tools-data-science.md#azure-machine-learning-sdk-for-python) |
+| [XGBoost](https://github.com/dmlc/xgboost) | <span class='green-check'>&#9989;</span></br> (CUDA support) | <span class='green-check'>&#9989;</span></br> (CUDA support) | <span class='green-check'>&#9989;</span></br> (CUDA support) | [XGBoost on the DSVM](./dsvm-tools-data-science.md#xgboost) |
+| [Vowpal Wabbit](https://github.com/JohnLangford/vowpal_wabbit) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | [Vowpal Wabbit on the DSVM](./dsvm-tools-data-science.md#vowpal-wabbit) |
+| [Weka](https://www.cs.waikato.ac.nz/ml/weka/) | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | |
+| LightGBM | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span></br> (GPU, MPI support) | <span class='green-check'>&#9989;</span></br> (GPU, MPI support) | |
+| H2O | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
+| CatBoost | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
+| Intel MKL | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
+| OpenCV | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
+| Dlib | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
+| Docker | <span class='green-check'>&#9989;</span> <br/> (Windows containers only) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
+| Nccl | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
+| Rattle | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | |
+| ONNX Runtime | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
## Store, retrieve, and manipulate data
-| Tool | Windows Server 2019 DSVM | Ubuntu 18.04 DSVM | Usage notes |
-|--|-:|:-:|:-:|
-| Relational databases | [SQL Server 2019](https://www.microsoft.com/sql-server/sql-server-2019) <br/> Developer Edition | [SQL Server 2019](https://www.microsoft.com/sql-server/sql-server-2019) <br/> Developer Edition | [SQL Server on the DSVM](./dsvm-tools-data-platforms.md#sql-server-developer-edition) |
-| Database tools | SQL Server Management Studio<br/> SQL Server Integration Services<br/> [bcp, sqlcmd](/sql/tools/command-prompt-utility-reference-database-engine) | [SQuirreL SQL](http://squirrel-sql.sourceforge.net/) (querying tool), <br /> bcp, sqlcmd <br /> ODBC/JDBC drivers | |
-| [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | |
-| [Azure CLI](/cli/azure) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | |
-| [AzCopy](../../storage/common/storage-use-azcopy-v10.md) | <span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span> | [AzCopy on the DSVM](./dsvm-tools-ingestion.md#azcopy) |
-| [Blob FUSE driver](https://github.com/Azure/azure-storage-fuse) | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span></br> | [blobfuse on the DSVM](./dsvm-tools-ingestion.md#blobfuse) |
-| [Azure Cosmos DB Data Migration Tool](../../cosmos-db/import-data.md) | <span class='green-check'>&#9989;</span> | <span class='red-x'>&#10060;</span> | [Cosmos DB on the DSVM](./dsvm-tools-ingestion.md#azure-cosmos-db-data-migration-tool) |
-| Unix/Linux command-line tools | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
-| Apache Spark 3.1 (standalone) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span></br> | |
+| Tool | Windows Server 2019 DSVM | Ubuntu 18.04 DSVM | Ubuntu 20.04 DSVM | Usage notes |
+|--|-:|:-:|:-:|:-:|
+| Relational databases | [SQL Server 2019](https://www.microsoft.com/sql-server/sql-server-2019) <br/> Developer Edition | [SQL Server 2019](https://www.microsoft.com/sql-server/sql-server-2019) <br/> Developer Edition | [SQL Server 2019](https://www.microsoft.com/sql-server/sql-server-2019) <br/> Developer Edition | [SQL Server on the DSVM](./dsvm-tools-data-platforms.md#sql-server-developer-edition) |
+| Database tools | SQL Server Management Studio<br/> SQL Server Integration Services<br/> [bcp, sqlcmd](/sql/tools/command-prompt-utility-reference-database-engine) | [SQuirreL SQL](http://squirrel-sql.sourceforge.net/) (querying tool), <br /> bcp, sqlcmd <br /> ODBC/JDBC drivers | [SQuirreL SQL](http://squirrel-sql.sourceforge.net/) (querying tool), <br /> bcp, sqlcmd <br /> ODBC/JDBC drivers | |
+| [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | |
+| [Azure CLI](/cli/azure) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | |
+| [AzCopy](../../storage/common/storage-use-azcopy-v10.md) | <span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | [AzCopy on the DSVM](./dsvm-tools-ingestion.md#azcopy) |
+| [Blob FUSE driver](https://github.com/Azure/azure-storage-fuse) | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span></br> | <span class='red-x'>&#10060;</span></br> | [blobfuse on the DSVM](./dsvm-tools-ingestion.md#blobfuse) |
+| [Azure Cosmos DB Data Migration Tool](../../cosmos-db/import-data.md) | <span class='green-check'>&#9989;</span> | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | [Cosmos DB on the DSVM](./dsvm-tools-ingestion.md#azure-cosmos-db-data-migration-tool) |
+| Unix/Linux command-line tools | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
+| Apache Spark 3.1 (standalone) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | |
## Program in Python, R, Julia, and Node.js
-| Tool | Windows Server 2019 DSVM | Ubuntu 18.04 DSVM | Usage notes |
-|--|:-:|:-:|:-:|
-| [CRAN-R](https://cran.r-project.org/) with popular packages pre-installed | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
-| [Anaconda Python](https://www.continuum.io/) with popular packages pre-installed | <span class='green-check'>&#9989;</span><br/> (Miniconda) | <span class='green-check'>&#9989;</span></br> (Miniconda) | |
-| [Julia (Julialang)](https://julialang.org/) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
-| JupyterHub (multiuser notebook server) | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
-| JupyterLab (multiuser notebook server) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
-| Node.js | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
-| [Jupyter Notebook Server](https://jupyter.org/) with the following kernels: | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span> | [Jupyter Notebook samples](./dsvm-samples-and-walkthroughs.md) |
-| &nbsp;&nbsp;&nbsp;&nbsp; R | | | [R Jupyter Samples](./dsvm-samples-and-walkthroughs.md#r-language) |
-| &nbsp;&nbsp;&nbsp;&nbsp; Python | | | [Python Jupyter Samples](./dsvm-samples-and-walkthroughs.md#python-language) |
-| &nbsp;&nbsp;&nbsp;&nbsp; Julia | | | [Julia Jupyter Samples](./dsvm-samples-and-walkthroughs.md#julia-language) |
-| &nbsp;&nbsp;&nbsp;&nbsp; PySpark | | | [pySpark Jupyter Samples](./dsvm-samples-and-walkthroughs.md#sparkml) |
-
-**Ubuntu 18.04 DSVM and Windows Server 2019 DSVM** has the following Jupyter Kernels:</br>
-* Python3.8-default</br>
-* Python3.8-Tensorflow-Pytorch</br>
-* Python3.8-AzureML</br>
-* R</br>
-* Python 3.7 - Spark (local)</br>
-* Julia 1.6.0</br>
-* R Spark - HDInsight</br>
-* Scala Spark - HDInsight</br>
-* Python 3 Spark - HDInsight</br>
-
-**Ubuntu 18.04 DSVM and Windows Server 2019 DSVM** has the following conda environments:</br>
+| Tool | Windows Server 2019 DSVM | Ubuntu 18.04 DSVM | Ubuntu 20.04 DSVM | Usage notes |
+|--|:-:|:-:|:-:|:-:|
+| [CRAN-R](https://cran.r-project.org/) with popular packages pre-installed | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
+| [Anaconda Python](https://www.continuum.io/) with popular packages pre-installed | <span class='green-check'>&#9989;</span><br/> (Miniconda) | <span class='green-check'>&#9989;</span></br> (Miniconda) | <span class='green-check'>&#9989;</span></br> (Miniconda) | |
+| [Julia (Julialang)](https://julialang.org/) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
+| JupyterHub (multiuser notebook server) | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
+| JupyterLab (multiuser notebook server) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
+| Node.js | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
+| [Jupyter Notebook Server](https://jupyter.org/) with the following kernels: | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | [Jupyter Notebook samples](./dsvm-samples-and-walkthroughs.md) |
+| &nbsp;&nbsp;&nbsp;&nbsp; R | | | | [R Jupyter Samples](./dsvm-samples-and-walkthroughs.md#r-language) |
+| &nbsp;&nbsp;&nbsp;&nbsp; Python | | | | [Python Jupyter Samples](./dsvm-samples-and-walkthroughs.md#python-language) |
+| &nbsp;&nbsp;&nbsp;&nbsp; Julia | | | | [Julia Jupyter Samples](./dsvm-samples-and-walkthroughs.md#julia-language) |
+| &nbsp;&nbsp;&nbsp;&nbsp; PySpark | | | | [pySpark Jupyter Samples](./dsvm-samples-and-walkthroughs.md#sparkml) |
+
+**Ubuntu 18.04 DSVM, Ubuntu 20.04 DSVM, and Windows Server 2019 DSVM** have the following Jupyter kernels:</br>
+* Python3.8-default</br>
+* Python3.8-Tensorflow-Pytorch</br>
+* Python3.8-AzureML</br>
+* R</br>
+* Python 3.7 - Spark (local)</br>
+* Julia 1.6.0</br>
+* R Spark - HDInsight</br>
+* Scala Spark - HDInsight</br>
+* Python 3 Spark - HDInsight</br>
+
+**Ubuntu 18.04 DSVM, Ubuntu 20.04 DSVM, and Windows Server 2019 DSVM** have the following conda environments:</br>
+* Python3.8-default</br>
+* Python3.8-Tensorflow-Pytorch</br>
+* Python3.8-AzureML</br>

## Use your preferred editor or IDE
-| Tool | Windows Server 2019 DSVM | Ubuntu 18.04 DSVM | Usage notes |
-|--|:-:|:-:|:-:|
-| [Notepad++](https://notepad-plus-plus.org/) | <span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span></br> | |
-| [Nano](https://www.nano-editor.org/) | <span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span></br> | |
-| [Visual Studio 2019 Community Edition](https://www.visualstudio.com/community/) | <span class='green-check'>&#9989;</span> | <span class='red-x'>&#10060;</span> | [Visual Studio on the DSVM](dsvm-tools-development.md#visual-studio-community-edition) |
-| [Visual Studio Code](https://code.visualstudio.com/) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | [Visual Studio Code on the DSVM](./dsvm-tools-development.md#visual-studio-code) |
-| [RStudio Desktop](https://www.rstudio.com/products/rstudio/#Desktop) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | [RStudio Desktop on the DSVM](./dsvm-tools-development.md#rstudio-desktop) |
-| [RStudio Server](https://www.rstudio.com/products/rstudio/#Server) <br/> (disabled by default) | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
-| [PyCharm Community Edition](https://www.jetbrains.com/pycharm/) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | [PyCharm on the DSVM](./dsvm-tools-development.md#pycharm) |
-| [IntelliJ IDEA](https://www.jetbrains.com/idea/) | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
-| [Vim](https://www.vim.org) | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span></br> | |
-| [Emacs](https://www.gnu.org/software/emacs) | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span></br> | |
-| [Git](https://git-scm.com/) and Git Bash | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | |
-| [OpenJDK](https://openjdk.java.net) 11 | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | |
-| .NET Framework | <span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span> | |
-| Azure SDK | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
+| Tool | Windows Server 2019 DSVM | Ubuntu 18.04 DSVM | Ubuntu 20.04 DSVM | Usage notes |
+|--|:-:|:-:|:-:|:-:|
+| [Notepad++](https://notepad-plus-plus.org/) | <span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span></br> | <span class='red-x'>&#10060;</span></br> | |
+| [Nano](https://www.nano-editor.org/) | <span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span></br> | <span class='red-x'>&#10060;</span></br> | |
+| [Visual Studio 2019 Community Edition](https://www.visualstudio.com/community/) | <span class='green-check'>&#9989;</span> | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | [Visual Studio on the DSVM](dsvm-tools-development.md#visual-studio-community-edition) |
+| [Visual Studio Code](https://code.visualstudio.com/) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | [Visual Studio Code on the DSVM](./dsvm-tools-development.md#visual-studio-code) |
+| [RStudio Desktop](https://www.rstudio.com/products/rstudio/#Desktop) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | [RStudio Desktop on the DSVM](./dsvm-tools-development.md#rstudio-desktop) |
+| [RStudio Server](https://www.rstudio.com/products/rstudio/#Server) <br/> (disabled by default) | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
+| [PyCharm Community Edition](https://www.jetbrains.com/pycharm/) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | [PyCharm on the DSVM](./dsvm-tools-development.md#pycharm) |
+| [IntelliJ IDEA](https://www.jetbrains.com/idea/) | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
+| [Vim](https://www.vim.org) | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | |
+| [Emacs](https://www.gnu.org/software/emacs) | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | |
+| [Git](https://git-scm.com/) and Git Bash | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | |
+| [OpenJDK](https://openjdk.java.net) 11 | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | |
+| .NET Framework | <span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | |
+| Azure SDK | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
## Organize & present results
-| Tool | Windows Server 2019 DSVM | Ubuntu 18.04 DSVM | Usage notes |
-|--|:-:|:-:|:-:|
-| [Microsoft 365](https://www.microsoft.com/microsoft-365) (Word, Excel, PowerPoint) | <span class='green-check'>&#9989;</span> | <span class='red-x'>&#10060;</span> | |
-| [Microsoft Teams](https://www.microsoft.com/microsoft-teams) | <span class='green-check'>&#9989;</span> | <span class='red-x'>&#10060;</span> | |
-| [Power BI Desktop](https://powerbi.microsoft.com/) | <span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span> | |
-| Microsoft Edge Browser | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
+| Tool | Windows Server 2019 DSVM | Ubuntu 18.04 DSVM | Ubuntu 20.04 DSVM | Usage notes |
+|--|:-:|:-:|:-:|:-:|
+| [Microsoft 365](https://www.microsoft.com/microsoft-365) (Word, Excel, PowerPoint) | <span class='green-check'>&#9989;</span> | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | |
+| [Microsoft Teams](https://www.microsoft.com/microsoft-teams) | <span class='green-check'>&#9989;</span> | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | |
+| [Power BI Desktop](https://powerbi.microsoft.com/) | <span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | |
+| Microsoft Edge Browser | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
machine-learning How To Add Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-add-users.md
To add a custom role, you must have `Microsoft.Authorization/roleAssignments/wri
1. Replace these two lines with:

```json
- "actions": [
- "Microsoft.MachineLearningServices/workspaces/read",
- "Microsoft.MachineLearningServices/workspaces/labeling/projects/read",
- "Microsoft.MachineLearningServices/workspaces/labeling/labels/write"],
- "notActions": [
- "Microsoft.MachineLearningServices/workspaces/labeling/projects/summary/read"],
+ "actions": [
+ "Microsoft.MachineLearningServices/workspaces/read",
+ "Microsoft.MachineLearningServices/workspaces/labeling/projects/read",
+ "Microsoft.MachineLearningServices/workspaces/labeling/projects/summary/read",
+ "Microsoft.MachineLearningServices/workspaces/labeling/labels/read",
+ "Microsoft.MachineLearningServices/workspaces/labeling/labels/write"
+ ],
+ "notActions": [
+ ],
```

1. Select **Save** at the top of the edit box to save your changes.
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md
Before training, automated ML applies data validation checks on the input data t
Task | Data validation check |
-All tasks | At least 50 training samples are required
+All tasks | - Both training and validation sets must be provided <br> - At least 50 training samples are required
Multi-class and Multi-label | The training data and validation data must have <br> - The same set of columns <br>- The same order of columns from left to right <br>- The same data type for columns with the same name <br>- At least two unique labels <br> - Unique column names within each dataset (For example, the training set can't have multiple columns named **Age**)
Multi-class only | None
Multi-label only | - The label column format must be in [accepted format](#multi-label) <br> - At least one sample should have 0 or 2+ labels, otherwise it should be a `multiclass` task <br> - All labels should be in `str` or `int` format, with no overlapping. You should not have both label `1` and label `'1'`
marketplace Azure Resource Manager Test Drive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-resource-manager-test-drive.md
Last updated 12/06/2021-+ # Azure Resource Manager test drive
restrictions in [this article](/azure/cloud-adoption-framework/ready/azure-best-
### Deployment Location
-You can make you test drive available in different Azure regions.
+You can make your test drive available in different Azure regions.
When the test drive creates an instance of the lab, it always creates a resource group in one of the selected regions and then executes your deployment template in the context of that group. Your template should therefore pick the deployment location from the resource group:
The final section to complete is to be able to deploy the test drives automatica
1. From the Azure portal:
- 1. Select the **Subscription** being used for the test drive.
- 1. Select **Access control (IAM)**.<br>
+ 1. Select the subscription being used for the test drive.
- ![Add a new Access Control (I A M) contributor](media/test-drive/access-control-principal.png)
+ 1. Select **Access control (IAM)**.
- 1. Select the **Role assignments** tab, then **+ Add role assignment**.
+ 1. Select **Add > Add role assignment**.
- ![On the Select Access Control (I A M) window, shows how to select the Role assignments tab, then + Add role assignment.](media/test-drive/access-control-principal-add-assignments.jpg)
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot showing Access control (IAM) page with Add role assignment menu open.":::
- 1. Enter this Azure AD application name: `Microsoft TestDrive`. Select the application to which you want to assign the **Contributor** role.
+ 1. On the **Role** tab, select **Contributor**.
- ![Hows how to assign the contributor role](media/test-drive/access-control-permissions.jpg)
+ 1. On the **Members** tab, select **User, group, or service principal**, and then choose **Select members**.
+
+ 1. Select the **Microsoft TestDrive** service principal that you created previously.
+
+ 1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+
+    For more information about role assignments, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
- 1. Select **Save**.
1. If using PowerShell:
    1. Run this to get the ServicePrincipal object-id: `(Get-AzADServicePrincipal -DisplayName 'Microsoft TestDrive').id`.
    1. Run this with the ObjectId and subscription ID: `New-AzRoleAssignment -ObjectId <objectId> -RoleDefinitionName Contributor -Scope /subscriptions/<subscriptionId>`.
marketplace Gtm Your Marketplace Benefits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/gtm-your-marketplace-benefits.md
description: Go-To-Market Services - Describes Microsoft resources that publishe
Previously updated : 04/14/2022 Last updated : 04/21/2022
marketplace Update Existing Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/update-existing-offer.md
Previously updated : 07/09/2021 Last updated : 04/21/2022
This article explains how to make updates to existing offers and plans, and also how to remove an offer from the commercial marketplace. You can view your offers in the [Commercial Marketplace portal](https://go.microsoft.com/fwlink/?linkid=2165935) in Partner Center.
+## Request access to manage an offer
+
+If you see an offer you need to update but don't have access, contact the publisher owner(s) associated with the offer. On the [**Marketplace offers**](https://partner.microsoft.com/dashboard/marketplace-offers/overview) page, the owner list for an inaccessible offer is available by selecting **Request access** in the **Status** column of the table. A publisher owner can grant you the _developer_ or _manager_ role for the offer by following the instructions to [add existing users](add-manage-users.md#add-existing-users) to their account.
+
+> [!NOTE]
+> Requesting access to an offer will give you access permissions to all the offers associated with the same publisher.
+
+## Update a published offer
+
+Use these steps to update an offer that's been successfully published to Preview or Live state.
mysql Howto Troubleshoot High Cpu Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-high-cpu-utilization.md
+
+ Title: Troubleshoot high CPU utilization in Azure Database for MySQL
+description: Learn how to troubleshoot high CPU utilization in Azure Database for MySQL.
++++ Last updated : 4/22/2022++
+# Troubleshoot high CPU utilization in Azure Database for MySQL
++
+Azure Database for MySQL provides a range of metrics that you can use to identify resource bottlenecks and performance issues on the server. To determine whether your server is experiencing high CPU utilization, monitor metrics such as "Host CPU percent", "Total Connections", "Host Memory Percent", and "IO Percent". At times, viewing a combination of these metrics will provide insights into what might be causing the increased CPU utilization on your Azure Database for MySQL server.
+
+For example, consider a sudden surge in connections that initiates a surge of database queries, causing CPU utilization to spike.
+
+Besides capturing metrics, it's also important to trace the workload to understand whether one or more queries are causing the spike in CPU utilization.
+
+## Capturing details of the current workload
+
+The SHOW [FULL] PROCESSLIST command displays a list of all user sessions currently connected to the Azure Database for MySQL server. It also provides details about the current state and activity of each session.
+This command only produces a snapshot of the current session status and doesn't provide information about historical session activity.
+
+Let's take a look at sample output from running this command.
+
+```
+mysql> SHOW FULL PROCESSLIST;
++-------+-----------------+-----------------+---------------+---------+------+-----------------------------+--------------------------------------------+
+| Id    | User            | Host            | db            | Command | Time | State                       | Info                                       |
++-------+-----------------+-----------------+---------------+---------+------+-----------------------------+--------------------------------------------+
+|     1 | event_scheduler | localhost       | NULL          | Daemon  |   13 | Waiting for next activation | NULL                                       |
+|     6 | azure_superuser | 127.0.0.1:33571 | NULL          | Sleep   |  115 |                             | NULL                                       |
+| 24835 | adminuser       | 10.1.1.4:39296  | classicmodels | Query   |    7 | Sending data                | select * from classicmodels.orderdetails;  |
+| 24837 | adminuser       | 10.1.1.4:38208  | NULL          | Query   |    0 | starting                    | SHOW FULL PROCESSLIST                      |
++-------+-----------------+-----------------+---------------+---------+------+-----------------------------+--------------------------------------------+
+5 rows in set (0.00 sec)
+```
+
+Notice that there are two sessions owned by the customer-created user "adminuser", both from the same IP address:
+
+* Session 24835 has been executing a SELECT statement for the last seven seconds.
+* Session 24837 is executing the "SHOW FULL PROCESSLIST" statement.
+
+When necessary, you may need to terminate a query, such as a reporting or HTAP query that has caused your production workload's CPU usage to spike. However, always consider the potential consequences before terminating a query in an attempt to reduce CPU utilization. If you identify long-running queries that lead to CPU spikes, tune these queries so that resources are used optimally.
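+
+For example, to end the long-running SELECT in session 24835 from the sample output above, you could use the KILL statement for your own sessions, or the mysql.az_kill helper procedure that Azure Database for MySQL provides (a sketch; the session ID comes from the earlier sample, and the helper procedure's availability depends on your server type):
+
+```sql
+-- End only the running statement; the connection stays open
+KILL QUERY 24835;
+
+-- On flexible servers, the mysql.az_kill helper procedure ends the
+-- entire connection (availability assumed; verify for your server type)
+CALL mysql.az_kill(24835);
+```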
+
+## Detailed current workload analysis
+
+You need to use at least two sources of information to obtain accurate information about the status of a session, transaction, and query:
+
+* The server's process list from the INFORMATION_SCHEMA.PROCESSLIST table, which you can also access by running the SHOW [FULL] PROCESSLIST command.
+* InnoDB's transaction metadata from the INFORMATION_SCHEMA.INNODB_TRX table.
+
+With information from only one of these sources, it's impossible to describe the connection and transaction state. For example, the process list doesn't inform you whether there's an open transaction associated with any of the sessions. On the other hand, the transaction metadata doesn't show session state and time spent in that state.
+
+An example query that combines process list information with some of the important pieces of InnoDB transaction metadata is shown below:
+
+```
+mysql> select p.id as session_id, p.user, p.host, p.db, p.command, p.time, p.state, substring(p.info, 1, 50) as info, t.trx_started, unix_timestamp(now()) - unix_timestamp(t.trx_started) as trx_age_seconds, t.trx_rows_modified, t.trx_isolation_level from information_schema.processlist p left join information_schema.innodb_trx t on p.id = t.trx_mysql_thread_id \G
+```
+
+An example of the output from this query is shown below:
+
+```
+*************************** 1. row ***************************
+ session_id: 11
+ user: adminuser
+ host: 172.31.19.159:53624
+ db: NULL
+ command: Sleep
+ time: 636
+ state: cleaned up
+ info: NULL
+ trx_started: 2019-08-01 15:25:07
+ trx_age_seconds: 2908
+ trx_rows_modified: 17825792
+trx_isolation_level: REPEATABLE READ
+*************************** 2. row ***************************
+ session_id: 12
+ user: adminuser
+ host: 172.31.19.159:53622
+ db: NULL
+ command: Query
+ time: 15
+ state: executing
+ info: select * from classicmodels.orders
+ trx_started: NULL
+ trx_age_seconds: NULL
+ trx_rows_modified: NULL
+trx_isolation_level: NULL
+```
+
+An analysis of this information, by session, is listed in the following table.
+
+| **Area** | **Analysis** |
+|-|-|
+| Session 11 | This session is currently idle (sleeping) with no queries running, and it has been for 636 seconds. Within the session, a transaction that's been open for 2908 seconds has modified 17,825,792 rows, and it uses REPEATABLE READ isolation. |
+| Session 12 | The session is currently executing a SELECT statement, which has been running for 15 seconds. There's no transaction open within the session, as indicated by the NULL values for trx_started and trx_age_seconds. The session will continue to hold the garbage collection boundary as long as it runs, unless it's using the more relaxed READ COMMITTED isolation. |
+
+Note that if a session is reported as idle, it's no longer executing any statements. At this point, the session has completed any prior work and is waiting for new statements from the client. However, idle sessions are still responsible for some CPU consumption and memory usage.
++
+## Understanding thread states
+
+Transactions that contribute to higher CPU utilization during execution can have threads in various states, as described in the following sections. Use this information to better understand the query lifecycle and various thread states.
+
+### Checking permissions/Opening tables
+
+This state usually means that the open table operation is consuming a long time. Usually, you can increase the table cache size to alleviate the issue. However, tables opening slowly can also be indicative of other issues, such as having too many tables under the same database.
+
+### Sending data
+
+While this state can mean that the thread is sending data through the network, it can also indicate that the query is reading data from the disk or memory. This state can be caused by a sequential table scan. Check the values of the innodb_buffer_pool_reads and innodb_buffer_pool_read_requests status variables to determine whether a large number of pages are being served from the disk into memory, as shown in the following example.
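+
+A minimal example (a sketch; both counters are global status variables):
+
+```sql
+-- A high ratio of Innodb_buffer_pool_reads (reads from disk) to
+-- Innodb_buffer_pool_read_requests (logical read requests) suggests
+-- the buffer pool is too small for the working set
+SHOW GLOBAL STATUS
+WHERE Variable_name IN ('Innodb_buffer_pool_reads', 'Innodb_buffer_pool_read_requests');
+```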
+
+### Updating
+
+This state usually means that the thread is performing a write operation. Check the IO-related metrics in the Performance Monitor to get a better understanding of what the current sessions are doing.
+
+### Waiting for <lock_type> lock
+
+This state indicates that the thread is waiting to acquire a lock. In most cases, it's a metadata lock. Review all other threads to see which one is holding the lock.
+
+## Understanding and analyzing wait events
+
+It's important to understand the underlying wait events in the MySQL engine, because long waits or a large number of waits in a database can lead to increased CPU utilization. The following shows the appropriate command and sample output.
+
+```
+SELECT event_name AS wait_event,
+count_star AS all_occurrences,
+Concat(Round(sum_timer_wait / 1000000000000, 2), ' s') AS total_wait_time,
+ Concat(Round(avg_timer_wait / 1000000000, 2), ' ms') AS
+avg_wait_time
+FROM performance_schema.events_waits_summary_global_by_event_name
+WHERE count_star > 0 AND event_name <> 'idle'
+ORDER BY sum_timer_wait DESC LIMIT 10;
++--------------------------------------+-----------------+-----------------+---------------+
+| wait_event                           | all_occurrences | total_wait_time | avg_wait_time |
++--------------------------------------+-----------------+-----------------+---------------+
+| wait/io/file/sql/binlog              |            7090 | 255.54 s        | 36.04 ms      |
+| wait/io/file/innodb/innodb_log_file  |           17798 | 55.43 s         | 3.11 ms       |
+| wait/io/file/innodb/innodb_data_file |          260227 | 39.67 s         | 0.15 ms       |
+| wait/io/table/sql/handler            |         5548985 | 11.73 s         | 0.00 ms       |
+| wait/io/file/sql/FRM                 |            1237 | 7.61 s          | 6.15 ms       |
+| wait/io/file/sql/dbopt               |              28 | 1.89 s          | 67.38 ms      |
+| wait/io/file/myisam/kfile            |              92 | 0.76 s          | 8.30 ms       |
+| wait/io/file/myisam/dfile            |             271 | 0.53 s          | 1.95 ms       |
+| wait/io/file/sql/file_parser         |              18 | 0.32 s          | 17.75 ms      |
+| wait/io/file/sql/slow_log            |               2 | 0.05 s          | 25.79 ms      |
++--------------------------------------+-----------------+-----------------+---------------+
+10 rows in set (0.00 sec)
+```
+
+## Restrict SELECT statement execution time
+
+If you don't control the execution cost and execution time of database operations involving SELECT queries, long-running SELECTs can make the database server unpredictable or volatile. The size of statements and transactions, as well as the associated resource utilization, continues to grow with the underlying data set. Because of this unbounded growth, end-user statements and transactions take longer and longer, consuming increasingly more resources until they overwhelm the database server. When using unbounded SELECT queries, it's recommended to configure the max_execution_time parameter so that any queries exceeding this duration are aborted, as shown in the following example.
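+
+A minimal sketch, assuming a 30-second limit suits your workload (the value is in milliseconds and applies to SELECT statements only; the table name comes from the earlier sample):
+
+```sql
+-- Cap SELECT statements at 30 seconds for the current session
+SET SESSION max_execution_time = 30000;
+
+-- Or cap a single statement with an optimizer hint
+SELECT /*+ MAX_EXECUTION_TIME(30000) */ * FROM classicmodels.orderdetails;
+```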
+
+## Recommendations
+
+* Ensure that your database has enough resources allocated to run your queries. At times, you may need to scale up the instance size to get more CPU cores to accommodate your workload.
+* Avoid large or long-running transactions by breaking them into smaller transactions.
+* Run SELECT statements on read replica servers when possible.
+* Use alerts on "Host CPU Percent" so that you get notifications if the system exceeds any of the specified thresholds.
+* Use Query Performance Insights or Azure Workbooks to identify any problematic or slowly running queries, and then optimize them.
+* For production database servers, collect diagnostics at regular intervals to ensure that everything is running smoothly. If not, troubleshoot and resolve any issues that you identify.
+
+## Next steps
+
+To find peer answers to your most important questions or to post or answer a question, visit [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-database-mysql).
mysql Howto Troubleshoot Low Memory Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-low-memory-issues.md
+
+ Title: Troubleshoot low memory issues in Azure Database for MySQL
+description: Learn how to troubleshoot low memory issues in Azure Database for MySQL.
++++ Last updated : 4/22/2022++
+# Troubleshoot low memory issues in Azure Database for MySQL
++
+To help ensure that a MySQL database server performs optimally, it's very important to have the appropriate memory allocation and utilization. By default, when you create an instance of Azure Database for MySQL, the available physical memory is dependent on the tier and size you select for your workload. In addition, memory is allocated for buffers and caches to improve database operations. For more information, see [How MySQL Uses Memory](https://dev.mysql.com/doc/refman/5.7/en/memory-use.html).
+
+Note that the Azure Database for MySQL service consumes memory to achieve as high a cache hit rate as possible. As a result, memory utilization can often hover between 80 and 90% of the available physical memory of an instance. Unless there's an issue with the progress of the query workload, this isn't a concern. However, you may run into out-of-memory issues if, for example, you have:
+
+* Configured buffers that are too large.
+* Suboptimal queries running.
+* Queries performing joins and sorting large data sets.
+* Set the maximum connections on a database server too high.
+
+A majority of a server's memory is used by InnoDB's global buffers and caches, which include components such as **innodb_buffer_pool_size**, **innodb_log_buffer_size**, **key_buffer_size**, and **query_cache_size**.
+
+The value of the **innodb_buffer_pool_size** parameter specifies the area of memory in which InnoDB caches the database tables and index-related data. MySQL tries to accommodate as much table and index-related data in the buffer pool as possible. A larger buffer pool means that fewer I/O operations are diverted to disk. You can view the current setting as shown in the following example.
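+
+A quick way to view the current setting (a sketch; the value is reported in bytes):
+
+```sql
+SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
+
+-- Or express the value in gigabytes
+SELECT @@innodb_buffer_pool_size / 1024 / 1024 / 1024 AS buffer_pool_gb;
+```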
+
+## Monitoring memory usage
+
+Azure Database for MySQL provides a range of metrics to gauge the performance of your database instance. To better understand the memory utilization for your database server, view the **Host Memory Percent** or **Memory Percent** metrics.
+
+![Viewing memory utilization metrics](media/howto-troubleshoot-low-memory-issues/avg-host-memory-percentage.png)
+
+If you notice that memory utilization has suddenly increased and that available memory is dropping quickly, monitor other metrics, such as **Host CPU Percent**, **Total Connections**, and **IO Percent**, to determine if a sudden spike in the workload is the source of the issue.
+
+It's important to note that each connection established with the database server requires the allocation of some amount of memory. As a result, a surge in database connections can cause memory shortages.
+
+## Causes of high memory utilization
+
+Let's look at some more causes of high memory utilization in MySQL. These causes are dependent on the characteristics of the workload.
+
+### An increase in temporary tables
+
+MySQL uses "temporary tables", which are a special type of table designed to store a temporary result set. Temporary tables can be reused several times during a session. Because any temporary tables created are local to a session, different sessions can have different temporary tables. In production systems with many sessions performing compilations of large temporary result sets, you should regularly check the global status counter created_tmp_tables, which tracks the number of temporary tables being created during peak hours. A large number of in-memory temporary tables can quickly lead to low available memory in an instance of Azure Database for MySQL.
+
+With MySQL, temporary table size is determined by the values of two parameters, as described in the following table.
+
+| **Parameter** | **Description** |
+|-|-|
+| tmp_table_size | Specifies the maximum size of internal, in-memory temporary tables. |
+| max_heap_table_size | Specifies the maximum size to which user created MEMORY tables can grow. |
+
+> [!NOTE]
+> When determining the maximum size of an internal, in-memory temporary table, MySQL considers the lower of the values set for the tmp_table_size and max_heap_table_size parameters.
+>
+
+#### Recommendations
+
+To troubleshoot low memory issues related to temporary tables, consider the following recommendations.
+
+* Before increasing the tmp_table_size value, verify that your database is indexed properly, especially for columns involved in joins and GROUP BY operations. Using the appropriate indexes on underlying tables limits the number of temporary tables that are created. Increasing the value of this parameter and the max_heap_table_size parameter without verifying your indexes can allow inefficient queries to run without indexes and create more temporary tables than necessary.
+* Tune the values of the max_heap_table_size and tmp_table_size parameters to address the needs of your workload.
+* If the values you set for the max_heap_table_size and tmp_table_size parameters are too low, temporary tables may regularly spill to storage, adding latency to your queries. You can track temporary tables spilling to disk by using the global status counter created_tmp_disk_tables. By comparing the values of the created_tmp_disk_tables and created_tmp_tables variables, you can see how many of the internal temporary tables created had to go to disk, as shown in the check after this list.
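+
+One way to compare the two counters (a sketch):
+
+```sql
+-- If Created_tmp_disk_tables is a large share of Created_tmp_tables,
+-- internal temporary tables are regularly spilling to disk
+SHOW GLOBAL STATUS
+WHERE Variable_name IN ('Created_tmp_tables', 'Created_tmp_disk_tables');
+```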
+
+### Table cache
+
+As a multi-threaded system, MySQL maintains a cache of table file descriptors so that multiple sessions can have the same table open independently and concurrently. MySQL uses some amount of memory and OS file descriptors to maintain this table cache. The variable table_open_cache defines the size of the table cache.
+
+#### Recommendations
+
+To troubleshoot low memory issues related to the table cache, consider the following recommendations.
+
+* The parameter table_open_cache specifies the number of open tables for all threads. Increasing this value increases the number of file descriptors that mysqld requires. You can check whether you need to increase the table cache by checking the opened_tables status variable, as shown in the example after this list. Increase the value of this parameter in increments to accommodate your workload.
+* Setting table_open_cache too low may cause MySQL to spend more time opening and closing tables needed for query processing.
+* Setting this value too high may increase memory usage and cause the operating system to run out of file descriptors, leading to refused connections or failures to process queries.
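+
+For example (a sketch; compare the counter's growth against the configured cache size):
+
+```sql
+-- A steadily growing Opened_tables value relative to table_open_cache
+-- suggests the table cache is too small
+SHOW GLOBAL STATUS LIKE 'Opened_tables';
+SHOW VARIABLES LIKE 'table_open_cache';
+```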
+
+### Other buffers and the query cache
+
+When troubleshooting issues related to low memory, a few additional buffers and a cache can help with the resolution.
+
+#### Net buffer (net_buffer_length)
+
+The net buffer sets the initial size of the connection and result buffers for each client thread; each can grow to the value specified by max_allowed_packet. If query statements are large, for example if inserts or updates carry very large values, then increasing the value of the net_buffer_length parameter can help to improve performance.
+
+#### Join buffer (join_buffer_size)
+
+The join buffer is allocated to cache table rows when a join can't use an index. If your database has many joins performed without indexes, consider adding indexes for faster joins. If you can't add indexes, then consider increasing the value of the join_buffer_size parameter, which specifies the amount of memory allocated per connection.
+
+#### Sort buffer (sort_buffer_size)
+
+The sort buffer is used for performing sorts for some ORDER BY and GROUP BY queries. If you see many Sort_merge_passes per second in the SHOW GLOBAL STATUS output, consider increasing the sort_buffer_size value to speed up ORDER BY or GROUP BY operations that can't be improved with query optimization or better indexing. You can check the counter as shown in the following example.
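+
+For example (a sketch; a steadily increasing value indicates heavy sort-merge activity):
+
+```sql
+SHOW GLOBAL STATUS LIKE 'Sort_merge_passes';
+```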
+
+Avoid arbitrarily increasing the sort_buffer_size value unless you have related information that indicates otherwise. Memory for this buffer is assigned per connection. In the MySQL documentation, the Server System Variables article calls out that on Linux, there are two thresholds, 256 KB and 2 MB, and that using larger values can significantly slow down memory allocation. As a result, avoid increasing the sort_buffer_size value beyond 2 MB, as the performance penalty will outweigh any benefits.
+
+#### Query cache (query_cache_size)
+
+The query cache is an area of memory that is used for caching query result sets. The query_cache_size parameter determines the amount of memory that is allocated for caching query results. By default, the query cache is disabled. In addition, the query cache is deprecated in MySQL version 5.7.20 and removed in MySQL version 8.0. If the query cache is currently enabled in your solution, before disabling it, verify that there aren't any queries relying on it.
+
+### Calculating buffer cache hit ratio
+
+The buffer cache hit ratio is important in a MySQL environment for understanding whether the buffer pool can accommodate the workload's requests. As a general rule of thumb, it's good practice to keep the buffer pool cache hit ratio above 99%.
+
+To compute the InnoDB buffer pool hit ratio for read requests, run SHOW GLOBAL STATUS to retrieve the "Innodb_buffer_pool_read_requests" and "Innodb_buffer_pool_reads" counters, and then compute the value by using the formula shown below.
+
+```
+InnoDB Buffer pool hit ratio = Innodb_buffer_pool_read_requests / (Innodb_buffer_pool_read_requests + Innodb_buffer_pool_reads) * 100
+```
+
+Consider the following example.
+
+```
+mysql> show global status like "innodb_buffer_pool_reads";
++--------------------------+-------+
+| Variable_name            | Value |
++--------------------------+-------+
+| Innodb_buffer_pool_reads | 197   |
++--------------------------+-------+
+1 row in set (0.00 sec)
+
+mysql> show global status like "innodb_buffer_pool_read_requests";
++----------------------------------+----------+
+| Variable_name                    | Value    |
++----------------------------------+----------+
+| Innodb_buffer_pool_read_requests | 22479167 |
++----------------------------------+----------+
+1 row in set (0.00 sec)
+```
+
+Using the above values, computing the InnoDB buffer pool hit ratio for read requests yields the following result:
+
+```
+InnoDB Buffer pool hit ratio = 22479167/(22479167+197) * 100
+
+Buffer hit ratio = 99.99%
+```
+
+The buffer cache hit ratio above applies to read requests. For DML statements, writes to the InnoDB buffer pool happen in the background. However, if it's necessary to read or create a page and no clean pages are available, the session must first wait for pages to be flushed.
+
+The Innodb_buffer_pool_wait_free counter counts how many times this has happened. A value of Innodb_buffer_pool_wait_free greater than 0 is a strong indicator that the InnoDB buffer pool is too small, and that an increase in the buffer pool size or instance size is required to accommodate the writes coming into the database. You can check the counter as shown in the following example.
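+
+A quick check (a sketch):
+
+```sql
+-- A value greater than 0 indicates that sessions had to wait for page flushes
+SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_wait_free';
+```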
+
+## Recommendations
+
+* Ensure that your database has enough resources allocated to run your queries. At times, you may need to scale up the instance size to get more physical memory so that the buffers and caches can accommodate your workload.
+* Avoid large or long-running transactions by breaking them into smaller transactions.
+* Use alerts on "Host Memory Percent" so that you get notifications if the system exceeds any of the specified thresholds.
+* Use Query Performance Insights or Azure Workbooks to identify any problematic or slowly running queries, and then optimize them.
+* For production database servers, collect diagnostics at regular intervals to ensure that everything is running smoothly. If not, troubleshoot and resolve any issues that you identify.
+
+## Next steps
+
+To find peer answers to your most important questions or to post or answer a question, visit [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-database-mysql).
mysql Howto Troubleshoot Query Performance New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-query-performance-new.md
+
+ Title: Troubleshoot query performance in Azure Database for MySQL
+description: Learn how to troubleshoot query performance in Azure Database for MySQL.
++++ Last updated : 4/22/2022++
+# Troubleshoot query performance in Azure Database for MySQL
++
+Query performance can be impacted by multiple factors, so it's first important to look at the scope of the symptoms you're experiencing in your Azure Database for MySQL server. For example, is query performance slow for:
+
+* All queries running on the Azure Database for MySQL server?
+* A specific set of queries?
+* A specific query?
+
+Also keep in mind that any recent changes to the structure or underlying data of the tables you're querying can affect performance.
+
+## Enabling logging functionality
+
+Before analyzing individual queries, you need to define query benchmarks. With this information, you can implement logging functionality on the database server to trace queries that exceed a threshold you specify based on the needs of the application.
+
+With Azure Database for MySQL, it's recommended to use the slow query log feature to identify queries that take longer than *N* seconds to run. After you've identified the queries from the slow query log, you can use MySQL diagnostics to troubleshoot these queries.
+
+Before you can begin to trace long-running queries, you need to enable the `slow_query_log` parameter by using the Azure portal or Azure CLI. With this parameter enabled, you should also configure the value of the `long_query_time` parameter to specify the number of seconds that queries can run before being identified as "slow running" queries. The default value of the parameter is 10 seconds, but you can adjust the value to address the needs of your application's SLA.
+
+[ ![Flexible Server slow query log interface.](media/howto-troubleshoot-query-performance-new/slow-query-log.png) ](media/howto-troubleshoot-query-performance-new/slow-query-log.png#lightbox)
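+
+After the parameters are configured, you can verify the current values from a client session (a sketch; the values reflect the server parameters you set in the portal or CLI):
+
+```sql
+SHOW VARIABLES
+WHERE Variable_name IN ('slow_query_log', 'long_query_time');
+```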
+
+While the slow query log is a great tool for tracing long running queries, there are certain scenarios in which it might not be effective. For example, the slow query log:
+
+* Negatively impacts performance if the number of queries is very high or if the query statement is very large. Adjust the value of the `long_query_time` parameter accordingly.
+* May not be helpful if you've also enabled the `log_queries_not_using_index` parameter, which specifies to log queries expected to retrieve all rows. Queries performing a full index scan take advantage of an index, but they'd be logged because the index doesn't limit the number of rows returned.
+
+## Retrieving information from the logs
+
+Logs are available for up to seven days from their creation. You can list and download slow query logs via the Azure portal or Azure CLI. In the Azure portal, navigate to your server, and under **Monitoring**, select **Server logs**. Then select the downward arrow next to an entry to download the logs associated with the date and time you're investigating.
+
+[ ![Flexible Server retrieving data from the logs.](media/howto-troubleshoot-query-performance-new/retrieving-information-logs.png) ](media/howto-troubleshoot-query-performance-new/retrieving-information-logs.png#lightbox)
+
+In addition, if your slow query logs are integrated with Azure Monitor logs through Diagnostic logs, you can run queries in an editor to analyze them further:
+
+```kusto
+AzureDiagnostics
+| where Resource == '<your server name>'
+| where Category == 'MySqlSlowLogs'
+| project TimeGenerated, Resource , event_class_s, start_time_t , query_time_d, sql_text_s
+| where query_time_d > 10
+```
+
+> [!NOTE]
+> For more examples to get you started with diagnosing slow query logs via Diagnostic logs, see [Analyze logs in Azure Monitor Logs](./concepts-server-logs.md#analyze-logs-in-azure-monitor-logs).
+>
+
+The following snippet depicts a sample entry from the slow query log.
+
+```
+# Time: 2021-11-13T10:07:52.610719Z
+# User@Host: root[root] @ [172.30.209.6] Id: 735026
+# Query_time: 25.314811 Lock_time: 0.000000 Rows_sent: 126 Rows_examined: 443308
+use employees;
+SET timestamp=1596448847;
+select * from titles where DATE(from_date) > DATE('1994-04-05') AND title like '%senior%';;
+```
+
+Notice that the query ran for roughly 25 seconds, examined over 443k rows, and returned 126 rows of results.
+
+Usually, you should focus on queries with high values for Query_time and Rows_examined. However, if you notice queries with a high Query_time but only a few Rows_examined, this often indicates the presence of a resource bottleneck. For these cases, check whether there's any IO throttling or high CPU usage.
+
+## Profiling a query
+
+After you've identified a specific slow running query, you can use the EXPLAIN command and profiling to gather additional detail.
+
+To check the query plan, run the following command:
+
+```
+EXPLAIN <QUERY>
+```
+
+> [!NOTE]
+> For more information about using EXPLAIN statements, see [How to use EXPLAIN to profile query performance in Azure Database for MySQL](./howto-troubleshoot-query-performance.md).
+>
+
+In addition to creating an EXPLAIN plan for a query, you can use the SHOW PROFILE command, which allows you to diagnose the execution of statements that have been run within the current session.
+
+To enable profiling and profile a specific query in a session, run the following set of commands:
+
+```
+SET profiling = 1;
+<QUERY>;
+SHOW PROFILES;
+SHOW PROFILE FOR QUERY <X>;
+```
+
+> [!NOTE]
+> Profiling individual queries is only available in a session and historical statements cannot be profiled.
+>
+
+Let's take a closer look at using these commands to profile a query. First, to enable profiling for the current session, run the `SET PROFILING = 1` command:
+
+```
+mysql> SET PROFILING = 1;
+Query OK, 0 rows affected, 1 warning (0.00 sec)
+```
+
+Next, execute a suboptimal query that performs a full table scan:
+
+```
+mysql> select * from sbtest8 where c like '%99098187165%';
++----+---------+--------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+
+| id | k       | c                                                                                                                        | pad                                                         |
++----+---------+--------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+
+| 10 | 5035785 | 81674956652-89815953173-84507133182-62502329576-99098187165-62672357237-37910808188-52047270287-89115790749-78840418590 | 91637025586-81807791530-84338237594-90990131533-07427691758 |
++----+---------+--------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+
+1 row in set (27.60 sec)
+```
+
+Then, display a list of all available query profiles by running the `SHOW PROFILES` command:
+
+```
+mysql> SHOW PROFILES;
++----------+-------------+-----------------------------------------------------+
+| Query_ID | Duration    | Query                                               |
++----------+-------------+-----------------------------------------------------+
+|        1 | 27.59450000 | select * from sbtest8 where c like '%99098187165%'  |
++----------+-------------+-----------------------------------------------------+
+1 row in set, 1 warning (0.00 sec)
+```
+
+Finally, to display the profile for query 1, run the `SHOW PROFILE FOR QUERY 1` command.
+
+```
+mysql> SHOW PROFILE FOR QUERY 1;
++----------------------+-----------+
+| Status               | Duration  |
++----------------------+-----------+
+| starting             |  0.000102 |
+| checking permissions |  0.000028 |
+| Opening tables       |  0.000033 |
+| init                 |  0.000035 |
+| System lock          |  0.000018 |
+| optimizing           |  0.000017 |
+| statistics           |  0.000025 |
+| preparing            |  0.000019 |
+| executing            |  0.000011 |
+| Sending data         | 27.594038 |
+| end                  |  0.000041 |
+| query end            |  0.000014 |
+| closing tables       |  0.000013 |
+| freeing items        |  0.000088 |
+| cleaning up          |  0.000020 |
++----------------------+-----------+
+15 rows in set, 1 warning (0.00 sec)
+```
+
+## Listing the most used queries on the database server
+
+Whenever you're troubleshooting query performance, it's helpful to understand which queries are most often run on your MySQL server. You can use this information to gauge whether any of the top queries are taking longer than usual to run. In addition, a developer or DBA can use this information to identify whether any query has had a sudden increase in execution count and duration.
+
+To list the top 10 most executed queries against your Azure Database for MySQL server, run the following query:
+
+```
+SELECT digest_text AS normalized_query,
+ count_star AS all_occurrences,
+ Concat(Round(sum_timer_wait / 1000000000000, 3), ' s') AS total_time,
+ Concat(Round(min_timer_wait / 1000000000000, 3), ' s') AS min_time,
+ Concat(Round(max_timer_wait / 1000000000000, 3), ' s') AS max_time,
+ Concat(Round(avg_timer_wait / 1000000000000, 3), ' s') AS avg_time,
+ Concat(Round(sum_lock_time / 1000000000000, 3), ' s') AS total_locktime,
+ sum_rows_affected AS sum_rows_changed,
+ sum_rows_sent AS sum_rows_selected,
+ sum_rows_examined AS sum_rows_scanned,
+ sum_created_tmp_tables,
+ sum_select_scan,
+ sum_no_index_used,
+ sum_no_good_index_used
+FROM performance_schema.events_statements_summary_by_digest
+ORDER BY sum_timer_wait DESC LIMIT 10;
+```
+
+> [!NOTE]
+> Use this query to benchmark the top executed queries in your database server and determine if there's been a change in the top queries or if any existing queries in the initial benchmark have increased in run duration.
+>
+
+## Monitoring InnoDB garbage collection
+
+When InnoDB garbage collection is blocked or delayed, the database can develop a substantial purge lag that can negatively affect storage utilization and query performance.
+
+The InnoDB rollback segment history list length (HLL) measures the number of change records stored in the undo log. A growing HLL value indicates that InnoDB's garbage collection threads (purge threads) aren't keeping up with the write workload, or that purging is blocked by a long-running query or transaction.
+
+Excessive delays in garbage collection can have severe, negative consequences:
+
+* The InnoDB system tablespace will expand, thus accelerating the growth of the underlying storage volume. At times, the system tablespace can swell by several terabytes as a result of a blocked purge.
+* Delete-marked records won't be removed in a timely fashion. This can cause InnoDB tablespaces to grow and prevent the engine from reusing the storage occupied by these records.
+* The performance of all queries might degrade, and CPU utilization might increase because of the growth of InnoDB storage structures.
+
+As a result, it's important to monitor HLL values, patterns, and trends.
+
+### Finding HLL values
+
+You can find the HLL value by running the show engine innodb status command. The value will be listed in the output, under the TRANSACTIONS heading:
+
+```
+mysql> show engine innodb status\G
+*************************** 1. row ***************************
+
+(...)
+
+------------
+TRANSACTIONS
+------------
+Trx id counter 52685768
+Purge done for trx's n:o < 52680802 undo n:o < 0 state: running but idle
+History list length 2964300
+
+(...)
+```
+
+You can also determine the HLL value by querying the information_schema.innodb_metrics table:
+
+```
+mysql> select count from information_schema.innodb_metrics
+ -> where name = 'trx_rseg_history_len';
++---------+
+| count   |
++---------+
+| 2964300 |
++---------+
+1 row in set (0.00 sec)
+```
+
+### Interpreting HLL values
+
+When interpreting HLL values, consider the guidelines listed in the following table:
+
+| **Value** | **Notes** |
+|||
+| Less than ~10,000 | Normal values, indicating that garbage collection isn't falling behind. |
+| Between ~10,000 and ~1,000,000 | These values indicate a minor lag in garbage collection. Such values may be acceptable if they remain steady and don't increase. |
+| Greater than ~1,000,000 | These values should be investigated and may require corrective actions. |
+
+### Addressing excessive HLL values
+
+If the HLL shows large spikes or exhibits a pattern of periodic growth, investigate the queries and transactions running on your Azure Database for MySQL instance immediately. Then you can resolve any workload issues that might be preventing the progress of the garbage collection process. While the database isn't expected to be entirely free of purge lag, you must not let the lag grow uncontrollably.
+
+To obtain transaction information from the `information_schema.innodb_trx` table, for example, run the following commands:
+
+```
+select * from information_schema.innodb_trx
+order by trx_started asc\G
+```
+
+The detail in the `trx_started` column will help you calculate the transaction age.
+
+```
+mysql> select * from information_schema.innodb_trx
+ -> order by trx_started asc\G
+*************************** 1. row ***************************
+ trx_id: 8150550
+ trx_state: RUNNING
+ trx_started: 2021-11-13 20:50:11
+ trx_requested_lock_id: NULL
+ trx_wait_started: NULL
+ trx_weight: 0
+ trx_mysql_thread_id: 19
+ trx_query: select * from employees where DATE(hire_date) > DATE('1998-04-05') AND first_name like '%geo%';
+(…)
+```
+
+For information about current database sessions, including the time spent in the session's current state, check the `information_schema.processlist` table. The following output, for example, shows a session that's been actively executing a query for the last 1462 seconds:
+
+```
+mysql> select user, host, db, command, time, info
+ -> from information_schema.processlist
+ -> order by time desc\G
+*************************** 1. row ***************************
+ user: test
+ host: 172.31.19.159:38004
+ db: employees
+command: Query
+ time: 1462
+ info: select * from employees where DATE(hire_date) > DATE('1998-04-05') AND first_name like '%geo%';
+
+(...)
+```
+
+## Recommendations
+
+* Ensure that your database has enough resources allocated to run your queries. At times, you may need to scale up the instance size to get more CPU cores and additional memory to accommodate your workload.
+* Avoid large or long-running transactions by breaking them into smaller transactions.
+* Configure innodb_purge_threads as per your workload to improve efficiency for background purge operations.
+ > [!NOTE]
+ > Test any changes to this server variable for each environment to gauge the change in engine behavior.
+ >
+
+* Use alerts on "Host CPU Percent", "Host Memory Percent" and "Total Connections" so that you get notifications if the system exceeds any of the specified thresholds.
+* Use Query Performance Insights or Azure Workbooks to identify any problematic or slowly running queries, and then optimize them.
+* For production database servers, collect diagnostics at regular intervals to ensure that everything is running smoothly. If not, troubleshoot and resolve any issues that you identify.
+
+## Next steps
+
+To find peer answers to your most important questions or to post or answer a question, visit [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-database-mysql).
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/overview.md
Previously updated : 09/01/2020
+recommendations: false
Last updated : 04/20/2022
-# What is Azure Database for PostgreSQL - Hyperscale (Citus)?
+# What is Hyperscale (Citus)?
-Azure Database for PostgreSQL is a relational database service in the Microsoft
-cloud built for developers. It's based on the community version of open-source
-[PostgreSQL](https://www.postgresql.org/) database engine.
+## The superpower of distributed tables
-Hyperscale (Citus) is a deployment option that horizontally scales queries
-across multiple machines using sharding. Its query engine parallelizes incoming
-SQL queries across these servers for faster responses on large datasets. It
-serves applications that require greater scale and performance than other
-deployment options: generally workloads that are approaching--or already
-exceed--100 GB of data.
+Hyperscale (Citus) is PostgreSQL extended with the superpower of "distributed
+tables." This superpower enables you to build highly scalable relational apps.
+You can start building apps on a single node server group, the same way you
+would with PostgreSQL. As your app's scalability and performance requirements
+grow, you can seamlessly scale to multiple nodes by transparently distributing
+your tables.
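+
+For example, when your app outgrows a single node, distributing a table across the server group is a single SQL call (a sketch; the github_events table and its repo_id distribution column are hypothetical):
+
+```sql
+-- Turn an ordinary PostgreSQL table into a distributed table,
+-- sharded across the nodes of the server group by repo_id
+SELECT create_distributed_table('github_events', 'repo_id');
+```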
-Hyperscale (Citus) delivers:
+![distributed architecture](../media/overview-hyperscale/distributed.png)
-- Horizontal scaling across multiple machines using sharding-- Query parallelization across these servers for faster responses on large
- datasets
-- Excellent support for multi-tenant applications, real-time operational
- analytics, and high throughput transactional workloads
+Real-world customer applications built on Citus include SaaS apps, real-time
+operational analytics apps, and high throughput transactional apps. These apps
+span various verticals such as sales & marketing automation, healthcare,
+IoT/telemetry, finance, logistics, and search.
-Applications built for PostgreSQL can run distributed queries on Hyperscale
-(Citus) with standard [connection
-libraries](../concepts-connection-libraries.md) and minimal changes.
+> [!div class="nextstepaction"]
+> [Try the quickstart >](quickstart-create-portal.md)
+
+## Fully managed, resilient database
+
+As Hyperscale (Citus) is a fully managed service, it has all the features for
+worry-free operation in production. Features include:
+
+* automatic high availability
+* backups
+* built-in pgBouncer
+* read-replicas
+* easy monitoring
+* private endpoints
+* encryption
+* and more
+
+## Always the latest PostgreSQL features
+
+Hyperscale (Citus) is built around the open-source
+[Citus](https://github.com/citusdata/citus) extension to PostgreSQL. Because
+Citus is an extension--not a fork--of the underlying database, it always
+supports the latest PostgreSQL version within one day of release.
+
+Your apps can use the newest PostgreSQL features and extensions, such as
+native partitioning for performance, JSONB support to store and query
+unstructured data, and geospatial functionality via the PostGIS extension.
+It's the speed you need, on the database you love.
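+
+For example, JSONB lets you store and index semi-structured payloads with stock PostgreSQL syntax. The table and query below are an illustrative sketch, not part of the service:
+
+```
+-- JSONB column with a GIN index for fast containment queries.
+CREATE TABLE events (
+    event_id   bigserial,
+    payload    jsonb,
+    created_at timestamptz DEFAULT now()
+);
+CREATE INDEX ON events USING GIN (payload);
+
+-- Find events whose payload contains a matching key/value pair.
+SELECT event_id, payload->>'type' AS event_type
+FROM events
+WHERE payload @> '{"status": "failed"}';
+```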
+
+## Start simply, scale seamlessly
+
+The Basic Tier allows you to deploy Hyperscale (Citus) as a single node, while
+having the superpower of distributing tables. At a few dollars a day, it's the
+most cost-effective way to experience Hyperscale (Citus). Later, if your
+application requires greater scale, you can add nodes and rebalance your data.
+
+![graduating to standard tier](../media/overview-hyperscale/graduate.png)
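+
+As a hedged sketch of the "add nodes and rebalance" step: open-source Citus exposes a shard rebalancer, invoked as shown below once new worker nodes are present. Invocation details on the managed service may differ; verify against the current documentation.
+
+```
+-- Spread existing shards onto newly added worker nodes.
+SELECT rebalance_table_shards();
+```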
+
+## Watch a demo
+
+> [!VIDEO https://www.youtube.com/embed/Q30KQ5wRGxU]
## Next steps -- Get started by [creating your
- first](./quickstart-create-portal.md) Azure Database for
-PostgreSQL - Hyperscale (Citus) server group.
-- See the [pricing
- page](https://azure.microsoft.com/pricing/details/postgresql/) for cost
-comparisons and calculators. Hyperscale (Citus) offers prepaid Reserved
-Instance discounts as well, see [Hyperscale (Citus) RI
-pricing](concepts-reserved-pricing.md) pages for details.
-- Determine the best [initial
- size](howto-scale-initial.md) for your server group
+> [!div class="nextstepaction"]
+> [Try the quickstart >](quickstart-create-portal.md)
postgresql Quickstart Connect Psql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-connect-psql.md
Title: 'Quickstart: connect to a server group with psql - Hyperscale (Citus) - A
description: Quickstart to connect psql to Azure Database for PostgreSQL - Hyperscale (Citus).
+recommendations: false
Previously updated : 02/09/2022 Last updated : 04/20/2022 # Connect to a Hyperscale (Citus) server group with psql
Now that you've connected to the server group, the next step is to create
tables and shard them for horizontal scaling. > [!div class="nextstepaction"]
-> [Create and distribute tables](quickstart-distribute-tables.md)
+> [Create and distribute tables >](quickstart-distribute-tables.md)
postgresql Quickstart Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-create-portal.md
Title: 'Quickstart: create a server group - Hyperscale (Citus) - Azure Database
description: Quickstart to create and query distributed tables on Azure Database for PostgreSQL Hyperscale (Citus).
+recommendations: false
Previously updated : 02/09/2022 Last updated : 04/20/2022 #Customer intent: As a developer, I want to provision a hyperscale server group so that I can run queries quickly on large datasets. # Create a Hyperscale (Citus) server group in the Azure portal Azure Database for PostgreSQL - Hyperscale (Citus) is a managed service that
-you to run horizontally scalable PostgreSQL databases in the cloud. This
-Quickstart shows you how to create a Hyperscale (Citus) server group using the
-Azure portal. You'll explore distributed data: sharding tables across nodes,
-generating sample data, and running queries that execute on multiple nodes.
+allows you to run horizontally scalable PostgreSQL databases in the cloud.
## Prerequisites
To follow this quickstart, you'll first need to:
an Azure subscription). * Sign in to the [Azure portal](https://portal.azure.com).
-## Create server group
+## Get started with the Basic Tier
-1. Select **Create a resource** (+) in the upper-left corner of the portal.
-2. Select **Databases** > **Azure Database for PostgreSQL**.
- ![create a resource menu](../media/quickstart-hyperscale-create-portal/database-service.png)
+The Basic Tier allows you to deploy Hyperscale (Citus) as a single node, while
+having the superpower of distributing tables. At a few dollars a day, it's the
+most cost-effective way to experience Hyperscale (Citus). Later, if your
+application requires greater scale, you can add nodes and rebalance your data.
+
+Let's get started!
+
+# [Direct link](#tab/direct)
+
+Visit [Create Hyperscale (Citus) server group](https://portal.azure.com/#create/Microsoft.PostgreSQLServerGroup) in the Azure portal.
+
+# [Via portal search](#tab/portal-search)
+
+1. Visit the [Azure portal](https://portal.azure.com/) and search for
+ **citus**. Select **Azure Database for PostgreSQL Hyperscale (Citus)**.
+ ![search for citus](../media/quickstart-hyperscale-create-portal/portal-search.png)
+2. Select **+ Create**.
+ ![create button](../media/quickstart-hyperscale-create-portal/create-button.png)
3. Select the **Hyperscale (Citus) server group** deployment option. ![deployment options](../media/quickstart-hyperscale-create-portal/deployment-option.png)
-4. Fill out the **Basics** form with the following information:
+++
+1. Fill out the **Basics** form.
![basic info form](../media/quickstart-hyperscale-create-portal/basics.png)
- | Setting | Description |
- |-|-|
- | Subscription | The Azure subscription that you want to use for your server. If you have multiple subscriptions, choose the subscription in which you'd like to be billed for the resource. |
- | Resource group | A new resource group name or an existing one from your subscription. |
- | Server group name | A unique name that identifies your Hyperscale server group. The domain name postgres.database.azure.com is appended to the server group name you provide. The server can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain fewer than 40 characters. |
- | Location | The location that is closest to you. |
- | Admin username | Currently required to be the value `citus`, and can't be changed. |
- | Password | A new password for the server admin account. It must contain between 8 and 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (!, $, #, %, etc.). |
- | Version | The latest PostgreSQL major version, unless you have specific requirements. |
+ Most options are self-explanatory. Note that the server group name will
+ determine the DNS name your applications use to connect, in the form
+ `server-group-name.postgres.database.azure.com`. Also, the admin username
+ is required to be the value `citus`.
-5. Select **Configure server group**.
+2. Select **Configure server group**.
![compute and storage](../media/quickstart-hyperscale-create-portal/compute.png) For this quickstart, you can accept the default value of **Basic** for
- **Tiers**. The other option, standard tier, creates worker nodes for
- greater total data capacity and query parallelism. See
- [tiers](concepts-server-group.md#tiers) for a more in-depth comparison.
+ **Tiers**. The Basic Tier allows you to experiment with a single-node
+ server group for a few dollars a day.
-6. Select **Save**.
+3. Select **Save**.
-7. Select **Next : Networking >** at the bottom of the screen.
-8. In the **Networking** tab, select **Allow public access from Azure services
+4. Select **Next : Networking >** at the bottom of the screen.
+5. In the **Networking** tab, select **Allow public access from Azure services
and resources within Azure to this server group**. ![networking configuration](../media/quickstart-hyperscale-create-portal/networking.png)
-9. Select **Review + create** and then **Create** to create the server.
+6. Select **Review + create** and then **Create** to create the server.
Provisioning takes a few minutes.
-10. The page will redirect to monitor deployment. When the live status changes
+7. The page will redirect to monitor deployment. When the live status changes
from **Deployment is in progress** to **Your deployment is complete**. After this transition, select **Go to resource**.
To follow this quickstart, you'll first need to:
With your server group created, it's time to connect with a SQL client. > [!div class="nextstepaction"]
-> [Connect to your server group](quickstart-connect-psql.md)
+> [Connect to your server group >](quickstart-connect-psql.md)
postgresql Quickstart Distribute Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-distribute-tables.md
Title: 'Quickstart: distribute tables - Hyperscale (Citus) - Azure Database for
description: Quickstart to distribute table data across nodes in Azure Database for PostgreSQL - Hyperscale (Citus).
+recommendations: false
Previously updated : 02/09/2022 Last updated : 04/20/2022 # Model and load data
Now we have a table sharded and loaded with data. Next, let's try running
queries across the data in these shards. > [!div class="nextstepaction"]
-> [Run distributed queries](quickstart-run-queries.md)
+> [Run distributed queries >](quickstart-run-queries.md)
postgresql Quickstart Run Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-run-queries.md
Title: 'Quickstart: Run queries - Hyperscale (Citus) - Azure Database for Postgr
description: Quickstart to run queries on table data in Azure Database for PostgreSQL - Hyperscale (Citus).
+recommendations: false
Previously updated : 02/09/2022 Last updated : 04/20/2022 # Run queries
purview Abap Functions Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/abap-functions-deployment-guide.md
This article describes the steps required to deploy this module.
## Prerequisites
-Download the SAP ABAP function module source code from Microsoft Purview Studio. After you register a source for [SAP ECC](register-scan-sapecc-source.md), [SAP S/4HANA](register-scan-saps4hana-source.md), or [SAP BW](register-scan-sap-bw.md), you can find a download link on top as shown in the following image. You can also see the link when you create a new scan or edit a scan.
+Download the SAP ABAP function module source code from the Microsoft Purview governance portal. After you register a source for [SAP ECC](register-scan-sapecc-source.md), [SAP S/4HANA](register-scan-saps4hana-source.md), or [SAP BW](register-scan-sap-bw.md), you can find a download link on top as shown in the following image. You can also see the link when you create a new scan or edit a scan.
## Deploy the module
After the module is created, specify the following information:
1. Go to the **Source code** tab. There are two ways to deploy code for the function:
- 1. On the main menu, upload the text file you downloaded from Microsoft Purview Studio as described in [Prerequisites](#prerequisites). To do so, select **Utilities** > **More Utilities** > **Upload/Download** > **Upload**.
+ 1. On the main menu, upload the text file you downloaded from the Microsoft Purview governance portal as described in [Prerequisites](#prerequisites). To do so, select **Utilities** > **More Utilities** > **Upload/Download** > **Upload**.
1. Alternatively, open the file and copy and paste the contents in the **Source code** area.
purview Asset Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/asset-insights.md
In Microsoft Purview, you can register and scan source types. Once the scan is c
1. Navigate to your Microsoft Purview account in the Azure portal.
-1. On the **Overview** page, in the **Get Started** section, select the **Open Microsoft Purview Studio** tile.
+1. On the **Overview** page, in the **Get Started** section, select the **Open Microsoft Purview governance portal** tile.
:::image type="content" source="./media/asset-insights/portal-access.png" alt-text="Launch Microsoft Purview from the Azure portal":::
purview Catalog Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-conditional-access.md
The following steps show how to configure Microsoft Purview to enforce a Conditi
## Prerequisites -- When multi-factor authentication is enabled, to sign in to Microsoft Purview Studio, you must perform multi-factor authentication.
+- When multi-factor authentication is enabled, to sign in to the Microsoft Purview governance portal, you must perform multi-factor authentication.
## Configure conditional access
The following steps show how to configure Microsoft Purview to enforce a Conditi
## Next steps -- [Use Microsoft Purview Studio](./use-purview-studio.md)
+- [Use the Microsoft Purview governance portal](./use-purview-studio.md)
purview Catalog Lineage User Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-lineage-user-guide.md
To access lineage information for an asset in Microsoft Purview, follow the step
1. In the Azure portal, go to the [Microsoft Purview accounts page](https://aka.ms/purviewportal).
-1. Select your Microsoft Purview account from the list, and then select **Open Microsoft Purview Studio** from the **Overview** page.
+1. Select your Microsoft Purview account from the list, and then select **Open Microsoft Purview governance portal** from the **Overview** page.
-1. On the Microsoft Purview Studio **Home** page, search for a dataset name or the process name such as ADF Copy or Data Flow activity. And then press Enter.
+1. On the Microsoft Purview governance portal **Home** page, search for a dataset name or the process name such as ADF Copy or Data Flow activity. And then press Enter.
1. From the search results, select the asset and select its **Lineage** tab.
purview Catalog Managed Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-managed-vnet.md
Previously updated : 03/17/2022 Last updated : 04/21/2022 # Customer intent: As a Microsoft Purview admin, I want to set up Managed Virtual Network and managed private endpoints for my Microsoft Purview account. # Use a Managed VNet with your Microsoft Purview account
-> [!IMPORTANT]
-> Microsoft Purview Managed Vnet, VNet Integration Runtime, and managed private endpoint connections are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- > [!IMPORTANT] > Currently, Managed Virtual Network and managed private endpoints are available for Microsoft Purview accounts that are deployed in the following regions: > - Australia East
Currently, the following data sources are supported to have a managed private en
Additionally, you can deploy managed private endpoints for your Azure Key Vault resources if you need to run scans using any authentication options rather than Managed Identities, such as SQL Authentication or Account Key. > [!IMPORTANT]
-> If you are planning to scan Azure Synapse workspaces using Managed Virtual Network, you are also required to [configure Azure Synapse workspace firewall access](register-scan-synapse-workspace.md#set-up-azure-synapse-workspace-firewall-access) to enable **Allow Azure services and resources to access this workspace**. Currently, we do not support setting up scans for an Azure Synapse workspace from Microsoft Purview Studio, if you cannot enable **Allow Azure services and resources to access this workspace** on your Azure Synapse workspaces. If you cannot enable the firewall:
+> If you are planning to scan Azure Synapse workspaces using Managed Virtual Network, you are also required to [configure Azure Synapse workspace firewall access](register-scan-synapse-workspace.md#set-up-azure-synapse-workspace-firewall-access) to enable **Allow Azure services and resources to access this workspace**. Currently, we do not support setting up scans for an Azure Synapse workspace from the Microsoft Purview governance portal, if you cannot enable **Allow Azure services and resources to access this workspace** on your Azure Synapse workspaces. If you cannot enable the firewall:
> - You can use [Microsoft Purview Rest API - Scans - Create Or Update](/rest/api/purview/scanningdataplane/scans/create-or-update/) to create a new scan for your Synapse workspaces including dedicated and serverless pools. > - You must use **SQL Authentication** as authentication mechanism.
Before deploying a Managed VNet and Managed VNet Runtime for a Microsoft Purview
:::image type="content" source="media/catalog-managed-vnet/purview-managed-azure-portal.png" alt-text="Screenshot that shows the Microsoft Purview account":::
-2. **Open Microsoft Purview Studio** and navigate to the **Data Map --> Integration runtimes**.
+2. **Open Microsoft Purview governance portal** and navigate to the **Data Map --> Integration runtimes**.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-vnet.png" alt-text="Screenshot that shows Microsoft Purview Data Map menus":::
Before deploying a Managed VNet and Managed VNet Runtime for a Microsoft Purview
:::image type="content" source="media/catalog-managed-vnet/purview-managed-ir-region.png" alt-text="Screenshot that shows to create a Managed VNet Runtime":::
-5. Deploying the Managed VNet Runtime for the first time triggers multiple workflows in Microsoft Purview Studio for creating managed private endpoints for Microsoft Purview and its Managed Storage Account. Click on each workflow to approve the private endpoint for the corresponding Azure resource.
+5. Deploying the Managed VNet Runtime for the first time triggers multiple workflows in the Microsoft Purview governance portal for creating managed private endpoints for Microsoft Purview and its Managed Storage Account. Click on each workflow to approve the private endpoint for the corresponding Azure resource.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-ir-workflows.png" alt-text="Screenshot that shows deployment of a Managed VNet Runtime":::
To scan any data sources using Managed VNet Runtime, you need to deploy and appr
:::image type="content" source="media/catalog-managed-vnet/purview-managed-data-source-pe-azure-approved.png" alt-text="Screenshot that shows approved private endpoint for data sources in Azure portal":::
-7. Inside Microsoft Purview Studio, the managed private endpoint must be shown as approved as well.
+7. Inside the Microsoft Purview governance portal, the managed private endpoint must be shown as approved as well.
- :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-list-2.png" alt-text="Screenshot that shows managed private endpoints including data sources' in purview studio":::
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-list-2.png" alt-text="Screenshot that shows managed private endpoints including data sources' in Purview governance portal":::
### Register and scan a data source using Managed VNet Runtime
You can use any of the following options to scan data sources using Microsoft Pu
To scan a data source using a Managed VNet Runtime and Microsoft Purview managed identity perform these steps:
-1. Select the **Data Map** tab on the left pane in the Microsoft Purview Studio.
+1. Select the **Data Map** tab on the left pane in the Microsoft Purview governance portal.
1. Select the data source that you registered.
To set up a scan using Account Key or SQL Authentication follow these steps:
6. Provide a name for the managed private endpoint, select the Azure subscription and the Azure Key Vault from the drop down lists. Select **create**.
- :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-key-vault-create.png" alt-text="Screenshot that shows how to create a managed private endpoint for Azure Key Vault in Microsoft Purview Studio":::
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-key-vault-create.png" alt-text="Screenshot that shows how to create a managed private endpoint for Azure Key Vault in the Microsoft Purview governance portal":::
7. From the list of managed private endpoints, click on the newly created managed private endpoint for your Azure Key Vault and then click on **Manage approvals in the Azure portal**, to approve the private endpoint in Azure portal.
To set up a scan using Account Key or SQL Authentication follow these steps:
:::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-key-vault-az-approved.png" alt-text="Screenshot that shows approved private endpoint for Azure Key Vault in Azure portal":::
-9. Inside Microsoft Purview Studio, the managed private endpoint must be shown as approved as well.
+9. Inside the Microsoft Purview governance portal, the managed private endpoint must be shown as approved as well.
- :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-list-3.png" alt-text="Screenshot that shows managed private endpoints including Azure Key Vault in purview studio":::
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-list-3.png" alt-text="Screenshot that shows managed private endpoints including Azure Key Vault in Purview governance portal":::
-10. Select the **Data Map** tab on the left pane in the Microsoft Purview Studio.
+10. Select the **Data Map** tab on the left pane in the Microsoft Purview governance portal.
11. Select the data source that you registered.
purview Catalog Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-permissions.md
Microsoft Purview uses a set of predefined roles to control who can access what
- **Data readers** - a role that provides read-only access to data assets, classifications, classification rules, collections, and glossary terms. - **Data source administrator** - a role that allows a user to manage data sources and scans. If a user is granted only the **Data source admin** role on a given data source, they can run new scans using an existing scan rule. To create new scan rules, the user must also be granted either the **Data reader** or **Data curator** role. - **Policy author (Preview)** - a role that allows a user to view, update, and delete Microsoft Purview policies through the policy management app within Microsoft Purview.-- **Workflow administrator** - a role that allows a user to access the workflow authoring page in the Microsoft Purview studio, and publish workflows on collections where they have access permissions. Workflow administrator only has access to authoring, and so will need at least Data reader permission on a collection to be able to access the Purview Studio.
+- **Workflow administrator** - a role that allows a user to access the workflow authoring page in the Microsoft Purview governance portal, and publish workflows on collections where they have access permissions. Workflow administrator only has access to authoring, and so will need at least Data reader permission on a collection to be able to access the Purview governance portal.
> [!NOTE] > At this time, Microsoft Purview Policy author role is not sufficient to create policies. The Microsoft Purview Data source admin role is also required.
Microsoft Purview uses a set of predefined roles to control who can access what
|I need to edit the glossary or set up new classification definitions|Data curator| |I need to view Insights to understand the governance posture of my data estate|Data curator| |My application's Service Principal needs to push data to Microsoft Purview|Data curator|
-|I need to set up scans via the Microsoft Purview Studio|Data curator on the collection **or** data curator **and** data source administrator where the source is registered.|
+|I need to set up scans via the Microsoft Purview governance portal|Data curator on the collection **or** data curator **and** data source administrator where the source is registered.|
|I need to enable a Service Principal or group to set up and monitor scans in Microsoft Purview without allowing them to access the catalog's information |Data source administrator| |I need to put users into roles in Microsoft Purview | Collection administrator | |I need to create and publish access policies | Data source administrator and policy author |
Microsoft Purview uses a set of predefined roles to control who can access what
## Understand how to use Microsoft Purview's roles and collections
-All access control is managed in Microsoft Purview's collections. Microsoft Purview's collections can be found in the [Microsoft Purview Studio](https://web.purview.azure.com/resource/). Open your Microsoft Purview account in the [Azure portal](https://portal.azure.com) and select the Microsoft Purview Studio tile on the Overview page. From there, navigate to the data map on the left menu, and then select the 'Collections' tab.
+All access control is managed in Microsoft Purview's collections. Microsoft Purview's collections can be found in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/). Open your Microsoft Purview account in the [Azure portal](https://portal.azure.com) and select the Microsoft Purview governance portal tile on the Overview page. From there, navigate to the data map on the left menu, and then select the 'Collections' tab.
When a Microsoft Purview account is created, it starts with a root collection that has the same name as the Microsoft Purview account itself. The creator of the Microsoft Purview account is automatically added as a Collection Admin, Data Source Admin, Data Curator, and Data Reader on this root collection, and can edit and manage this collection.
You can assign Microsoft Purview roles to users, security groups and service pri
After creating a Microsoft Purview account, the first thing to do is create collections and assign users to roles within those collections. > [!NOTE]
-> If you created your Microsoft Purview account using a service principal, to be able to access the Microsoft Purview Studio and assign permissions to users, you will need to grant a user collection admin permissions on the root collection.
+> If you created your Microsoft Purview account using a service principal, to be able to access the Microsoft Purview governance portal and assign permissions to users, you will need to grant a user collection admin permissions on the root collection.
> You can use [this Azure CLI command](/cli/azure/purview/account#az-purview-account-add-root-collection-admin): > > ```azurecli
Similarly with the Data Curator and Data Source Admin roles, permissions for tho
### Add users to roles
-Role assignment is managed through the collections. Only a user with the [collection admin role](#roles) can grant permissions to other users on that collection. When new permissions need to be added, a collection admin will access the [Microsoft Purview Studio](https://web.purview.azure.com/resource/), navigate to data map, then the collections tab, and select the collection where a user needs to be added. From the Role Assignments tab they'll be able to add and manage users who need permissions.
+Role assignment is managed through the collections. Only a user with the [collection admin role](#roles) can grant permissions to other users on that collection. When new permissions need to be added, a collection admin will access the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/), navigate to data map, then the collections tab, and select the collection where a user needs to be added. From the Role Assignments tab they'll be able to add and manage users who need permissions.
For full instructions, see our [how-to guide for adding role assignments](how-to-create-and-manage-collections.md#add-role-assignments).
purview Catalog Private Link Account Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-private-link-account-portal.md
In this guide, you will learn how to deploy private endpoints for your Microsoft
The Microsoft Purview _account_ private endpoint is used to add another layer of security by enabling scenarios where only client calls that originate from within the virtual network are allowed to access the Microsoft Purview account. This private endpoint is also a prerequisite for the portal private endpoint.
-The Microsoft Purview _portal_ private endpoint is required to enable connectivity to [Microsoft Purview Studio](https://web.purview.azure.com/resource/) using a private network.
+The Microsoft Purview _portal_ private endpoint is required to enable connectivity to [the Microsoft Purview governance portal](https://web.purview.azure.com/resource/) using a private network.
> [!NOTE] > If you only create _account_ and _portal_ private endpoints, you won't be able to run any scans. To enable scanning on a private network, you will need to [create an ingestion private endpoint also](catalog-private-link-end-to-end.md).
purview Catalog Private Link End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-private-link-end-to-end.md
In this guide, you will learn how to deploy _account_, _portal_ and _ingestion_
The Microsoft Purview _account_ private endpoint is used to add another layer of security by enabling scenarios where only client calls that originate from within the virtual network are allowed to access the Microsoft Purview account. This private endpoint is also a prerequisite for the portal private endpoint.
-The Microsoft Purview _portal_ private endpoint is required to enable connectivity to [Microsoft Purview Studio](https://web.purview.azure.com/resource/) using a private network.
+The Microsoft Purview _portal_ private endpoint is required to enable connectivity to [Microsoft Purview governance portal](https://web.purview.azure.com/resource/) using a private network.
Microsoft Purview can scan data sources in Azure or an on-premises environment by using _ingestion_ private endpoints. Three private endpoint resources are required to be deployed and linked to Microsoft Purview managed resources when ingestion private endpoint is deployed:
purview Catalog Private Link Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-private-link-faqs.md
The Microsoft Purview account private endpoint is used to add another layer of s
### What's the purpose of deploying the Microsoft Purview portal private endpoint?
-The Microsoft Purview portal private endpoint provides private connectivity to Microsoft Purview Studio.
+The Microsoft Purview portal private endpoint provides private connectivity to the Microsoft Purview governance portal.
### What's the purpose of deploying the Microsoft Purview ingestion private endpoints?
Yes. Data sources that aren't connected through a private endpoint can be scanne
Make sure you enable **Allow trusted Microsoft services** to access the resources inside the service endpoint configuration of the data source resource in Azure. For example, if you're going to scan Azure Blob Storage in which the firewalls and virtual networks settings are set to **selected networks**, make sure the **Allow trusted Microsoft services to access this storage account** checkbox is selected as an exception.
-### Can I access Microsoft Purview Studio from a public network if Public network access is set to Deny in Microsoft Purview account networking?
+### Can I access the Microsoft Purview governance portal from a public network if Public network access is set to Deny in Microsoft Purview account networking?
No. Connecting to Microsoft Purview from a public endpoint where **Public network access** is set to **Deny** results in the following error message: "Not authorized to access this Microsoft Purview account. This Microsoft Purview account is behind a private endpoint. Please access the account from a client in the same virtual network (VNet) that has been configured for the Microsoft Purview account's private endpoint."
-In this case, to open Microsoft Purview Studio, either use a machine that's deployed in the same virtual network as the Microsoft Purview portal private endpoint or use a VM that's connected to your CorpNet in which hybrid connectivity is allowed.
+In this case, to open the Microsoft Purview governance portal, either use a machine that's deployed in the same virtual network as the Microsoft Purview portal private endpoint or use a VM that's connected to your CorpNet in which hybrid connectivity is allowed.
### Is it possible to restrict access to the Microsoft Purview managed storage account and event hub namespace (for private endpoint ingestion only) but keep portal access enabled for users across the web?
The VMs in which self-hosted integration runtime is deployed must have outbound
No. However, it's expected that the virtual machine running self-hosted integration runtime can connect to your instance of Microsoft Purview through an internal IP address by using port 443. Use common troubleshooting tools for name resolution and connectivity testing, such as nslookup.exe and Test-NetConnection.
-### Why do I receive the following error message when I try to launch Microsoft Purview Studio from my machine?
+### Why do I receive the following error message when I try to launch the Microsoft Purview governance portal from my machine?
"This Microsoft Purview account is behind a private endpoint. Please access the account from a client in the same virtual network (VNet) that has been configured for the Microsoft Purview account's private endpoint."
-It's likely your Microsoft Purview account is deployed by using Private Link and public access is disabled on your Microsoft Purview account. As a result, you have to browse Microsoft Purview Studio from a virtual machine that has internal network connectivity to Microsoft Purview.
+It's likely your Microsoft Purview account is deployed by using Private Link and public access is disabled on your Microsoft Purview account. As a result, you have to browse the Microsoft Purview governance portal from a virtual machine that has internal network connectivity to Microsoft Purview.
If you're connecting from a VM behind a hybrid network or using a jump machine connected to your virtual network, use common troubleshooting tools for name resolution and connectivity testing, such as nslookup.exe and Test-NetConnection.
purview Catalog Private Link Name Resolution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-private-link-name-resolution.md
As an example, if a Microsoft Purview account name is 'Contoso-Purview', when it
| `Contoso-Purview.purview.azure.com` | CNAME | `Contoso-Purview.privatelink.purview.azure.com` | | `Contoso-Purview.privatelink.purview.azure.com` | CNAME | \<Microsoft Purview public endpoint\> | | \<Microsoft Purview public endpoint\> | A | \<Microsoft Purview public IP address\> |
-| `Web.purview.azure.com` | CNAME | \<Microsoft Purview Studio public endpoint\> |
+| `Web.purview.azure.com` | CNAME | \<Microsoft Purview governance portal public endpoint\> |
The DNS resource records for Contoso-Purview, when resolved in the virtual network hosting the private endpoint, will be:
As an example, if a Microsoft Purview account name is 'Contoso-Purview', when it
| `Contoso-Purview.purview.azure.com` | CNAME | `Contoso-Purview.privatelink.purview.azure.com` | | `Contoso-Purview.privatelink.purview.azure.com` | CNAME | \<Microsoft Purview public endpoint\> | | \<Microsoft Purview public endpoint\> | A | \<Microsoft Purview public IP address\> |
-| `Web.purview.azure.com` | CNAME | \<Microsoft Purview Studio public endpoint\> |
+| `Web.purview.azure.com` | CNAME | \<Microsoft Purview governance portal public endpoint\> |
The DNS resource records for Contoso-Purview, when resolved in the virtual network hosting the private endpoint, will be:
purview Catalog Private Link Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-private-link-troubleshoot.md
This guide summarizes known limitations related to using private endpoints for M
2. If portal private endpoint is deployed, make sure you also deploy account private endpoint.
-3. If portal private endpoint is deployed, and public network access is set to deny in your Microsoft Purview account, make sure you launch [Microsoft Purview Studio](https://web.purview.azure.com/resource/) from internal network.
+3. If the portal private endpoint is deployed, and public network access is set to deny in your Microsoft Purview account, make sure you launch [the Microsoft Purview governance portal](https://web.purview.azure.com/resource/) from an internal network.
<br> - To verify the correct name resolution, you can use a **NSlookup.exe** command line tool to query `web.purview.azure.com`. The result must return a private IP address that belongs to portal private endpoint. - To verify network connectivity, you can use any network test tools to test outbound connectivity to `web.purview.azure.com` endpoint to port **443**. The connection must be successful.
This guide summarizes known limitations related to using private endpoints for M
10. If management machine and self-hosted integration runtime VMs are deployed in on-premises network and you have set up DNS forwarder in your environment, verify DNS and network settings in your environment.
-11. If ingestion private endpoint is used, make sure self-hosted integration runtime is registered successfully inside Microsoft Purview account and shows as running both inside the self-hosted integration runtime VM and in the [Microsoft Purview Studio](https://web.purview.azure.com/resource/) .
+11. If an ingestion private endpoint is used, make sure the self-hosted integration runtime is registered successfully in your Microsoft Purview account and shows as running, both inside the self-hosted integration runtime VM and in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
## Common errors and messages
Not authorized to access this Microsoft Purview account. This Microsoft Purview
User is trying to connect to Microsoft Purview from a public endpoint or using Microsoft Purview public endpoints where **Public network access** is set to **Deny**. ### Resolution
-In this case, to open Microsoft Purview Studio, either use a machine that is deployed in the same virtual network as the Microsoft Purview portal private endpoint or use a VM that is connected to your CorpNet in which hybrid connectivity is allowed.
+In this case, to open the Microsoft Purview governance portal, either use a machine that is deployed in the same virtual network as the Microsoft Purview portal private endpoint or use a VM that is connected to your CorpNet in which hybrid connectivity is allowed.
### Issue You may receive the following error message when scanning a SQL server, using a self-hosted integration runtime:
purview Catalog Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-private-link.md
You can use [Azure private endpoints](../private-link/private-endpoint-overview.
You can deploy Microsoft Purview _account_ private endpoint, to allow only client calls to Microsoft Purview that originate from within the private network.
-To connect to Microsoft Purview Studio using a private network connectivity, you can deploy _portal_ private endpoint.
+To connect to the Microsoft Purview governance portal using a private network connectivity, you can deploy _portal_ private endpoint.
You can deploy _ingestion_ private endpoints if you need to scan Azure IaaS and PaaS data sources inside Azure virtual networks and on-premises data sources through a private connection. This method ensures network isolation for your metadata flowing from the data sources to Microsoft Purview Data Map.
Use the following recommended checklist to perform deployment of Microsoft Purvi
|Scenario |Objectives | |||
-|**Scenario 1** - [Connect to your Microsoft Purview and scan data sources privately and securely](./catalog-private-link-end-to-end.md) |You need to restrict access to your Microsoft Purview account only via a private endpoint, including access to Microsoft Purview Studio, Atlas APIs and scan data sources in on-premises and Azure behind a virtual network using self-hosted integration runtime ensuring end to end network isolation. (Deploy _account_, _portal_ and _ingestion_ private endpoints.) |
-|**Scenario 2** - [Connect privately and securely to your Microsoft Purview account](./catalog-private-link-account-portal.md) | You need to enable access to your Microsoft Purview account, including access to _Microsoft Purview Studio_ and Atlas API through private endpoints. (Deploy _account_ and _portal_ private endpoints). |
+|**Scenario 1** - [Connect to your Microsoft Purview and scan data sources privately and securely](./catalog-private-link-end-to-end.md) |You need to restrict access to your Microsoft Purview account only via a private endpoint, including access to the Microsoft Purview governance portal, Atlas APIs and scan data sources in on-premises and Azure behind a virtual network using self-hosted integration runtime ensuring end to end network isolation. (Deploy _account_, _portal_ and _ingestion_ private endpoints.) |
+|**Scenario 2** - [Connect privately and securely to your Microsoft Purview account](./catalog-private-link-account-portal.md) | You need to enable access to your Microsoft Purview account, including access to _the Microsoft Purview governance portal_ and Atlas API through private endpoints. (Deploy _account_ and _portal_ private endpoints). |
|**Scenario 3** - [Scan data source securely using Managed Virtual Network](./catalog-managed-vnet.md) | You need to scan Azure data sources securely, without having to manage a virtual network or a self-hosted integration runtime VM. (Deploy managed private endpoint for Microsoft Purview, managed storage account and Azure data sources). |
To view list of current limitations related to Microsoft Purview private endpoin
## Next steps - [Deploy end to end private networking](./catalog-private-link-end-to-end.md)-- [Deploy private networking for the Microsoft Purview Studio](./catalog-private-link-account-portal.md)
+- [Deploy private networking for the Microsoft Purview governance portal](./catalog-private-link-account-portal.md)
purview Classification Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/classification-insights.md
Microsoft Purview uses the same sensitive information types as Microsoft 365, al
1. Go to the **Microsoft Purview** [instance screen in the Azure portal](https://aka.ms/purviewportal) and select your Microsoft Purview account.
-1. On the **Overview** page, in the **Get Started** section, select the **Microsoft Purview Studio** tile.
+1. On the **Overview** page, in the **Get Started** section, select the **Microsoft Purview governance portal** tile.
1. In Microsoft Purview, select the **Insights** :::image type="icon" source="media/insights/ico-insights.png" border="false"::: menu item on the left to access your **Insights** area.
purview Concept Best Practices Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-automation.md
Last updated 11/23/2021
# Microsoft Purview automation best practices
-While Microsoft Purview provides an out of the box user experience with Microsoft Purview Studio, not all tasks are suited to the point-and-click nature of the graphical user experience.
+While Microsoft Purview provides an out-of-the-box user experience with the Microsoft Purview governance portal, not all tasks are suited to the point-and-click nature of the graphical user experience.
For example: * Triggering a scan to run as part of an automated process.
purview Concept Best Practices Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-classification.md
Here are some considerations to bear in mind as you're defining classifications:
* The sampling rules apply to resource sets as well. For more information, see the "Resource set file sampling" section in [Supported data sources and file types in Microsoft Purview](./sources-and-scans.md#resource-set-file-sampling). * Custom classifications can't be applied on document type assets using custom classification rules. Classifications for such types can be applied manually only. * Custom classifications aren't included in any default scan rules. Therefore, if automatic assignment of custom classifications is expected, you must deploy and use a custom scan rule that includes the custom classification to run the scan.
-* If you apply classifications manually from Microsoft Purview Studio, such classifications are retained in subsequent scans.
+* If you apply classifications manually from the Microsoft Purview governance portal, such classifications are retained in subsequent scans.
* Subsequent scans won't remove any classifications from assets, if they were detected previously, even if the classification rules are inapplicable. * For *encrypted source* data assets, Microsoft Purview picks only file names, fully qualified names, schema details for structured file types, and database tables. For classification to work, decrypt the encrypted data before you run scans.
purview Concept Best Practices Collections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-collections.md
Consider deploying collections in Microsoft Purview to fulfill the following req
- Consider security and access management as part of your design decision-making process when you build collections in Microsoft Purview. -- Each collection has a name attribute and a friendly name attribute. If you use [Microsoft Purview Studio](https://web.purview.azure.com/resource/) to deploy a collection, the system automatically assigns a random six-letter name to the collection to avoid duplication. To reduce complexity, avoid using duplicated friendly names across your collections, especially in the same level.
+- Each collection has a name attribute and a friendly name attribute. If you use [the Microsoft Purview governance portal](https://web.purview.azure.com/resource/) to deploy a collection, the system automatically assigns a random six-letter name to the collection to avoid duplication. To reduce complexity, avoid using duplicated friendly names across your collections, especially in the same level.
- When you can, avoid duplicating your organizational structure into a deeply nested collection hierarchy. If you can't avoid doing so, be sure to use different names for every collection in the hierarchy to make the collections easy to distinguish.
Consider deploying collections in Microsoft Purview to fulfill the following req
## Define an authorization model
-Microsoft Purview data-plane roles are managed in Microsoft Purview. After you deploy a Microsoft Purview account, the creator of the Microsoft Purview account is automatically assigned the following roles at the root collection. You can use [Microsoft Purview Studio](https://web.purview.azure.com/resource/) or a programmatic method to directly assign and manage roles in Microsoft Purview.
+Microsoft Purview data-plane roles are managed in Microsoft Purview. After you deploy a Microsoft Purview account, the creator of the Microsoft Purview account is automatically assigned the following roles at the root collection. You can use [the Microsoft Purview governance portal](https://web.purview.azure.com/resource/) or a programmatic method to directly assign and manage roles in Microsoft Purview.
- **Collection Admins** can edit Microsoft Purview collections and their details and add subcollections. They can also add users to other Microsoft Purview roles on collections where they're admins. - **Data Source Admins** can manage data sources and data scans.
purview Concept Best Practices Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-glossary.md
For more information, see [Create, import, and export glossary terms](./how-to-c
## Recommendations for exporting glossary terms
-Exporting terms may be useful in Microsoft Purview account to account, Backup, or Disaster Recovery scenarios. Exporting terms in Microsoft Purview Studio must be done one term template at a time. Choosing terms from multiple templates will disable the "Export terms" button. As a best practice, using the "Term template" filter before bulk selecting will make the export process quick.
+Exporting terms may be useful in Microsoft Purview account-to-account migration, backup, or disaster recovery scenarios. Exporting terms in the Microsoft Purview governance portal must be done one term template at a time. Choosing terms from multiple templates will disable the "Export terms" button. As a best practice, use the "Term template" filter before bulk selecting to make the export process quick.
## Glossary Management
Exporting terms may be useful in Microsoft Purview account to account, Backup, o
- While classifications and sensitivity labels are applied to assets automatically by the system based on classification rules, glossary terms are not applied automatically. - Similar to classifications, glossary terms can be mapped to assets at the asset level or scheme level. - In Microsoft Purview, terms can be added to assets in different ways:
- - Manually, using Microsoft Purview Studio.
- - Using Bulk Edit mode to update up to 25 assets, using Microsoft Purview Studio.
+ - Manually, using the Microsoft Purview governance portal.
+ - Using Bulk Edit mode to update up to 25 assets, using the Microsoft Purview governance portal.
- Curated Code using the Atlas API. - Use Bulk Edit Mode when assigning terms manually. This feature allows a curator to assign glossary terms, owners, experts, classifications and certified in bulk based on selected items from a search result. Multiple searches can be chained by selecting objects in the results. The Bulk Edit will apply to all selected objects. Be sure to clear the selections after the bulk edit has been performed. - Other bulk edit operations can be performed by using the Atlas API. An example would be using the API to add descriptions or other custom properties to assets in bulk programmatically.
purview Concept Best Practices Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-migration.md
Organizations need a failover mechanism for their Microsoft Purview instances, s
> Not all Azure mirrored regions support deploying Microsoft Purview accounts. For example, For a DR scenario, you cannot choose to deploy a new Microsoft Purview account in Canada East if the primary region is Canada Central. Even with Customers managed DR, not all customer may able to trigger a DR. ## Implementation steps
-This section provides high level guidance on required tasks to copy assets, glossaries, classifications & relationships across regions or subscriptions either using the Microsoft Purview Studio or the REST APIs. The approach is to perform the tasks as programmatically as possible at scale.
+This section provides high-level guidance on the tasks required to copy assets, glossaries, classifications, and relationships across regions or subscriptions, either by using the Microsoft Purview governance portal or the REST APIs. The approach is to perform the tasks programmatically, at scale, wherever possible.
### High-level task outline 1. Create the new account
To complete the asset migration, you must remap the relationships. There are thr
> [!Note] > Before migrating terms, you need to migrate the term templates. This step should be already covered in the custom `typedef` migration.
-#### Using Microsoft Purview Portal
-The quickest way to migrate glossary terms is to [export terms to a .csv file](how-to-create-import-export-glossary.md). You can do this using the Microsoft Purview Studio.
+#### Using the Microsoft Purview governance portal
+The quickest way to migrate glossary terms is to [export terms to a .csv file](how-to-create-import-export-glossary.md). You can do this using the Microsoft Purview governance portal.
#### Using Microsoft Purview API To automate glossary migration, you first need to get the glossary `guid` (`glossaryGuid`) via [List Glossaries API](/rest/api/purview/catalogdataplane/glossary/list-glossaries). The `glossaryGuid` is the top/root level glossary `guid`.
purview Concept Best Practices Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-network.md
Microsoft Purview is a platform as a service (PaaS) solution for data governance. Microsoft Purview accounts have public endpoints that are accessible through the internet to connect to the service. However, all endpoints are secured through Azure Active Directory (Azure AD) logins and role-based access control (RBAC).
-For an added layer of security, you can create private endpoints for your Microsoft Purview account. You then get a private IP address from your virtual network in Azure to the Microsoft Purview account and its managed resources. This address will restrict all traffic between your virtual network and the Microsoft Purview account to a private link for user interaction with the APIs and Microsoft Purview Studio, or for scanning and ingestion.
+For an added layer of security, you can create private endpoints for your Microsoft Purview account. You then get a private IP address from your virtual network in Azure to the Microsoft Purview account and its managed resources. This address will restrict all traffic between your virtual network and the Microsoft Purview account to a private link for user interaction with the APIs and Microsoft Purview governance portal, or for scanning and ingestion.
Currently, the Microsoft Purview firewall provides access control for the public endpoint of your purview account. You can use the firewall to allow all access or to block all access through the public endpoint when using private endpoints.
By default, you can use Microsoft Purview accounts through public endpoints acce
- No private connectivity is required when scanning or connecting to Microsoft Purview endpoints. - All data sources are SaaS applications only. - All data sources have a public endpoint that's accessible through the internet. -- Business users require access to a Microsoft Purview account and Microsoft Purview Studio through the internet.
+- Business users require access to a Microsoft Purview account and the Microsoft Purview governance portal through the internet.
### Integration runtime options
You must use private endpoints for your Microsoft Purview account if you have an
### Design considerations -- To connect to your Microsoft Purview account privately and securely, you need to deploy an account and a portal private endpoint. For example, this deployment is necessary if you intend to connect to Microsoft Purview through the API or use Microsoft Purview Studio.
+- To connect to your Microsoft Purview account privately and securely, you need to deploy an account and a portal private endpoint. For example, this deployment is necessary if you intend to connect to Microsoft Purview through the API or use the Microsoft Purview governance portal.
-- If you need to connect to Microsoft Purview Studio by using private endpoints, you have to deploy both account and portal private endpoints.
+- If you need to connect to the Microsoft Purview governance portal by using private endpoints, you have to deploy both account and portal private endpoints.
- To scan data sources through private connectivity, you need to configure at least one account and one ingestion private endpoint for Microsoft Purview. You must configure scans by using a self-hosted integration runtime through an authentication method other than a Microsoft Purview managed identity.
It is recommended to follow these recommendations, if your organization needs to
:::image type="content" source="media/concept-best-practices/network-pe-dns.png" alt-text="Screenshot that shows how to handle private endpoints and DNS records for multiple Microsoft Purview accounts." lightbox="media/concept-best-practices/network-pe-dns.png":::
-This scenario also applies if multiple Microsoft Purview accounts are deployed across multiple subscriptions and multiple VNets that are connected through VNet peering. _Portal_ private endpoint mainly renders static assets related to Microsoft Purview Studio, thus, it is independent of Microsoft Purview account, therefore, only one _portal_ private endpoint is needed to visit all Microsoft Purview accounts in the Azure environment if VNets are connected.
+This scenario also applies if multiple Microsoft Purview accounts are deployed across multiple subscriptions and multiple VNets that are connected through VNet peering. The _portal_ private endpoint mainly renders static assets related to the Microsoft Purview governance portal, so it's independent of any single Microsoft Purview account. Therefore, only one _portal_ private endpoint is needed to reach all Microsoft Purview accounts in the Azure environment, as long as the VNets are connected.
:::image type="content" source="media/concept-best-practices/network-pe-dns-multi-vnet.png" alt-text="Screenshot that shows how to handle private endpoints and DNS records for multiple Microsoft Purview accounts in multiple VNets." lightbox="media/concept-best-practices/network-pe-dns-multi-vnet.png":::
purview Concept Best Practices Scanning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-scanning.md
After you register your source in the relevant [collection](./how-to-create-and-
- If a field or column, table, or a file is removed from the source system after the scan was executed, it will only be reflected (removed) in Microsoft Purview after the next scheduled full or incremental scan.
- An asset can be deleted from a Microsoft Purview catalog by using the **Delete** icon under the name of the asset. This action won't remove the object in the source. If you run a full scan on the same source, it would get reingested in the catalog. If you've scheduled a weekly or monthly scan instead (incremental), the deleted asset won't be picked up unless the object is modified at the source. An example is if a column is added or removed from the table.
-- To understand the behavior of subsequent scans after *manually* editing a data asset or an underlying schema through Microsoft Purview Studio, see [Catalog asset details](./catalog-asset-details.md#scans-on-edited-assets).
+- To understand the behavior of subsequent scans after *manually* editing a data asset or an underlying schema through the Microsoft Purview governance portal, see [Catalog asset details](./catalog-asset-details.md#scans-on-edited-assets).
- For more information, see the tutorial on [how to view, edit, and delete assets](./catalog-asset-details.md).

## Next steps
purview Concept Best Practices Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-security.md
For more information, see [Best practices related to connectivity to Azure PaaS
### Deploy private endpoints for Microsoft Purview accounts
-If you need to use Microsoft Purview from inside your private network, it is recommended to use Azure Private Link Service with your Microsoft Purview accounts for partial or [end-to-end isolation](catalog-private-link-end-to-end.md) to connect to Microsoft Purview Studio, access Microsoft Purview endpoints and to scan data sources.
+If you need to use Microsoft Purview from inside your private network, it's recommended to use Azure Private Link Service with your Microsoft Purview accounts for partial or [end-to-end isolation](catalog-private-link-end-to-end.md) to connect to the Microsoft Purview governance portal, access Microsoft Purview endpoints, and scan data sources.
The Microsoft Purview _account_ private endpoint is used to add another layer of security, so only client calls that are originated from within the virtual network are allowed to access the Microsoft Purview account. This private endpoint is also a prerequisite for the portal private endpoint.
-The Microsoft Purview _portal_ private endpoint is required to enable connectivity to Microsoft Purview Studio using a private network.
+The Microsoft Purview _portal_ private endpoint is required to enable connectivity to the Microsoft Purview governance portal over a private network.
Microsoft Purview can scan data sources in Azure or an on-premises environment by using ingestion private endpoints.
For more information, see [Microsoft Purview network architecture and best pract
You can disable Microsoft Purview public access to cut off access to the Microsoft Purview account completely from the public internet. In this case, you should consider the following requirements:

- Microsoft Purview must be deployed based on the [end-to-end network isolation scenario](catalog-private-link-end-to-end.md).
-- To access Microsoft Purview Studio and Microsoft Purview endpoints, you need to use a management machine that is connected to private network to access Microsoft Purview through private network.
+- To access the Microsoft Purview governance portal and Microsoft Purview endpoints, you need to use a management machine that's connected to your private network.
- Review [known limitations](catalog-private-link-troubleshoot.md).
- To scan Azure platform as a service data sources, review [Support matrix for scanning data sources through ingestion private endpoint](catalog-private-link.md#support-matrix-for-scanning-data-sources-through-ingestion-private-endpoint).
- Azure data sources must also be configured with private endpoints.
The following NSG rules are required on **data sources** for Microsoft Purview s
|Inbound | Self-hosted integration runtime VMs' private IP addresses or subnets | * | Data Sources private IP addresses or Subnets | 443 | Any | Allow |
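As an illustration, the inbound rule in the table above could also be created programmatically. The following is a minimal sketch using the `azure-mgmt-network` package; the resource names, CIDR ranges, and priority are placeholder assumptions:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Allow inbound 443 from the self-hosted integration runtime subnet
# to the data source subnet (all names and addresses are placeholders).
rule = client.security_rules.begin_create_or_update(
    "myResourceGroup",    # placeholder resource group
    "data-source-nsg",    # placeholder NSG attached to the data source subnet
    "AllowShirToDataSource",
    {
        "protocol": "*",
        "access": "Allow",
        "direction": "Inbound",
        "priority": 200,
        "source_address_prefix": "10.1.0.0/24",       # SHIR VMs' subnet (assumption)
        "source_port_range": "*",
        "destination_address_prefix": "10.2.0.0/24",  # data source subnet (assumption)
        "destination_port_range": "443",
    },
).result()
print(rule.provisioning_state)
```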
-The following NSG rules are required on from the **management machines** to access Microsoft Purview Studio:
+The following NSG rules are required to allow access from the **management machines** to the Microsoft Purview governance portal:
|Direction |Source |Source port range |Destination |Destination port |Protocol |Action |
||||||||
purview Concept Best Practices Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-sensitivity-labels.md
Title: Best practices for applying sensitivity labels in Microsoft Purview
+ Title: Best practices for applying sensitivity labels in the Microsoft Purview data map
description: This article provides best practices for applying sensitivity labels in Microsoft Purview.
Previously updated : 01/12/2022
Last updated : 04/21/2022

# Labeling best practices
-Microsoft Purview supports labeling structured and unstructured data stored across various data sources. Labeling data within Microsoft Purview allows users to easily find data that matches predefined autolabeling rules that were configured in the Microsoft 365 Security and Compliance Center. Microsoft Purview extends the use of Microsoft 365 sensitivity labels to assets stored in infrastructure cloud locations and structured data sources.
+The Microsoft Purview data map supports labeling structured and unstructured data stored across various data sources. Labeling data within the data map allows users to easily find data that matches predefined autolabeling rules that were configured in the Microsoft Purview compliance portal. The data map extends the use of sensitivity labels from Microsoft Purview Information Protection to assets stored in infrastructure cloud locations and structured data sources.
## Protect personal data with custom sensitivity labels for Microsoft Purview

Storing and processing personal data is subject to special protection. Labeling personal data is crucial to help you identify sensitive information. You can use the detection and labeling tasks for personal data in different stages of your workflows. Because personal data is ubiquitous and fluid in your organization, you need to define identification rules for building policies that suit your individual situation.
-## Why do you need to use labeling within Microsoft Purview?
+## Why do you need to use labeling within the data map?
-With Microsoft Purview, you can extend your organization's investment in Microsoft 365 sensitivity labels to assets that are stored in files and database columns within Azure, multicloud, and on-premises locations. These locations are defined in [supported data sources](./create-sensitivity-label.md#supported-data-sources).
-When you apply sensitivity labels to your content, you can keep your data secure by stating how sensitive certain data is in your organization. Microsoft Purview also abstracts the data itself, so you can use labels to track the type of data, without exposing sensitive data on another platform.
+With the data map, you can extend your organization's investment in sensitivity labels from Microsoft Purview Information Protection to assets that are stored in files and database columns within Azure, multicloud, and on-premises locations. These locations are defined in [supported data sources](./create-sensitivity-label.md#supported-data-sources).
+When you apply sensitivity labels to your content, you can keep your data secure by stating how sensitive certain data is in your organization. The data map also abstracts the data itself, so you can use labels to track the type of data, without exposing sensitive data on another platform.
-## Microsoft Purview labeling best practices and considerations
+## Microsoft Purview data map labeling best practices and considerations
The following sections walk you through the process of implementing labeling for your assets.

### Get started

-- To enable sensitivity labeling in Microsoft Purview, follow the steps in [Automatically apply sensitivity labels to your data in Microsoft Purview](./how-to-automatically-label-your-content.md).
-- To find information on required licensing and helpful answers to other questions, see [Sensitivity labels in Microsoft Purview FAQ](./sensitivity-labels-frequently-asked-questions.yml).
+- To enable sensitivity labeling in the data map, follow the steps in [automatically apply sensitivity labels to your data in the Microsoft Purview data map](./how-to-automatically-label-your-content.md).
+- To find information on required licensing and helpful answers to other questions, see [Sensitivity labels in the Microsoft Purview data map FAQ](./sensitivity-labels-frequently-asked-questions.yml).
### Label considerations

-- If you already have Microsoft 365 sensitivity labels in use in your environment, continue to use your existing labels. Don't make duplicate or more labels for Microsoft Purview. This approach allows you to maximize the investment you've already made in the Microsoft 365 compliance space. It also ensures consistent labeling across your data estate.
-- If you haven't created Microsoft 365 sensitivity labels, review the documentation to [get started with sensitivity labels](/microsoft-365/compliance/get-started-with-sensitivity-labels). Creating a classification schema is a tenant-wide operation. Discuss it thoroughly before you enable it within your organization.
+- If you already have sensitivity labels from Microsoft Purview Information Protection in use in your environment, continue to use your existing labels. Don't create duplicate or additional labels for the data map. This approach allows you to maximize the investment you've already made in Microsoft Purview. It also ensures consistent labeling across your data estate.
+- If you haven't created sensitivity labels in Microsoft Purview Information Protection, review the documentation to [get started with sensitivity labels](/microsoft-365/compliance/get-started-with-sensitivity-labels). Creating a classification schema is a tenant-wide operation. Discuss it thoroughly before you enable it within your organization.
### Label recommendations

-- When you configure sensitivity labels for Microsoft Purview, you might define autolabeling rules for files, database columns, or both within the label properties. Microsoft Purview labels files within the Microsoft Purview data map. When the autolabeling rule is configured, Microsoft Purview automatically applies the label or recommends that the label is applied.
+- When you configure sensitivity labels for the Microsoft Purview data map, you might define autolabeling rules for files, database columns, or both within the label properties. Microsoft Purview labels files within the Microsoft Purview data map. When the autolabeling rule is configured, Microsoft Purview automatically applies the label or recommends that the label is applied.
> [!WARNING]
> If you haven't configured autolabeling for files and emails on your sensitivity labels, users might be affected within your Office and Microsoft 365 environment. You can test autolabeling on database columns without affecting users.

-- If you're defining new autolabeling rules for files when you configure labels for Microsoft Purview, make sure that you have the condition for applying the label set appropriately.
+- If you're defining new autolabeling rules for files when you configure labels for the Microsoft Purview data map, make sure that you have the condition for applying the label set appropriately.
- You can set the detection criteria to **All of these** or **Any of these** in the upper right of the autolabeling for files and emails page of the label properties.
- The default setting for detection criteria is **All of these**. This setting means that the asset must contain all the specified sensitive information types for the label to be applied. While the default setting might be valid in some instances, many customers want to use **Any of these**. Then if at least one asset is found, the label is applied.

:::image type="content" source="media/concept-best-practices/label-detection-criteria.png" alt-text="Screenshot that shows detection criteria for a label.":::

> [!NOTE]
- > Microsoft 365 trainable classifiers aren't used by Microsoft Purview.
+ > Microsoft Purview Information Protection trainable classifiers aren't used by the Microsoft Purview data map.
- Maintain consistency in labeling across your data estate. If you use autolabeling rules for files, use the same sensitive information types for autolabeling database columns.
-- [Define your sensitivity labels via Microsoft Information Protection to identify your personal data at a central place](/microsoft-365/compliance/information-protection).
+- [Define your sensitivity labels via Microsoft Purview Information Protection to identify your personal data at a central place](/microsoft-365/compliance/information-protection).
- [Use policy templates as a starting point to build your rule sets](/microsoft-365/compliance/what-the-dlp-policy-templates-include#general-data-protection-regulation-gdpr).
- [Combine data classifications to an individual rule set](./supported-classifications.md).
- [Force labeling by using autolabel functionality](./how-to-automatically-label-your-content.md).
- Build groups of sensitivity labels and store them as a dedicated sensitivity label policy. For example, store all required sensitivity labels for regulatory rules by using the same sensitivity label policy to publish.
- Capture all test cases for your labels. Test your label policies with all applications you want to secure.
-- Promote sensitivity label policies to Microsoft Purview.
-- Run test scans from Microsoft Purview on different data sources like hybrid cloud and on-premises to identify sensitivity labels.
+- Promote sensitivity label policies to the Microsoft Purview data map.
+- Run test scans from the Microsoft Purview data map on different data sources like hybrid cloud and on-premises to identify sensitivity labels.
- Gather and consider insights, for example, by using Microsoft Purview Insights. Use alerting mechanisms to mitigate potential breaches of regulations.
-By using sensitivity labels with Microsoft Purview, you can extend Microsoft Information Protection beyond the border of your Microsoft data estate to your on-premises, hybrid cloud, multicloud, and software as a service (SaaS) scenarios.
+By using sensitivity labels with the Microsoft Purview data map, you can extend Microsoft Purview Information Protection beyond the border of your Microsoft data estate to your on-premises, hybrid cloud, multicloud, and software as a service (SaaS) scenarios.
## Next steps
purview Concept Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-classification.md
Data classification, in the context of Microsoft Purview, is a way of categorizi
When you classify data assets, you make them easier to understand, search, and govern. Classifying data assets also helps you understand the risks associated with them. This in turn can help you implement measures to protect sensitive or important data from ungoverned proliferation and unauthorized access across the data estate.
-Microsoft Purview provides an automated classification capability while you scan your data sources. You get more than 200+ built-in system classifications and the ability to create custom classifications for your data. You can classify assets automatically when they're configured as part of a scan, or you can edit them manually in Microsoft Purview Studio after they're scanned and ingested.
+Microsoft Purview provides an automated classification capability while you scan your data sources. You get more than 200 built-in system classifications and the ability to create custom classifications for your data. You can classify assets automatically when they're configured as part of a scan, or you can edit them manually in the Microsoft Purview governance portal after they're scanned and ingested.
## Use of classification
Custom classification rules can be based on a *regular expression* pattern or *d
Let's say that the *Employee ID* column follows the EMPLOYEE{GUID} pattern (for example, EMPLOYEE9c55c474-9996-420c-a285-0d0fc23f1f55). You can create your own custom classification by using a regular expression, such as `^Employee[A-Za-z0-9]{8}-[A-Za-z0-9]{4}-[A-Za-z0-9]{4}-[A-Za-z0-9]{4}-[A-Za-z0-9]{12}$` (a quick way to test such a pattern locally is sketched after the note below).

> [!NOTE]
-> Sensitivity labels are different from classifications. Sensitivity labels categorize assets in the context of data security and privacy, such as *Highly Confidential*, *Restricted*, *Public*, and so on. To use sensitivity labels in Microsoft Purview, you'll need at least one Microsoft 365 license or account within the same Azure Active Directory (Azure AD) tenant as your Microsoft Purview account. For more information about the differences between sensitivity labels and classifications, see [Sensitivity labels in Microsoft Purview FAQ](sensitivity-labels-frequently-asked-questions.yml#what-is-the-difference-between-classifications-and-sensitivity-labels-in-microsoft-purview).
+> Sensitivity labels are different from classifications. Sensitivity labels categorize assets in the context of data security and privacy, such as *Highly Confidential*, *Restricted*, *Public*, and so on. To use sensitivity labels in the Microsoft Purview data map, you'll need at least one Microsoft 365 license or account within the same Azure Active Directory (Azure AD) tenant as your Microsoft Purview data map. For more information about the differences between sensitivity labels and classifications, see [Sensitivity labels in Microsoft Purview FAQ](sensitivity-labels-frequently-asked-questions.yml#what-is-the-difference-between-classifications-and-sensitivity-labels).
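A quick local test of a candidate pattern can catch escaping mistakes before you create the classification rule. This is a minimal Python sketch; the `re.IGNORECASE` flag is an assumption, added because the sample value uses uppercase `EMPLOYEE` while the pattern spells `Employee`:

```python
import re

# Pattern from the example above, with the markdown escapes removed.
pattern = re.compile(
    r"^Employee[A-Za-z0-9]{8}-[A-Za-z0-9]{4}-[A-Za-z0-9]{4}"
    r"-[A-Za-z0-9]{4}-[A-Za-z0-9]{12}$",
    re.IGNORECASE,  # assumption: the sample value is uppercase EMPLOYEE
)

print(bool(pattern.match("EMPLOYEE9c55c474-9996-420c-a285-0d0fc23f1f55")))  # True
print(bool(pattern.match("EMP-12345")))                                     # False
```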
## Next steps

* [Read about classification best practices](concept-best-practices-classification.md)
* [Create custom classifications](create-a-custom-classification-and-classification-rule.md)
* [Apply classifications](apply-classifications.md)
-* [Use the Microsoft Purview Studio](use-azure-purview-studio.md)
+* [Use the Microsoft Purview governance portal](use-azure-purview-studio.md)
purview Concept Data Lineage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-data-lineage.md
Lineage is a critical feature of the Microsoft Purview Data Catalog to support q
* [Quickstart: Create a Microsoft Purview account in the Azure portal](create-catalog-portal.md)
* [Quickstart: Create a Microsoft Purview account using Azure PowerShell/Azure CLI](create-catalog-powershell.md)
-* [Use the Microsoft Purview Studio](use-azure-purview-studio.md)
+* [Use the Microsoft Purview governance portal](use-azure-purview-studio.md)
purview Concept Data Owner Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-data-owner-policies.md
Last updated 03/20/2022
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-This article discusses concepts related to managing access to data sources in your data estate from within Microsoft Purview Studio.
+This article discusses concepts related to managing access to data sources in your data estate from within the Microsoft Purview governance portal.
> [!Note]
> This capability is different from access control for Microsoft Purview itself, which is described in [Access control in Microsoft Purview](catalog-permissions.md).
This article discusses concepts related to managing access to data sources in yo
Access policies in Microsoft Purview enable you to manage access to different data systems across your entire data estate. For example:
-A user needs read access to an Azure Storage account that has been registered in Microsoft Purview. You can grant this access directly in Microsoft Purview by creating a data access policy through the **Policy management** app in Microsoft Purview Studio.
+A user needs read access to an Azure Storage account that has been registered in Microsoft Purview. You can grant this access directly in Microsoft Purview by creating a data access policy through the **Policy management** app in the Microsoft Purview governance portal.
Data access policies can be enforced through Microsoft Purview on data systems that have been registered for policy.
purview Concept Default Purview Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-default-purview-account.md
Having multiple Microsoft Purview accounts in a tenant now poses the challenge o
* Changing the default account is a two-step process. First, set the flag to 'No' on the current default Microsoft Purview account, and then set the flag to 'Yes' on the new Microsoft Purview account.
-* Setting up default account is a control plane operation and hence Microsoft Purview studio will not have any changes if an account is defined as default. However, in the studio you can see the account name is appended with "(default)" for the default Microsoft Purview account.
+* Setting the default account is a control plane operation, so the Microsoft Purview governance portal won't change when an account is defined as the default. However, in the portal you can see that the account name is appended with "(default)" for the default Microsoft Purview account.
## Next steps
purview Concept Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-workflow.md
Currently, there are two kinds of workflows:
* **Data governance** - for data policy, access governance, and loss prevention. [Scoped](#workflow-scope) at the collection level.
* **Data catalog** - to manage approvals for CUD (create, update, delete) operations for glossary terms. [Scoped](#workflow-scope) at the glossary level.
-These workflows can be built from pre-established [workflow templates](#workflow-templates) provided in the Microsoft Purview studio, but are fully customizable using the available workflow connectors.
+These workflows can be built from pre-established [workflow templates](#workflow-templates) provided in the Microsoft Purview governance portal, but are fully customizable using the available workflow connectors.
## Workflow templates
purview Create A Scan Rule Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-a-scan-rule-set.md
A scan rule set is a container for grouping a set of scan rules together so that
To create a scan rule set:
-1. From your Azure [Microsoft Purview Studio](https://web.purview.azure.com/resource/), select **Data Map**.
+1. From the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/), select **Data Map**.
1. Select **Scan rule sets** from the left pane, and then select **New**.
purview Create Azure Purview Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-azure-purview-dotnet.md
The above code will print 'True' if the name is available and 'False' if the name is not available.
The code in this tutorial creates a Microsoft Purview account, deletes a Microsoft Purview account, and checks name availability for a Microsoft Purview account. You can now download the .NET SDK and learn about other resource provider actions you can perform for a Microsoft Purview account.
-Follow these next articles to learn how to navigate the Microsoft Purview Studio, create a collection, and grant access to Microsoft Purview.
+Follow these next articles to learn how to navigate the Microsoft Purview governance portal, create a collection, and grant access to Microsoft Purview.
-* [How to use the Microsoft Purview Studio](use-azure-purview-studio.md)
+* [How to use the Microsoft Purview governance portal](use-azure-purview-studio.md)
* [Create a collection](quickstart-create-collection.md)
* [Add users to your Microsoft Purview account](catalog-permissions.md)
purview Create Azure Purview Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-azure-purview-python.md
pa = purview_client.accounts.begin_delete(rg_name, purview_name).result()
The code in this tutorial creates a Microsoft Purview account and deletes a Microsoft Purview account. You can now download the Python SDK and learn about other resource provider actions you can perform for a Microsoft Purview account.
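For context, a self-contained version of the delete call shown above might look like the following minimal sketch, assuming the `azure-mgmt-purview` and `azure-identity` packages and placeholder resource names:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.purview import PurviewManagementClient

rg_name = "myResourceGroup"       # placeholder resource group
purview_name = "contoso-purview"  # placeholder account name

purview_client = PurviewManagementClient(
    DefaultAzureCredential(),
    "<subscription-id>",          # placeholder subscription ID
)

# begin_delete returns a poller; .result() blocks until the delete finishes.
pa = purview_client.accounts.begin_delete(rg_name, purview_name).result()
print("Deleted", purview_name)
```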
-Follow these next articles to learn how to navigate the Microsoft Purview Studio, create a collection, and grant access to Microsoft Purview.
+Follow these next articles to learn how to navigate the Microsoft Purview governance portal, create a collection, and grant access to Microsoft Purview.
-* [How to use the Microsoft Purview Studio](use-azure-purview-studio.md)
+* [How to use the Microsoft Purview governance portal](use-azure-purview-studio.md)
* [Create a collection](quickstart-create-collection.md)
* [Add users to your Microsoft Purview account](catalog-permissions.md)
purview Create Catalog Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-catalog-portal.md
For more information about Microsoft Purview, [see our overview page](overview.m
:::image type="content" source="media/create-catalog-portal/create-resource.png" alt-text="Screenshot showing the Create Microsoft Purview account screen with the Review + Create button highlighted":::
-## Open Microsoft Purview Studio
+## Open the Microsoft Purview governance portal
-After your Microsoft Purview account is created, you'll use the Microsoft Purview Studio to access and manage it. There are two ways to open Microsoft Purview Studio:
+After your Microsoft Purview account is created, you'll use the Microsoft Purview governance portal to access and manage it. There are two ways to open the Microsoft Purview governance portal:
-* Open your Microsoft Purview account in the [Azure portal](https://portal.azure.com). Select the "Open Microsoft Purview Studio" tile on the overview page.
- :::image type="content" source="media/create-catalog-portal/open-purview-studio.png" alt-text="Screenshot showing the Microsoft Purview account overview page, with the Microsoft Purview Studio tile highlighted.":::
+* Open your Microsoft Purview account in the [Azure portal](https://portal.azure.com). Select the "Open Microsoft Purview governance portal" tile on the overview page.
+ :::image type="content" source="media/create-catalog-portal/open-purview-studio.png" alt-text="Screenshot showing the Microsoft Purview account overview page, with the Microsoft Purview governance portal tile highlighted.":::
* Alternatively, you can browse to [https://web.purview.azure.com](https://web.purview.azure.com), select your Microsoft Purview account, and sign in to your workspace.

## Next steps
-In this quickstart, you learned how to create a Microsoft Purview account and how to access it through the Microsoft Purview Studio.
+In this quickstart, you learned how to create a Microsoft Purview account and how to access it through the Microsoft Purview governance portal.
Next, you can create a user-assigned managed identity (UAMI) that will enable your new Microsoft Purview account to authenticate directly with resources using Azure Active Directory (Azure AD) authentication. To create a UAMI, follow our [guide to create a user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity).
-Follow these next articles to learn how to navigate the Microsoft Purview Studio, create a collection, and grant access to Microsoft Purview:
+Follow these next articles to learn how to navigate the Microsoft Purview governance portal, create a collection, and grant access to Microsoft Purview:
-* [Using the Microsoft Purview Studio](use-azure-purview-studio.md)
+* [Using the Microsoft Purview governance portal](use-azure-purview-studio.md)
* [Create a collection](quickstart-create-collection.md)
* [Add users to your Microsoft Purview account](catalog-permissions.md)
purview Create Catalog Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-catalog-powershell.md
For more information about Microsoft Purview, [see our overview page](overview.m
az purview account add-root-collection-admin --account-name [Microsoft Purview Account Name] --resource-group [Resource Group Name] --object-id [User Object Id]
```
- This command will grant the user account [collection admin](catalog-permissions.md#roles) permissions on the root collection in your Microsoft Purview account. This allows the user to access the Microsoft Purview Studio and add permission for other users. For more information about permissions in Microsoft Purview, see our [permissions guide](catalog-permissions.md). For more information about collections, see our [manage collections article](how-to-create-and-manage-collections.md).
+ This command will grant the user account [collection admin](catalog-permissions.md#roles) permissions on the root collection in your Microsoft Purview account. This allows the user to access the Microsoft Purview governance portal and add permission for other users. For more information about permissions in Microsoft Purview, see our [permissions guide](catalog-permissions.md). For more information about collections, see our [manage collections article](how-to-create-and-manage-collections.md).
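If you prefer to script this step without the Azure CLI, the same operation appears to be exposed on the control plane as an `addRootCollectionAdmin` action. The following Python sketch is an assumption-laden example; verify the endpoint and API version against the Microsoft.Purview REST reference before relying on it:

```python
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"  # placeholder
resource_group = "myResourceGroup"     # placeholder
account_name = "contoso-purview"       # placeholder
user_object_id = "<user-object-id>"    # placeholder Azure AD object ID

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# Accounts - Add Root Collection Admin (endpoint and api-version are assumptions).
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.Purview"
    f"/accounts/{account_name}/addRootCollectionAdmin?api-version=2021-07-01"
)
response = requests.post(
    url,
    headers={"Authorization": f"Bearer {token}"},
    json={"objectId": user_object_id},
)
response.raise_for_status()
```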
## Next steps

In this quickstart, you learned how to create a Microsoft Purview account.
-Follow these next articles to learn how to navigate the Microsoft Purview Studio, create a collection, and grant access to Microsoft Purview.
+Follow these next articles to learn how to navigate the Microsoft Purview governance portal, create a collection, and grant access to Microsoft Purview.
-* [How to use the Microsoft Purview Studio](use-azure-purview-studio.md)
+* [How to use the Microsoft Purview governance portal](use-azure-purview-studio.md)
* [Add users to your Microsoft Purview account](catalog-permissions.md)
* [Create a collection](quickstart-create-collection.md)
purview Create Sensitivity Label https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-sensitivity-label.md
Title: Labeling in Microsoft Purview
+ Title: Labeling in the Microsoft Purview data map
description: Start utilizing sensitivity labels and classifications to enhance your Microsoft Purview assets
Last updated : 09/27/2021
-# Labeling in Microsoft Purview
+
+# Labeling in the Microsoft Purview data map
> [!IMPORTANT]
-> Microsoft Purview Sensitivity Labels are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Labeling in the Microsoft Purview data map is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
> To get work done, people in your organization collaborate with others both inside and outside the organization. Data doesn't always stay in your cloud, and often roams everywhere, across devices, apps, and services. When your data roams, you still want it to be secure in a way that meets your organization's business and compliance policies.</br>
For example, applying a sensitivity label ‘highly confidential’ to a documen
Microsoft Purview allows you to apply sensitivity labels to assets, enabling you to classify and protect your data.
-* **Label travels with the data:** The sensitivity labels created in Microsoft 365 can also be extended to Microsoft Purview, SharePoint, Teams, Power BI, and SQL. When you apply a label on an office document and then scan it in Microsoft Purview, the label will flow to Microsoft Purview. While the label is applied to the actual file in M365, it is only added as metadata in the Microsoft Purview catalog. While there are differences in how a label is applied to an asset across various services/applications, labels travel with the data and is recognized by all the services you extend it to.
-* **Overview of your data estate:** Microsoft Purview provides insights into your data through pre-canned reports. When you scan data in Microsoft Purview, we hydrate the reports with information on what assets you have, scan history, classifications found in your data, labels applied, glossary terms, etc.
+* **Label travels with the data:** The sensitivity labels created in Microsoft Purview Information Protection can also be extended to the Microsoft Purview data map, SharePoint, Teams, Power BI, and SQL. When you apply a label on an Office document and then scan it into the Microsoft Purview data map, the label will be applied to the data asset. While the label is applied to the actual file in Microsoft Purview Information Protection, it's only added as metadata in the Microsoft Purview data map. While there are differences in how a label is applied to an asset across various services/applications, labels travel with the data and are recognized by all the services you extend them to.
+* **Overview of your data estate:** Microsoft Purview provides insights into your data through pre-canned reports. When you scan data into the Microsoft Purview data map, we hydrate the reports with information on what assets you have, scan history, classifications found in your data, labels applied, glossary terms, etc.
* **Automatic labeling:** Labels can be applied automatically based on sensitivity of the data. When an asset is scanned for sensitive data, autolabeling rules are used to decide which sensitivity label to apply. You can create autolabeling rules for each sensitivity label, defining which classification/sensitive information type constitutes a label.
* **Apply labels to files and database columns:** Labels can be applied to files in storage like Azure Data Lake, Azure Files, etc. and to schematized data like columns in Azure SQL DB, Cosmos DB, etc.

Sensitivity labels are tags that you can apply on assets to classify and protect your data. Learn more about [sensitivity labels here](/microsoft-365/compliance/create-sensitivity-labels).
-## How to apply labels to assets in Microsoft Purview
+## How to apply labels to assets in the Microsoft Purview data map
:::image type="content" source="media/create-sensitivity-label/apply-label-flow.png" alt-text="Applying labels to assets in Microsoft Purview flow. Create labels, register asset, scan asset, classifications found, labels applied.":::
-Being able to apply labels to your asset in Microsoft Purview requires you to perform the following steps:
+Being able to apply labels to your asset in the data map requires you to perform the following steps:
-1. [Create or extend existing sensitivity labels to Microsoft Purview](how-to-automatically-label-your-content.md), in the Microsoft 365 compliance center. Creating sensitivity labels include autolabeling rules that tell us which label should be applied based on the classifications found in your data.
-1. [Register and scan your asset](how-to-automatically-label-your-content.md#scan-your-data-to-apply-sensitivity-labels-automatically) in Microsoft Purview.
-1. Microsoft Purview applies classifications: When you schedule a scan on an asset, Microsoft Purview scans the type of data in your asset and applies classifications to it in the data catalog. Application of classifications is done automatically by Microsoft Purview, there is no action for you.
-1. Microsoft Purview applies labels: Once classifications are found on an asset, Microsoft Purview will apply labels to the assets depending on autolabeling rules. Application of labels is done automatically by Microsoft Purview, there is no action for you as long as you have created labels with autolabeling rules in step 1.
+1. [Create new or apply existing sensitivity labels](how-to-automatically-label-your-content.md) in the Microsoft Purview compliance portal. Creating sensitivity labels include autolabeling rules that tell us which label should be applied based on the classifications found in your data.
+1. [Register and scan your asset](how-to-automatically-label-your-content.md#scan-your-data-to-apply-sensitivity-labels-automatically) in the Microsoft Purview data map.
+1. Microsoft Purview applies **classifications**: When you schedule a scan on an asset, Microsoft Purview scans the type of data in your asset and applies classifications to it in the data map. Application of classifications is done automatically by Microsoft Purview; there's no action needed from you.
+1. Microsoft Purview applies **labels**: Once classifications are found on an asset, Microsoft Purview will apply labels to the assets depending on autolabeling rules. Application of labels is done automatically by Microsoft Purview; there's no action needed from you as long as you've created labels with autolabeling rules in step 1.
> [!NOTE]
> Autolabeling rules are conditions that you specify, stating when a particular label should be applied. When these conditions are met, the label is automatically assigned to the data. When you create your labels, make sure to define autolabeling rules for both files and database columns to apply your labels automatically with each scan.
Being able to apply labels to your asset in Microsoft Purview requires you to pe
## Supported data sources
-Sensitivity labels are supported in Microsoft Purview for the following data sources:
+Sensitivity labels are supported in the Microsoft Purview data map for the following data sources:
|Data type |Sources |
|||
Sensitivity labels are supported in Microsoft Purview for the following data sou
## Labeling for SQL databases
-In addition to Microsoft Purview labeling for schematized data assets, Microsoft also supports labeling for SQL database columns using the SQL data classification in [SQL Server Management Studio (SSMS)](/sql/ssms/sql-server-management-studio-ssms). While Microsoft Purview uses the global [sensitivity labels](/microsoft-365/compliance/sensitivity-labels), SSMS only uses labels defined locally.
+In addition to labeling for schematized data assets, the Microsoft Purview data map also supports labeling for SQL database columns using the SQL data classification in [SQL Server Management Studio (SSMS)](/sql/ssms/sql-server-management-studio-ssms). While Microsoft Purview uses the global [sensitivity labels](/microsoft-365/compliance/sensitivity-labels), SSMS only uses labels defined locally.
-Labeling in Microsoft Purview and labeling in SSMS are separate processes that do not currently interact with each other. Therefore, **labels applied in SSMS are not shown in Microsoft Purview, and vice versa**. We recommend Microsoft Purview for labeling SQL databases, as it uses global MIP labels that can be applied across multiple platforms.
+Labeling in Microsoft Purview and labeling in SSMS are separate processes that don't currently interact with each other. Therefore, **labels applied in SSMS are not shown in Microsoft Purview, and vice versa**. We recommend Microsoft Purview for labeling SQL databases, as it uses global MIP labels that can be applied across multiple platforms.
For more information, see the [SQL data discovery and classification documentation](/sql/relational-databases/security/sql-data-discovery-and-classification). </br></br>
+## Next steps
+ > [!div class="nextstepaction"]
+ > [How to automatically label your content](./how-to-automatically-label-your-content.md)
purview Glossary Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/glossary-insights.md
In Microsoft Purview, you can create glossary terms and attach them to assets. L
1. Go to the **Microsoft Purview** [instance screen in the Azure portal](https://aka.ms/purviewportal) and select your Microsoft Purview account.
-1. On the **Overview** page, in the **Get Started** section, select **Open Microsoft Purview Studio** account tile.
+1. On the **Overview** page, in the **Get Started** section, select **Open Microsoft Purview governance portal** account tile.
:::image type="content" source="./media/glossary-insights/portal-access.png" alt-text="Launch Microsoft Purview from the Azure portal":::
purview How To Automatically Label Your Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-automatically-label-your-content.md
Title: How to automatically apply sensitivity labels to your data in Microsoft Purview
+ Title: How to automatically apply sensitivity labels to your data in the Microsoft Purview data map
description: Learn how to create sensitivity labels and automatically apply them to your data during a scan.
Previously updated : 09/27/2021
Last updated : 04/21/2021
-# How to automatically apply sensitivity labels to your data in Microsoft Purview
+# How to automatically apply sensitivity labels to your data in the Microsoft Purview data map
-## Create or extend existing sensitivity labels to Microsoft Purview
+## Create new or apply existing sensitivity labels in the data map
> [!IMPORTANT]
-> Microsoft Purview Sensitivity Labels are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Labeling in the Microsoft Purview data map is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
>
-If you don't already have sensitivity labels, you'll need to create them and make them available for Microsoft Purview. Existing sensitivity labels can also be modified to make them available for Microsoft Purview.
+If you don't already have sensitivity labels, you'll need to create them and make them available for the Microsoft Purview data map. Existing sensitivity labels from Microsoft Purview Information Protection can also be modified to make them available to the data map.
### Step 1: Licensing requirements
-Sensitivity labels are created and managed in the Microsoft 365 compliance center. To create sensitivity labels for use in Microsoft Purview, you must have an active Microsoft 365 license which offers the benefit of automatically applying sensitivity labels.
+Sensitivity labels are created and managed in the Microsoft Purview compliance portal. To create sensitivity labels for use through Microsoft Purview, you must have an active Microsoft 365 license that offers the benefit of automatically applying sensitivity labels.
-For the full list of licenses, see the [Sensitivity labels in Microsoft Purview FAQ](sensitivity-labels-frequently-asked-questions.yml). If you do not already have the required license, you can sign up for a trial of [Microsoft 365 E5](https://www.microsoft.com/microsoft-365/business/compliance-solutions#midpagectaregion).
+For the full list of licenses, see the [Sensitivity labels in Microsoft Purview FAQ](sensitivity-labels-frequently-asked-questions.yml). If you don't already have the required license, you can sign up for a trial of [Microsoft 365 E5](https://www.microsoft.com/microsoft-365/business/compliance-solutions#midpagectaregion).
-### Step 2: Consent to use sensitivity labels in Microsoft Purview
+### Step 2: Consent to use sensitivity labels in the Microsoft Purview data map
-The following steps extend your sensitivity labels and enable them to be available for use in Microsoft Purview, where you can apply sensitivity labels to files and database columns.
+The following steps extend your existing sensitivity labels and enable them to be available for use in the data map, where you can apply sensitivity labels to files and database columns.
-1. In Microsoft 365, navigate to the **Information Protection** page.</br>
+1. In the Microsoft Purview compliance portal, navigate to the **Information Protection** page.</br>
If you've recently provisioned your subscription for Information Protection, it may take a few hours for the **Information Protection** page to display.
-1. In the **Extend labeling to assets in Microsoft Purview** area, select the **Turn on** button, and then select **Yes** in the confirmation dialog that appears.
+1. In the **Extend labeling to assets in the Microsoft Purview Data Map** area, select the **Turn on** button, and then select **Yes** in the confirmation dialog that appears.
For example:
For example:
:::image type="content" source="media/how-to-automatically-label-your-content/extend-sensitivity-labels-to-purview-confirmation-small.png" alt-text="Confirm the choice to extend sensitivity labels to Microsoft Purview" lightbox="media/how-to-automatically-label-your-content/extend-sensitivity-labels-to-purview-confirmation.png":::

> [!TIP]
->If you don't see the button, and you're not sure if consent has been granted to extend labeling to assets in Microsoft Purview, see [this FAQ](sensitivity-labels-frequently-asked-questions.yml#how-can-i-determine-if-consent-has-been-granted-to-extend-labeling-to-microsoft-purview) item on how to determine the status.
+>If you don't see the button, and you're not sure if consent has been granted to extend labeling to assets in the Microsoft Purview Data Map, see [this FAQ](sensitivity-labels-frequently-asked-questions.yml#how-can-i-determine-if-consent-has-been-granted-to-extend-labeling-to-the-microsoft-purview-data-map) item on how to determine the status.
>
-After you've extended labeling to assets in Microsoft Purview, all published sensitivity labels are available for use in Microsoft Purview.
+After you've extended labeling to assets in the Microsoft Purview data map, all published sensitivity labels are available for use in the data map.
### Step 3: Create or modify existing label to automatically label content

**To create new sensitivity labels or modify existing labels**:
-1. Open the [Microsoft 365 compliance center](https://compliance.microsoft.com/).
+1. Open the [Microsoft Purview compliance portal](https://compliance.microsoft.com/).
1. Under **Solutions**, select **Information protection**, then select **Create a label**.
After you've extended labeling to assets in Microsoft Purview, all published sen
1. Name the label. Then, under **Define the scope for this label**:
   - In all cases, select **Schematized data assets**.
- - To label files, also select **Files & emails**. This option is not required to label schematized data assets only
+ - To label files, also select **Files & emails**. This option isn't required if you're labeling schematized data assets only.
:::image type="content" source="media/how-to-automatically-label-your-content/create-label-scope-small.png" alt-text="Automatically label in the Microsoft 365 compliance center" lightbox="media/how-to-automatically-label-your-content/create-label-scope.png":::
After you've extended labeling to assets in Microsoft Purview, all published sen
To change the order of a label, select **...** > **More actions** > **Move up** or **Move down**.
- For more information, see [Label priority (order matters)](/microsoft-365/compliance/sensitivity-labels#label-priority-order-matters) in the Microsoft 365 documentation.
+ For more information, see the documentation for [label priority (order matters)](/microsoft-365/compliance/sensitivity-labels#label-priority-order-matters).
#### Autolabeling for files
For example:
:::image type="content" source="media/how-to-automatically-label-your-content/create-auto-labeling-rules-files-small.png" alt-text="Define auto-labeling rules for files in the Microsoft 365 compliance center" lightbox="media/how-to-automatically-label-your-content/create-auto-labeling-rules-files.png":::
-For more information, see [Apply a sensitivity label to data automatically](/microsoft-365/compliance/apply-sensitivity-label-automatically#how-to-configure-auto-labeling-for-office-apps) in the Microsoft 365 documentation.
+For more information, see the documentation to [apply a sensitivity label to data automatically](/microsoft-365/compliance/apply-sensitivity-label-automatically#how-to-configure-auto-labeling-for-office-apps).
#### Autolabeling for schematized data assets
For example:
### Step 4: Publish labels
-Once you create a label, you will need to Scan your data in Microsoft Purview to automatically apply the labels you've created, based on the autolabeling rules you've defined.
+Once you create a label, you'll need to scan your data in the Microsoft Purview data map to automatically apply the labels you've created, based on the autolabeling rules you've defined.
## Scan your data to apply sensitivity labels automatically
-Scan your data in Microsoft Purview to automatically apply the labels you've created, based on the autolabeling rules you've defined. Allow up to 24 hours for sensitivity label changes to reflect in Microsoft Purview.
+Scan your data in the data map to automatically apply the labels you've created, based on the autolabeling rules you've defined. Allow up to 24 hours for sensitivity label changes to reflect in the data map.
-For more information on how to set up scans on various assets in Microsoft Purview, see:
+For more information on how to set up scans on various assets in the Microsoft Purview data map, see:
|Source |Reference |
|||
|**Files within Storage** | [Register and Scan Azure Blob Storage](register-scan-azure-blob-storage-source.md) </br> [Register and scan Azure Files](register-scan-azure-files-storage-source.md) </br> [Register and scan Azure Data Lake Storage Gen1](register-scan-adls-gen1.md) </br> [Register and scan Azure Data Lake Storage Gen2](register-scan-adls-gen2.md) </br> [Register and scan Amazon S3](register-scan-amazon-s3.md) |
-|**database columns** | [Register and scan an Azure SQL Database](register-scan-azure-sql-database.md) </br>[Register and scan an Azure SQL Database Managed Instance](register-scan-azure-sql-database-managed-instance.md) </br> [Register and scan Dedicated SQL pools](register-scan-azure-synapse-analytics.md)</br> [Register and scan Azure Synapse Analytics workspaces](register-scan-azure-synapse-analytics.md) </br> [Register and scan Azure Cosmos Database (SQL API)](register-scan-azure-cosmos-database.md) </br> [Register and scan an Azure MySQL database](register-scan-azure-mysql-database.md) </br> [Register and scan an Azure database for PostgreSQL](register-scan-azure-postgresql.md) |
+|**Database columns** | [Register and scan an Azure SQL Database](register-scan-azure-sql-database.md) </br>[Register and scan an Azure SQL Managed Instance](register-scan-azure-sql-database-managed-instance.md) </br> [Register and scan Dedicated SQL pools](register-scan-azure-synapse-analytics.md)</br> [Register and scan Azure Synapse Analytics workspaces](register-scan-azure-synapse-analytics.md) </br> [Register and scan Azure Cosmos Database (SQL API)](register-scan-azure-cosmos-database.md) </br> [Register and scan an Azure MySQL database](register-scan-azure-mysql-database.md) </br> [Register and scan an Azure database for PostgreSQL](register-scan-azure-postgresql.md) |
| | |

## View labels on assets in the catalog
-Once you've defined autolabeling rules for your labels in Microsoft 365 and scanned your data in Microsoft Purview, labels are automatically applied to your assets.
+Once you've defined autolabeling rules for your labels in the Microsoft Purview compliance portal and scanned your data in the data map, labels are automatically applied to your assets in the data map.
-**To view the labels applied to your assets in the Microsoft Purview Catalog:**
+**To view the labels applied to your assets in the Microsoft Purview catalog:**
-In the Microsoft Purview Catalog, use the **Label** filtering options to show assets with specific labels only. For example:
+In the Microsoft Purview catalog, use the **Label** filtering options to show assets with specific labels only. For example:
:::image type="content" source="media/how-to-automatically-label-your-content/filter-search-results-small.png" alt-text="Search for assets by label" lightbox="media/how-to-automatically-label-your-content/filter-search-results.png":::
-To view details of an asset including classifications found and label applied, click on the asset in the results.
+To view details of an asset including classifications found and label applied, select the asset in the results.
For example:
For example:
## View Insight reports for the classifications and sensitivity labels
-Find insights on your classified and labeled data in Microsoft Purview use the **Classification** and **Sensitivity labeling** reports.
+Find insights on your classified and labeled data in the Microsoft Purview data map by using the **Classification** and **Sensitivity labeling** reports.
> [!div class="nextstepaction"]
> [Classification insights](./classification-insights.md)
purview How To Workflow Business Terms Approval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-business-terms-approval.md
This guide will take you through the creation and management of approval workflo
## Create and enable a new approval workflow for business terms
-1. Sign in to the [Microsoft Purview Studio](https://web.purview.azure.com/resource/) and select the Management center. You'll see three new icons in the table of contents.
+1. Sign in to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/) and select the Management center. You'll see three new icons in the table of contents.
:::image type="content" source="./media/how-to-workflow-business-terms-approval/workflow-section.png" alt-text="Screenshot showing the management center left menu with the new workflow section highlighted.":::
purview How To Workflow Manage Requests Approvals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-manage-requests-approvals.md
This article outlines how to manage requests and approvals that are generated by a [workflow](concept-workflow.md) in Microsoft Purview.
-To view requests you've made or request for approvals that have been sent to you by a workflow instance, navigate to management center in the [Microsoft Purview Studio](https://web.purview.azure.com/resource/), and select **Requests and Approvals**.
+To view requests you've made, or requests for approval that have been sent to you by a workflow instance, navigate to the management center in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/), and select **Requests and Approvals**.
:::image type="content" source="./media/how-to-workflow-manage-requests-approval/select-requests-and-approvals.png" alt-text="Screenshot showing management center navigation table with the requests and approvals button highlighted.":::
Purview approvals and task connectors have in-built email capabilities. Every ti
:::image type="content" source="./media/how-to-workflow-manage-requests-approval/approval-email.png" alt-text="Sample email from Microsoft Azure with the title 'Action required: Approve or reject the Microsoft Purview request.' Approval and rejection buttons are available in the email.":::
-Users can respond by selecting the links in the email, or by navigating to the Microsoft Purview studio and viewing their pending tasks.
+Users can respond by selecting the links in the email, or by navigating to the Microsoft Purview governance portal and viewing their pending tasks.
## Next steps
purview How To Workflow Manage Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-manage-runs.md
This article outlines how to manage workflows that are already running.
-1. To view workflow runs you triggered, sign in to the [Microsoft Purview Studio](https://web.purview.azure.com/resource/), select the Management center, and select **Workflow runs**.
+1. To view workflow runs you triggered, sign in to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/), select the Management center, and select **Workflow runs**.
- :::image type="content" source="./media/how-to-workflow-manage-runs/select-workflow-runs.png" alt-text="Screenshot of the management menu in the Microsoft Purview studio. The Workflow runs tab is highlighted.":::
+ :::image type="content" source="./media/how-to-workflow-manage-runs/select-workflow-runs.png" alt-text="Screenshot of the management menu in the Microsoft Purview governance portal. The Workflow runs tab is highlighted.":::
1. You'll be presented with the list of workflow runs and their statuses.
purview How To Workflow Self Service Data Access Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-self-service-data-access-hybrid.md
This guide will take you through the creation and management of self-service dat
## Create and enable self-service data access workflow
-1. Sign in to [Microsoft Purview Studio](https://web.purview.azure.com/resource/) and select the Management center. You'll see three new icons in the table of contents.
+1. Sign in to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/) and select the Management center. You'll see three new icons in the table of contents.
:::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/workflow-section.png" alt-text="Screenshot showing the management center left menu with the new workflow section highlighted.":::
purview Manage Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-credentials.md
Follow these steps only if the permission model in your Azure Key Vault resource is
Before you can create a Credential, first associate one or more of your existing Azure Key Vault instances with your Microsoft Purview account.
-1. From the [Azure portal](https://portal.azure.com), select your Microsoft Purview account and open the [Microsoft Purview Studio](https://web.purview.azure.com/resource/). Navigate to the **Management Center** in the studio and then navigate to **credentials**.
+1. From the [Azure portal](https://portal.azure.com), select your Microsoft Purview account and open the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/). Navigate to the **Management Center** in the governance portal and then navigate to **Credentials**.
2. From the **Credentials** page, select **Manage Key Vault connections**.
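Before wiring up the connection in the portal, it can be worth confirming that the secret a credential will reference is actually readable. Below is a minimal sketch using the Azure SDK for Python; the vault URL and secret name are placeholders, not values from this article.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholders: substitute your own vault URL and secret name.
client = SecretClient(
    vault_url="https://<your-key-vault>.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# Fails with an authorization error if the caller lacks secret get/list access,
# which is the same access the Purview managed identity will need.
secret = client.get_secret("<secret-name>")
print(f"Retrieved secret '{secret.name}' (value length: {len(secret.value)})")
```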
The following steps will show you how to create a UAMI for Microsoft Purview to
:::image type="content" source="media/manage-credentials/status-successful.png" alt-text="Screenshot the Microsoft Purview account in the Azure portal with Status highlighted under the overview tab and essentials menu.":::
-1. Once the managed identity is successfully deployed, navigate to the [Microsoft Purview Studio](https://web.purview.azure.com/), by selecting the **Open Microsoft Purview Studio** button.
+1. Once the managed identity is successfully deployed, navigate to the [Microsoft Purview governance portal](https://web.purview.azure.com/) by selecting the **Open Microsoft Purview governance portal** button.
-1. In the [Microsoft Purview Studio](https://web.purview.azure.com/), navigate to the Management Center in the studio and then navigate to the Credentials section.
+1. In the [Microsoft Purview governance portal](https://web.purview.azure.com/), navigate to the Management Center and then to the Credentials section.
1. Create a user-assigned managed identity by selecting **+New**.
1. Select the Managed identity authentication method, and select your user-assigned managed identity from the drop-down menu.
purview Manage Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-data-sources.md
In this article, you learn how to register new data sources, manage collections
Use the following steps to register a new source.
-1. Open [Microsoft Purview Studio](https://web.purview.azure.com/resource/), navigate to the **Data Map**, **Sources**, and select **Register**.
+1. Open the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/), navigate to the **Data Map**, **Sources**, and select **Register**.
- :::image type="content" source="media/manage-data-sources/purview-studio.png" alt-text="Microsoft Purview Studio":::
+ :::image type="content" source="media/manage-data-sources/purview-studio.png" alt-text="The Microsoft Purview governance portal":::
1. Select a source type. This example uses Azure Blob Storage. Select **Continue**.
Use the following steps to register a new source.
## View sources
-You can view all registered sources on the **Data Map** tab of Microsoft Purview Studio. There are two view types: map view and list view.
+You can view all registered sources on the **Data Map** tab of the Microsoft Purview governance portal. There are two view types: map view and list view.
### Map view
In the table view, you can see a sortable list of sources. Hover over the source
## Manage collections
-You can group your data sources into collections. To create a new collection, select **+ New collection** on the *Sources* page of Microsoft Purview Studio. Give the collection a name and select *None* as the Parent. The new collection appears in the map view.
+You can group your data sources into collections. To create a new collection, select **+ New collection** on the *Sources* page of the Microsoft Purview governance portal. Give the collection a name and select *None* as the Parent. The new collection appears in the map view.
To add sources to a collection, select the **Edit** pencil on the source and choose a collection from the **Select a collection** drop-down menu. To create a hierarchy of collections, assign higher-level collections as a parent to lower-level collections. In the following image, *Fabrikam* is a parent to the *Finance* collection, which contains an Azure Blob Storage data source. You can collapse or expand collections by selecting the circle attached to the arrow between levels. You can remove sources from a hierarchy by selecting *None* for the parent. Unparented sources are grouped in a dotted box in the map view with no arrows linking them to parents.
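Collections can also be managed programmatically through the account data plane. The sketch below is illustrative only: the account and collection names are placeholders mirroring the Fabrikam/Finance example above, and the api-version and request body shape are assumptions to verify against the Collections REST reference.

```python
import requests
from azure.identity import DefaultAzureCredential

account = "<your-purview-account>"  # placeholder
collection = "Finance"
url = (
    f"https://{account}.purview.azure.com"
    f"/collections/{collection}?api-version=2019-11-01-preview"  # assumed version
)

token = DefaultAzureCredential().get_token("https://purview.azure.net/.default")
body = {
    "friendlyName": "Finance",
    # Parent it under an existing collection, as in the hierarchy example above.
    "parentCollection": {"referenceName": "Fabrikam"},
}

resp = requests.put(url, headers={"Authorization": f"Bearer {token.token}"}, json=body)
resp.raise_for_status()
print(resp.json())
```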
purview Manage Integration Runtimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-integration-runtimes.md
To create and set up a self-hosted integration runtime, use the following proced
### Create a self-hosted integration runtime
-1. On the home page of the [Microsoft Purview Studio](https://web.purview.azure.com/resource/), select **Data Map** from the left navigation pane.
+1. On the home page of the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/), select **Data Map** from the left navigation pane.
2. Under **Sources and scanning** on the left pane, select **Integration runtimes**, and then select **+ New**.
Here are the domains and outbound ports that you need to allow at both **corpora
| Domain names | Outbound ports | Description |
| -- | -- | - |
| `*.frontend.clouddatahub.net` | 443 | Required to connect to the Microsoft Purview service. Currently wildcard is required as there's no dedicated resource. |
-| `*.servicebus.windows.net` | 443 | Required for setting up scan on Microsoft Purview Studio. This endpoint is used for interactive authoring from UI, for example, test connection, browse folder list and table list to scope scan. Currently wildcard is required as there's no dedicated resource. |
+| `*.servicebus.windows.net` | 443 | Required for setting up scan in the Microsoft Purview governance portal. This endpoint is used for interactive authoring from UI, for example, test connection, browse folder list and table list to scope scan. Currently wildcard is required as there's no dedicated resource. |
| `<purview_account>.purview.azure.com` | 443 | Required to connect to Microsoft Purview service. |
| `<managed_storage_account>.blob.core.windows.net` | 443 | Required to connect to the Microsoft Purview managed Azure Blob storage account. |
| `<managed_storage_account>.queue.core.windows.net` | 443 | Required to connect to the Microsoft Purview managed Azure Queue storage account. |
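To sanity-check that the self-hosted integration runtime machine can reach the non-wildcard endpoints above on port 443, a quick TCP probe like the following can help. The bracketed names are placeholders you'd replace before running; this only tests reachability, not TLS or authentication.

```python
import socket

# Hosts taken from the table above; replace the bracketed placeholders.
endpoints = [
    "<purview_account>.purview.azure.com",
    "<managed_storage_account>.blob.core.windows.net",
    "<managed_storage_account>.queue.core.windows.net",
]

for host in endpoints:
    try:
        # Tests only TCP reachability on 443; firewalls blocking the port show as FAIL.
        with socket.create_connection((host, 443), timeout=5):
            print(f"OK   {host}:443")
    except OSError as err:
        print(f"FAIL {host}:443 ({err})")
```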
purview Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/overview.md
Microsoft Purview automates data discovery by providing data scanning and classi
## Data Map
Microsoft Purview Data Map provides the foundation for data discovery and effective data governance. Microsoft Purview Data Map is a cloud-native PaaS service that captures metadata about enterprise data present in analytics and operations systems, both on-premises and in the cloud. The Data Map is kept up to date automatically with a built-in scanning and classification system. Business users can configure and use the Microsoft Purview Data Map through an intuitive UI, and developers can programmatically interact with the Data Map using open-source Apache Atlas 2.0 APIs.
-Microsoft Purview Data Map powers the Microsoft Purview Data Catalog and Microsoft Purview data insights as unified experiences within the [Microsoft Purview Studio](https://web.purview.azure.com/resource/).
+Microsoft Purview Data Map powers the Microsoft Purview Data Catalog and Microsoft Purview data insights as unified experiences within the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
For more information, see our [introduction to Data Map](concept-elastic-data-map.md).
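Since the Data Map exposes Apache Atlas 2.0 APIs, a small read-only call can illustrate the programmatic surface. Treat this as a sketch rather than an official sample: the account name is a placeholder, and the token scope is an assumption to verify against the REST reference.

```python
import requests
from azure.identity import DefaultAzureCredential

account = "<your-purview-account>"  # placeholder
# Atlas v2 type-definition endpoint exposed by the Purview catalog.
url = f"https://{account}.purview.azure.com/catalog/api/atlas/v2/types/typedefs"

token = DefaultAzureCredential().get_token("https://purview.azure.net/.default")
resp = requests.get(url, headers={"Authorization": f"Bearer {token.token}"})
resp.raise_for_status()

# Print the first few entity type names registered in the Data Map.
names = [t["name"] for t in resp.json().get("entityDefs", [])]
print(names[:10])
```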
purview Quickstart ARM Create Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/quickstart-ARM-create-azure-purview.md
The template performs the following tasks:
* Creates a Microsoft Purview account in the specified resource group.
-## Open Microsoft Purview Studio
+## Open Microsoft Purview governance portal
-After your Microsoft Purview account is created, you'll use the Microsoft Purview Studio to access and manage it. There are two ways to open Microsoft Purview Studio:
+After your Microsoft Purview account is created, you'll use the Microsoft Purview governance portal to access and manage it. There are two ways to open the Microsoft Purview governance portal:
-* Open your Microsoft Purview account in the [Azure portal](https://portal.azure.com). Select the "Open Microsoft Purview Studio" tile on the overview page.
- :::image type="content" source="media/create-catalog-portal/open-purview-studio.png" alt-text="Screenshot showing the Microsoft Purview account overview page, with the Microsoft Purview Studio tile highlighted.":::
+* Open your Microsoft Purview account in the [Azure portal](https://portal.azure.com). Select the "Open Microsoft Purview governance portal" tile on the overview page.
+ :::image type="content" source="media/create-catalog-portal/open-purview-studio.png" alt-text="Screenshot showing the Microsoft Purview account overview page, with the Microsoft Purview governance portal tile highlighted.":::
* Alternatively, you can browse to [https://web.purview.azure.com](https://web.purview.azure.com), select your Microsoft Purview account, and sign in to your workspace.
Write-Host "Press [ENTER] to continue..."
## Next steps
-In this quickstart, you learned how to create a Microsoft Purview account and how to access it through the Microsoft Purview Studio.
+In this quickstart, you learned how to create a Microsoft Purview account and how to access it through the Microsoft Purview governance portal.
Next, you can create a user-assigned managed identity (UAMI) that will enable your new Microsoft Purview account to authenticate directly with resources using Azure Active Directory (Azure AD) authentication. To create a UAMI, follow our [guide to create a user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity).
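If you prefer to script the UAMI creation rather than use the portal, a plain ARM REST call is one option. This is a sketch under stated assumptions: all names are placeholders, and the api-version shown should be verified against the Managed Identity REST reference.

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholders: substitute your own subscription, resource group, and name.
sub, rg, name = "<subscription-id>", "<resource-group>", "<identity-name>"
url = (
    "https://management.azure.com"
    f"/subscriptions/{sub}/resourceGroups/{rg}"
    f"/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{name}"
    "?api-version=2018-11-30"  # assumed version
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
resp = requests.put(
    url,
    headers={"Authorization": f"Bearer {token.token}"},
    json={"location": "westus2"},  # pick the region of your Purview account
)
resp.raise_for_status()
# The principalId is what you grant data-source permissions to.
print(resp.json()["properties"]["principalId"])
```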
-Follow these next articles to learn how to navigate the Microsoft Purview Studio, create a collection, and grant access to Microsoft Purview:
+Follow these next articles to learn how to navigate the Microsoft Purview governance portal, create a collection, and grant access to Microsoft Purview:
> [!div class="nextstepaction"]
-> [Using the Microsoft Purview Studio](use-azure-purview-studio.md)
+> [Using the Microsoft Purview governance portal](use-azure-purview-studio.md)
> [Create a collection](quickstart-create-collection.md)
> [Add users to your Microsoft Purview account](catalog-permissions.md)
purview Quickstart Create Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/quickstart-create-collection.md
Collections are Microsoft Purview's tool to manage ownership and access control
## Check permissions
-In order to create and manage collections in Microsoft Purview, you will need to be a **Collection Admin** within Microsoft Purview. We can check these permissions in the [Microsoft Purview Studio](use-azure-purview-studio.md). You can find the studio by going to your Microsoft Purview account in the [Azure portal](https://portal.azure.com), and selecting the **Open Microsoft Purview Studio** tile on the overview page.
+In order to create and manage collections in Microsoft Purview, you will need to be a **Collection Admin** within Microsoft Purview. We can check these permissions in the [Microsoft Purview governance portal](use-azure-purview-studio.md). You can find the governance portal by going to your Microsoft Purview account in the [Azure portal](https://portal.azure.com), and selecting the **Open Microsoft Purview governance portal** tile on the overview page.
1. Select Data Map > Collections from the left pane to open the collection management page.
- :::image type="content" source="./media/quickstart-create-collection/find-collections.png" alt-text="Screenshot of Microsoft Purview studio opened to the Data Map, with the Collections tab selected." border="true":::
+ :::image type="content" source="./media/quickstart-create-collection/find-collections.png" alt-text="Screenshot of the Microsoft Purview governance portal opened to the Data Map, with the Collections tab selected." border="true":::
1. Select your root collection. This is the top collection in your collection list and will have the same name as your Microsoft Purview account. In our example below, it's called Contoso Microsoft Purview.
- :::image type="content" source="./media/quickstart-create-collection/select-root-collection.png" alt-text="Screenshot of Microsoft Purview studio window, opened to the Data Map, with the root collection highlighted." border="true":::
+ :::image type="content" source="./media/quickstart-create-collection/select-root-collection.png" alt-text="Screenshot of the Microsoft Purview governance portal window, opened to the Data Map, with the root collection highlighted." border="true":::
1. Select role assignments in the collection window.
- :::image type="content" source="./media/quickstart-create-collection/role-assignments.png" alt-text="Screenshot of Microsoft Purview studio window, opened to the Data Map, with the role assignments tab highlighted." border="true":::
+ :::image type="content" source="./media/quickstart-create-collection/role-assignments.png" alt-text="Screenshot of the Microsoft Purview governance portal window, opened to the Data Map, with the role assignments tab highlighted." border="true":::
1. To create a collection, you will need to be in the collection admin list under role assignments. If you created the Microsoft Purview account, you should be listed as a collection admin under the root collection already. If not, you'll need to contact the collection admin to grant you permission.
- :::image type="content" source="./media/quickstart-create-collection/collection-admins.png" alt-text="Screenshot of Microsoft Purview studio window, opened to the Data Map, with the collection admin section highlighted." border="true":::
+ :::image type="content" source="./media/quickstart-create-collection/collection-admins.png" alt-text="Screenshot of the Microsoft Purview governance portal window, opened to the Data Map, with the collection admin section highlighted." border="true":::
## Create a collection in the portal
-To create your collection, we'll start in the [Microsoft Purview Studio](use-azure-purview-studio.md). You can find the studio by going to your Microsoft Purview account in the Azure portal and selecting the **Open Microsoft Purview Studio** tile on the overview page.
+To create your collection, we'll start in the [Microsoft Purview governance portal](use-azure-purview-studio.md). You can find the governance portal by going to your Microsoft Purview account in the Azure portal and selecting the **Open Microsoft Purview governance portal** tile on the overview page.
1. Select Data Map > Collections from the left pane to open the collection management page.
- :::image type="content" source="./media/quickstart-create-collection/find-collections-2.png" alt-text="Screenshot of Microsoft Purview studio window, opened to the Data Map, with the Collections tab selected." border="true":::
+ :::image type="content" source="./media/quickstart-create-collection/find-collections-2.png" alt-text="Screenshot of the Microsoft Purview governance portal window, opened to the Data Map, with the Collections tab selected." border="true":::
1. Select **+ Add a collection**.
- :::image type="content" source="./media/quickstart-create-collection/select-add-collection.png" alt-text="Screenshot of Microsoft Purview studio window, opened to the Data Map, with the Collections tab selected and Add a Collection highlighted." border="true":::
+ :::image type="content" source="./media/quickstart-create-collection/select-add-collection.png" alt-text="Screenshot of the Microsoft Purview governance portal window, opened to the Data Map, with the Collections tab selected and Add a Collection highlighted." border="true":::
1. In the right panel, enter the collection name, description, and search for users to add them as collection admins.
- :::image type="content" source="./media/quickstart-create-collection/create-collection.png" alt-text="Screenshot of Microsoft Purview studio window, showing the new collection window, with a display name and collection admins selected, and the create button highlighted." border="true":::
+ :::image type="content" source="./media/quickstart-create-collection/create-collection.png" alt-text="Screenshot of the Microsoft Purview governance portal window, showing the new collection window, with a display name and collection admins selected, and the create button highlighted." border="true":::
1. Select **Create**. The collection information will be reflected on the page.
- :::image type="content" source="./media/quickstart-create-collection/created-collection.png" alt-text="Screenshot of Microsoft Purview studio window, showing the newly created collection window." border="true":::
+ :::image type="content" source="./media/quickstart-create-collection/created-collection.png" alt-text="Screenshot of the Microsoft Purview governance portal window, showing the newly created collection window." border="true":::
## Assign permissions to collection
All assigned roles apply to sources, assets, and other objects within the collec
1. Select the **Role assignments** tab to see all the roles in a collection.
- :::image type="content" source="./media/quickstart-create-collection/select-role-assignments.png" alt-text="Screenshot of Microsoft Purview studio collection window, with the role assignments tab highlighted." border="true":::
+ :::image type="content" source="./media/quickstart-create-collection/select-role-assignments.png" alt-text="Screenshot of the Microsoft Purview governance portal collection window, with the role assignments tab highlighted." border="true":::
1. Select **Edit role assignments** or the person icon to edit each role member.
- :::image type="content" source="./media/quickstart-create-collection/edit-role-assignments.png" alt-text="Screenshot of Microsoft Purview studio collection window, with the edit role assignments dropdown list selected." border="true":::
+ :::image type="content" source="./media/quickstart-create-collection/edit-role-assignments.png" alt-text="Screenshot of the Microsoft Purview governance portal collection window, with the edit role assignments dropdown list selected." border="true":::
1. Type in the textbox to search for users you want to add as members of the role. Select **OK** to save the change.
purview Reference Azure Purview Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/reference-azure-purview-glossary.md
An operation within a specific Microsoft Purview instance, such as editing an as
## Data reader
A role that provides read-only access to data assets, classifications, classification rules, collections, glossary terms, and insights.
## Data source admin
-A role that can manage data sources and scans. A user in the Data source admin role doesn't have access to Microsoft Purview studio. Combining this role with the Data reader or Data curator roles at any collection scope provides Microsoft Purview studio access.
+A role that can manage data sources and scans. A user in the Data source admin role doesn't have access to the Microsoft Purview governance portal. Combining this role with the Data reader or Data curator roles at any collection scope provides Microsoft Purview governance portal access.
## Data steward
An individual or group responsible for maintaining nomenclature, data quality standards, security controls, compliance requirements, and rules for the associated object.
## Data dictionary
purview Register Scan Adls Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-adls-gen1.md
This article outlines the process to register an Azure Data Lake Storage Gen1 da
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
## Register
It is important to register the data source in Microsoft Purview prior to settin
:::image type="content" source="media/register-scan-adls-gen1/register-adls-gen1-purview-acct.png" alt-text="Screenshot that shows the Microsoft Purview account used to register the data source":::
-1. **Open Microsoft Purview Studio** and navigate to the **Data Map --> Sources**
+1. **Open Microsoft Purview governance portal** and navigate to the **Data Map --> Sources**
- :::image type="content" source="media/register-scan-adls-gen1/register-adls-gen1-open-purview-studio.png" alt-text="Screenshot that shows the link to open Microsoft Purview Studio":::
+ :::image type="content" source="media/register-scan-adls-gen1/register-adls-gen1-open-purview-studio.png" alt-text="Screenshot that shows the link to open Microsoft Purview governance portal":::
:::image type="content" source="media/register-scan-adls-gen1/register-adls-gen1-sources.png" alt-text="Screenshot that navigates to the Sources link in the Data Map":::
It is important to give your service principal the permission to scan the ADLS G
### Creating the scan
-1. Open your **Microsoft Purview account** and select the **Open Microsoft Purview Studio**
+1. Open your **Microsoft Purview account** and select the **Open Microsoft Purview governance portal** tile
- :::image type="content" source="media/register-scan-adls-gen1/register-adls-gen1-purview-acct.png" alt-text="Screenshot that shows the Open Microsoft Purview Studio":::
+ :::image type="content" source="media/register-scan-adls-gen1/register-adls-gen1-purview-acct.png" alt-text="Screenshot that shows the Open Microsoft Purview governance portal":::
1. Navigate to the **Data map** --> **Sources** to view the collection hierarchy
purview Register Scan Adls Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-adls-gen2.md
This article outlines the process to register an Azure Data Lake Storage Gen2 da
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
## Register
It is important to register the data source in Microsoft Purview prior to settin
:::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-purview-acct.png" alt-text="Screenshot that shows the Microsoft Purview account used to register the data source":::
-1. **Open Microsoft Purview Studio** and navigate to the **Data Map --> Sources**
+1. **Open Microsoft Purview governance portal** and navigate to the **Data Map --> Sources**
- :::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-open-purview-studio.png" alt-text="Screenshot that shows the link to open Microsoft Purview Studio":::
+ :::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-open-purview-studio.png" alt-text="Screenshot that shows the link to open Microsoft Purview governance portal":::
:::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-sources.png" alt-text="Screenshot that navigates to the Sources link in the Data Map":::
It is important to give your service principal the permission to scan the ADLS G
### Create the scan
-1. Open your **Microsoft Purview account** and select the **Open Microsoft Purview Studio**
+1. Open your **Microsoft Purview account** and select the **Open Microsoft Purview governance portal** tile
1. Navigate to the **Data map** --> **Sources** to view the collection hierarchy
1. Select the **New Scan** icon under the **ADLS Gen2 data source** registered earlier
It is important to give your service principal the permission to scan the ADLS G
## Access policy
-Access policies allow data owners to manage access to datasets from Microsoft Purview. Owners can monitor and manage data use from within the Microsoft Purview Studio, without directly modifying the storage account where the data is housed.
+Access policies allow data owners to manage access to datasets from Microsoft Purview. Owners can monitor and manage data use from within the Microsoft Purview governance portal, without directly modifying the storage account where the data is housed.
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
purview Register Scan Amazon S3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-amazon-s3.md
Continue with [Create a scan for one or more Amazon S3 buckets](#create-a-scan-f
Once you've added your buckets as Microsoft Purview data sources, you can configure a scan to run at scheduled intervals or immediately.
-1. Select the **Data Map** tab on the left pane in the [Microsoft Purview Studio](https://web.purview.azure.com/resource/), and then do one of the following:
+1. Select the **Data Map** tab on the left pane in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/), and then do one of the following:
- In the **Map view**, select **New scan** ![New scan icon.](./media/register-scan-amazon-s3/new-scan-button.png) in your data source box.
- In the **List view**, hover over the row for your data source, and select **New scan** ![New scan icon.](./media/register-scan-amazon-s3/new-scan-button.png).
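Whether scheduled or run immediately, a scan can also be kicked off through the scanning data plane. The sketch below is hedged throughout: account, source, and scan names are placeholders, and the run-scan path and api-version are assumptions to verify against the scanning REST reference.

```python
import uuid

import requests
from azure.identity import DefaultAzureCredential

# Placeholders; verify the path and api-version against the API reference.
account, source, scan = "<account>", "<s3-source>", "<scan-name>"
run_id = str(uuid.uuid4())
url = (
    f"https://{account}.purview.azure.com"
    f"/scan/datasources/{source}/scans/{scan}/runs/{run_id}"
    "?api-version=2022-02-01-preview"  # assumed version
)

token = DefaultAzureCredential().get_token("https://purview.azure.net/.default")
resp = requests.put(url, headers={"Authorization": f"Bearer {token.token}"})
resp.raise_for_status()
print("Scan run submitted:", run_id)
```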
purview Register Scan Azure Blob Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-blob-storage-source.md
For file types such as csv, tsv, psv, ssv, the schema is extracted when the foll
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
## Register
It is important to register the data source in Microsoft Purview prior to settin
:::image type="content" source="media/register-scan-azure-blob-storage-source/register-blob-purview-acct.png" alt-text="Screenshot that shows the Microsoft Purview account used to register the data source":::
-1. **Open Microsoft Purview Studio** and navigate to the **Data Map --> Sources**
+1. **Open Microsoft Purview governance portal** and navigate to the **Data Map --> Sources**
- :::image type="content" source="media/register-scan-azure-blob-storage-source/register-blob-open-purview-studio.png" alt-text="Screenshot that shows the link to open Microsoft Purview Studio":::
+ :::image type="content" source="media/register-scan-azure-blob-storage-source/register-blob-open-purview-studio.png" alt-text="Screenshot that shows the link to open Microsoft Purview governance portal":::
:::image type="content" source="media/register-scan-azure-blob-storage-source/register-blob-sources.png" alt-text="Screenshot that navigates to the Sources link in the Data Map":::
It is important to give your service principal the permission to scan the Azure
### Creating the scan
-1. Open your **Microsoft Purview account** and select the **Open Microsoft Purview Studio**
+1. Open your **Microsoft Purview account** and select the **Open Microsoft Purview governance portal** tile
1. Navigate to the **Data map** --> **Sources** to view the collection hierarchy
1. Select the **New Scan** icon under the **Azure Blob data source** registered earlier
Scans can be managed or run again on completion
## Access policy
-Access policies allow data owners to manage access to datasets from Microsoft Purview. Owners can monitor and manage data use from within the Microsoft Purview Studio, without directly modifying the storage account where the data is housed.
+Access policies allow data owners to manage access to datasets from Microsoft Purview. Owners can monitor and manage data use from within the Microsoft Purview governance portal, without directly modifying the storage account where the data is housed.
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
purview Register Scan Azure Cosmos Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-cosmos-database.md
This article outlines the process to register an Azure Cosmos database (SQL API)
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
## Register
It is important to register the data source in Microsoft Purview prior to settin
:::image type="content" source="media/register-scan-azure-cosmos-database/register-cosmos-db-purview-acct.png" alt-text="Screenshot that shows the Microsoft Purview account used to register the data source":::
-1. **Open Microsoft Purview Studio** and navigate to the **Data Map --> Collections**
+1. **Open Microsoft Purview governance portal** and navigate to the **Data Map --> Collections**
:::image type="content" source="media/register-scan-azure-cosmos-database/register-cosmos-db-open-purview-studio.png" alt-text="Screenshot that navigates to the Sources link in the Data Map":::
You need to get your access key and store in the key vault:
### Creating the scan
-1. Open your **Microsoft Purview account** and select the **Open Microsoft Purview Studio**
+1. Open your **Microsoft Purview account** and select the **Open Microsoft Purview governance portal** tile
1. Navigate to the **Data map** --> **Sources** to view the collection hierarchy
1. Select the **New Scan** icon under the **Azure Cosmos database** registered earlier
purview Register Scan Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-data-explorer.md
This article outlines how to register Azure Data Explorer, and how to authentica
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
## Register
-This section describes how to register Azure Data Explorer in Microsoft Purview using the [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register Azure Data Explorer in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
### Authentication for registration
Follow the steps below to scan Azure Data Explorer to automatically identify ass
To create and run a new scan, follow these steps:
-1. Select the **Data Map** tab on the left pane in the [Microsoft Purview Studio](https://web.purview.azure.com/resource/).
+1. Select the **Data Map** tab on the left pane in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
1. Select the Azure Data Explorer source that you registered.
purview Register Scan Azure Files Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-files-storage-source.md
For file types such as csv, tsv, psv, ssv, the schema is extracted when the foll
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
## Register
-This section describes how to register Azure Files in Microsoft Purview using the [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register Azure Files in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
### Authentication for registration
Follow the steps below to scan Azure Files to automatically identify assets and
To create and run a new scan, follow these steps:
-1. Select the **Data Map** tab on the left pane in the [Microsoft Purview Studio](https://web.purview.azure.com/resource/).
+1. Select the **Data Map** tab on the left pane in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
1. Select the Azure Files source that you registered.
purview Register Scan Azure Multiple Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-multiple-sources.md
This article outlines how to register multiple Azure sources and how to authenti
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
## Register
-This section describes how to register multiple Azure sources in Microsoft Purview using the [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register multiple Azure sources in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
### Prerequisites for registration
Follow the steps below to scan multiple Azure sources to automatically identify
To create and run a new scan, do the following:
-1. Select the **Data Map** tab on the left pane in the Microsoft Purview Studio.
+1. Select the **Data Map** tab on the left pane in the Microsoft Purview governance portal.
1. Select the data source that you registered.
1. Select **View details** > **+ New scan**, or use the **Scan** quick-action icon on the source tile.
1. For **Name**, fill in the name.
purview Register Scan Azure Mysql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-mysql-database.md
This article outlines how to register a database in Azure Database for MySQL, an
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
## Register
-This section describes how to register an Azure Database for MySQL in Microsoft Purview using the [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register an Azure Database for MySQL in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
### Authentication for registration
Follow the steps below to scan Azure Database for MySQL to automatically identif
To create and run a new scan, do the following:
-1. Select the **Data Map** tab on the left pane in the [Microsoft Purview Studio](https://web.purview.azure.com/resource/).
+1. Select the **Data Map** tab on the left pane in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
1. Select the Azure Database for MySQL source that you registered.
purview Register Scan Azure Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-postgresql.md
This article outlines how to register an Azure Database for PostgreSQL deployed
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
## Register
-This section describes how to register an Azure Database for PostgreSQL in Microsoft Purview using the [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register an Azure Database for PostgreSQL in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
### Authentication for registration
Follow the steps below to scan an Azure Database for PostgreSQL database to auto
To create and run a new scan, do the following:
-1. Select the **Data Map** tab on the left pane in the [Microsoft Purview Studio](https://web.purview.azure.com/resource/).
+1. Select the **Data Map** tab on the left pane in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
1. Select the Azure Database for PostgreSQL source that you registered.
purview Register Scan Azure Sql Database Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database-managed-instance.md
This article outlines how to register an Azure SQL Database Managed Instance, a
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
* [Configure public endpoint in Azure SQL Managed Instance](../azure-sql/managed-instance/public-endpoint-configure.md)
This article outlines how to register an Azure SQL Database Managed Instance, a
## Register
-This section describes how to register an Azure SQL Database Managed Instance in Microsoft Purview using the [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register an Azure SQL Database Managed Instance in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
### Authentication for registration
You can follow the instructions in [CREATE LOGIN](/sql/t-sql/statements/create-l
### Steps to register
-1. Navigate to your [Microsoft Purview Studio](https://web.purview.azure.com/resource/)
+1. Navigate to your [Microsoft Purview governance portal](https://web.purview.azure.com/resource/)
1. Select **Data Map** on the left navigation.
Follow the steps below to scan an Azure SQL Database Managed Instance to automat
To create and run a new scan, complete the following steps:
-1. Select the **Data Map** tab on the left pane in the Microsoft Purview Studio.
+1. Select the **Data Map** tab on the left pane in the Microsoft Purview governance portal.
1. Select the Azure SQL Database Managed Instance source that you registered.
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
When setting up scan, you can further scope the scan after providing the databas
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
## Register
It's important to register the data source in Microsoft Purview before setting u
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-purview-acct.png" alt-text="Screenshot that shows the Microsoft Purview account used to register the data source.":::
-1. **Open Microsoft Purview Studio** and navigate to the **Data Map**
+1. **Open Microsoft Purview governance portal** and navigate to the **Data Map**
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-open-purview-studio.png" alt-text="Screenshot that navigates to the Sources link in the Data Map.":::
The service principal needs permission to get metadata for the database, schemas
### Creating the scan
-1. Open your **Microsoft Purview account** and select the **Open Microsoft Purview Studio**
+1. Open your **Microsoft Purview account** and select the **Open Microsoft Purview governance portal** tile
1. Navigate to the **Data map** --> **Sources** to view the collection hierarchy
1. Select the **New Scan** icon under the **Azure SQL DB** registered earlier
purview Register Scan Azure Synapse Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-synapse-analytics.md
This article outlines how to register dedicated SQL pools (formerly SQL DW), and
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
## Register
-This section describes how to register dedicated SQL pools in Microsoft Purview using the [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register dedicated SQL pools in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
### Authentication for registration
Follow the steps below to scan dedicated SQL pools to automatically identify ass
To create and run a new scan, complete the following steps:
-1. Select the **Data Map** tab on the left pane in the [Microsoft Purview Studio](https://web.purview.azure.com/resource/).
+1. Select the **Data Map** tab on the left pane in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
1. Select the SQL dedicated pool source that you registered.
purview Register Scan Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-salesforce.md
When setting up scan, you can choose to scan an entire Salesforce organization,
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
You can use the fully managed Azure integration runtime for the scan. Make sure to provide the security token to authenticate to Salesforce; learn more in the credential configuration of the [Scan](#scan) section. Otherwise, if you want the scan to be initiated from a Salesforce trusted IP range for your organization, you can configure a self-hosted integration runtime to connect to it:
For Standard Objects, ensure that the "Documents" section has the Read permissio
## Register
-This section describes how to register Salesforce in Microsoft Purview using the [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register Salesforce in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
### Steps to register
To register a new Salesforce source in your data catalog, do the following:
-1. Navigate to your Microsoft Purview account in the [Microsoft Purview Studio](https://web.purview.azure.com/resource/).
+1. Navigate to your Microsoft Purview account in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
1. Select **Data Map** on the left navigation.
1. Select **Register**.
1. On Register sources, select **Salesforce**. Select **Continue**.
purview Register Scan Sap Bw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sap-bw.md
When scanning SAP BW source, Microsoft Purview supports extracting technical met
* An active [Microsoft Purview resource](create-catalog-portal.md).
-* You need Data Source Administrator and Data Reader permissions to register a source and manage it in Microsoft Purview Studio. For more information about permissions, see [Access control in Microsoft Purview](catalog-permissions.md).
+* You need Data Source Administrator and Data Reader permissions to register a source and manage it in the Microsoft Purview governance portal. For more information about permissions, see [Access control in Microsoft Purview](catalog-permissions.md).
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). The minimal supported Self-hosted Integration Runtime version is 5.15.8079.1.
When scanning SAP BW source, Microsoft Purview supports extracting technical met
## Register
-This section describes how to register SAP BW in Microsoft Purview using the [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register SAP BW in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
### Authentication for registration
purview Register Scan Sap Hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sap-hana.md
When setting up scan, you can choose to scan an entire SAP HANA database, or sco
* You must have an active [Microsoft Purview account](create-catalog-portal.md).
-* You need Data Source Administrator and Data Reader permissions to register a source and manage it in Microsoft Purview Studio. For more information about permissions, see [Access control in Microsoft Purview](catalog-permissions.md).
+* You need Data Source Administrator and Data Reader permissions to register a source and manage it in the Microsoft Purview governance portal. For more information about permissions, see [Access control in Microsoft Purview](catalog-permissions.md).
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [Create and configure a self-hosted integration runtime](manage-integration-runtimes.md). The minimal supported Self-hosted Integration Runtime version is 5.13.8013.1.
GRANT SELECT ON SCHEMA _SYS_BIC TO <user>;
## Register
-This section describes how to register a SAP HANA in Microsoft Purview by using [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register SAP HANA in Microsoft Purview by using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
1. Go to your Microsoft Purview account.
purview Register Scan Sapecc Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sapecc-source.md
When scanning SAP ECC source, Microsoft Purview supports:
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
When scanning SAP ECC source, Microsoft Purview supports:
## Register
-This section describes how to register SAP ECC in Microsoft Purview using the [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register SAP ECC in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
### Authentication for registration
purview Register Scan Saps4hana Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-saps4hana-source.md
When scanning SAP S/4HANA source, Microsoft Purview supports:
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
When scanning SAP S/4HANA source, Microsoft Purview supports:
## Register
-This section describes how to register SAP S/4HANA in Microsoft Purview using the [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register SAP S/4HANA in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
### Authentication for registration
purview Register Scan Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-snowflake.md
When setting up scan, you can choose to scan one or more Snowflake database(s) e
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
**If your data store is not publicly accessible** (for example, if it limits access from an on-premises network, a private network, or specific IPs), you need to configure a self-hosted integration runtime to connect to it:
Here's a sample walkthrough to create a user specifically for Microsoft Purview
## Register
-This section describes how to register Snowflake in Microsoft Purview using the [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register Snowflake in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
### Steps to register

To register a new Snowflake source in your data catalog, do the following:
-1. Navigate to your Microsoft Purview account in the [Microsoft Purview Studio](https://web.purview.azure.com/resource/).
+1. Navigate to your Microsoft Purview account in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
1. Select **Data Map** on the left navigation.
1. Select **Register**.
1. On Register sources, select **Snowflake**. Select **Continue**.
purview Register Scan Synapse Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-synapse-workspace.md
Required. Add any relevant/source-specific prerequisites for connecting with thi
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
## Register
-This section describes how to register Azure Synapse Analytics workspaces in Microsoft Purview using the [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register Azure Synapse Analytics workspaces in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
### Authentication for registration
GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[scoped_credential] TO [PurviewA
1. Select **Save**.

> [!IMPORTANT]
-> Currently, we do not support setting up scans for an Azure Synapse workspace from Microsoft Purview Studio, if you cannot enable **Allow Azure services and resources to access this workspace** on your Azure Synapse workspaces. In this case:
+> Currently, we do not support setting up scans for an Azure Synapse workspace from the Microsoft Purview governance portal, if you cannot enable **Allow Azure services and resources to access this workspace** on your Azure Synapse workspaces. In this case:
> - You can use [Microsoft Purview Rest API - Scans - Create Or Update](/rest/api/purview/scanningdataplane/scans/create-or-update/) to create a new scan for your Synapse workspaces including dedicated and serverless pools, as sketched below.
> - You must use **SQL Auth** as the authentication mechanism.
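For illustration, here's a minimal Python sketch of that REST call. The account name, data source name, credential reference, payload `kind`, and api-version are assumptions for illustration only; check the [Scans - Create Or Update](/rest/api/purview/scanningdataplane/scans/create-or-update/) reference for the exact schema.

```python
# Hedged sketch: create a scan with SQL Auth through the Purview scanning REST API.
# All names below are placeholders; the payload "kind" and api-version are assumptions.
import requests
from azure.identity import DefaultAzureCredential

account = "contoso-purview"      # assumed Purview account name
data_source = "synapse-ws-1"     # assumed registered data source name
scan_name = "scan-sql-auth"

token = DefaultAzureCredential().get_token("https://purview.azure.net/.default").token

resp = requests.put(
    f"https://{account}.purview.azure.com/scan/datasources/{data_source}/scans/{scan_name}",
    params={"api-version": "2022-02-01-preview"},   # assumed; use the documented version
    headers={"Authorization": f"Bearer {token}"},
    json={
        "kind": "AzureSynapseWorkspaceCredential",  # assumed kind for SQL Auth scans
        "properties": {
            "credential": {"referenceName": "sql-auth-credential", "credentialType": "SqlAuth"},
            "collection": {"referenceName": account, "type": "CollectionReference"},
        },
    },
)
resp.raise_for_status()
print(resp.json())
```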
GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[scoped_credential] TO [PurviewA
To create and run a new scan, do the following:
-1. Select the **Data Map** tab on the left pane in [Microsoft Purview Studio](https://web.purview.azure.com/resource/).
+1. Select the **Data Map** tab on the left pane in [the Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
1. Select the data source that you registered.
purview Register Scan Teradata Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-teradata-source.md
To retrieve data types of view columns, Microsoft Purview issues a prepare state
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview Studio. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
To retrieve data types of view columns, Microsoft Purview issues a prepare state
## Register
-This section describes how to register Teradata in Microsoft Purview using the [Microsoft Purview Studio](https://web.purview.azure.com/).
+This section describes how to register Teradata in Microsoft Purview using the [Microsoft Purview governance portal](https://web.purview.azure.com/).
### Steps to register
Follow the steps below to scan Teradata to automatically identify assets and cla
1. In the Management Center, select **Integration runtimes**. Make sure a self-hosted integration runtime is set up. If it isn't set up, use the steps mentioned [here](./manage-integration-runtimes.md) to set up a self-hosted integration runtime.
-1. Select the **Data Map** tab on the left pane in the [Microsoft Purview Studio](https://web.purview.azure.com/resource/).
+1. Select the **Data Map** tab on the left pane in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
1. Select the registered Teradata source.
purview Supported Browsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/supported-browsers.md
Microsoft Purview supports the following browsers. We recommend that you use the
## Chrome Incognito mode
- Chrome Incognito blocking 3rd party cookies must be disabled for Microsoft Purview Studio to work.
+ In Chrome Incognito, blocking of third-party cookies must be disabled for the Microsoft Purview governance portal to work.
:::image type="content" source="./media/supported-browsers/incognito-chrome.png" alt-text="Screenshot showing chrome.":::

## Chromium Edge InPrivate mode
-Chromium Edge InPrivate using Strict Tracking Prevention must be disabled for Microsoft Purview Studio to work.
+In Chromium Edge InPrivate, Strict Tracking Prevention must be disabled for the Microsoft Purview governance portal to work.
:::image type="content" source="./media/supported-browsers/incognito-edge.png" alt-text="Screenshot showing edge.":::
purview Troubleshoot Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/troubleshoot-connections.md
There are specific instructions for each [source type](azure-purview-connector-o
> Verify that you have followed all prerequisite and authentication steps for the source you are connecting to.
> You can find all available sources listed in the [Microsoft Purview supported sources article](azure-purview-connector-overview.md).
-## Verifying Azure Role-based Access Control to enumerate Azure resources in Microsoft Purview Studio
+## Verifying Azure Role-based Access Control to enumerate Azure resources in the Microsoft Purview governance portal
### Registering single Azure data source
purview Tutorial Azure Purview Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-azure-purview-checklist.md
This article lists prerequisites that help you get started quickly on Microsoft
|7 |An Azure Virtual Network and Subnet(s) for Microsoft Purview private endpoints. | *Network Contributor* to create or update Azure VNet. |Use this step if you're planning to deploy [private endpoint connectivity with Microsoft Purview](catalog-private-link.md): <ul><li>Private endpoints for **Ingestion**.</li><li>Private endpoint for Microsoft Purview **Account**.</li><li>Private endpoint for Microsoft Purview **Portal**.</li></ul> <br> Deploy [Azure Virtual Network](../virtual-network/quick-create-portal.md) if you need one. |
|8 |Deploy private endpoint for Azure data sources. |*Network Contributor* to set up private endpoints for each data source. |Perform this step, if you're planning to use [Private Endpoint for Ingestion](catalog-private-link-end-to-end.md). |
|9 |Define whether to deploy new or use existing Azure Private DNS Zones. |Required [Azure Private DNS Zones](catalog-private-link-name-resolution.md) can be created automatically during Purview Account deployment using Subscription Owner / Contributor role |Use this step if you're planning to use Private Endpoint connectivity with Microsoft Purview. Required DNS Zones for Private Endpoint: <ul><li>privatelink.purview.azure.com</li><li>privatelink.purviewstudio.azure.com</li><li>privatelink.blob.core.windows.net</li><li>privatelink.queue.core.windows.net</li><li>privatelink.servicebus.windows.net</li></ul> |
-|10 |A management machine in your CorpNet or inside Azure VNet to launch Microsoft Purview Studio. |N/A |Use this step if you're planning to set **Allow Public Network** to **deny** on your Microsoft Purview Account. |
+|10 |A management machine in your CorpNet or inside Azure VNet to launch the Microsoft Purview governance portal. |N/A |Use this step if you're planning to set **Allow Public Network** to **deny** on your Microsoft Purview Account. |
|11 |Deploy a Microsoft Purview Account |Subscription Owner / Contributor |Purview account is deployed with 1 Capacity Unit and will scale up based [on demand](concept-elastic-data-map.md). |
|12 |Deploy a Managed Integration Runtime and Managed private endpoints for Azure data sources. |*Data source admin* to set up Managed VNet inside Microsoft Purview. <br> *Network Contributor* to approve managed private endpoint for each Azure data source. |Perform this step if you're planning to use [Managed VNet](catalog-managed-vnet.md) within your Microsoft Purview account for scanning purposes. |
|13 |Deploy Self-hosted integration runtime VMs inside your network. |Azure: *Virtual Machine Contributor* <br> On-prem: Application owner |Use this step if you're planning to perform any scans using [Self-hosted Integration Runtime](manage-integration-runtimes.md). |
This article lists prerequisites that help you get started quickly on Microsoft
|26 |Create an **Azure Key Vault** and a **Secret** to save data source credentials or service principal secret. |*Contributor* or *Key Vault Administrator* |Use this step if you have **on-premises** or **VM-based data sources** (e.g. SQL Server). <br> Use this step if you are using **ingestion private endpoints** to scan a data source. |
|27 |Grant Key **Vault Access Policy** to Microsoft Purview MSI: **Secret: get/list** |*Key Vault Administrator* |Use this step if you have **on-premises** / **VM-based data sources** (e.g. SQL Server) <br> Use this step if **Key Vault Permission Model** is set to [Vault Access Policy](../key-vault/general/assign-access-policy.md). |
|28 |Grant **Key Vault RBAC role** Key Vault Secrets User to Microsoft Purview MSI. | *Owner* or *User Access Administrator* |Use this step if you have **on-premises** or **VM-based data sources** (e.g. SQL Server) <br> Use this step if **Key Vault Permission Model** is set to [Azure role-based access control](../key-vault/general/rbac-guide.md). |
-|29 | Create a new connection to Azure Key Vault from Microsoft Purview Studio | *Data source admin* | Use this step if you are planing to use any of the following [authentication options](manage-credentials.md#create-a-new-credential) to scan a data source in Microsoft Purview: <ul><li>Account key</li><li>Basic Authentication</li><li>Delegated Auth</li><li>SQL Authentication</li><li>Service Principal</li><li>Consumer Key</li></ul>
+|29 | Create a new connection to Azure Key Vault from the Microsoft Purview governance portal | *Data source admin* | Use this step if you are planning to use any of the following [authentication options](manage-credentials.md#create-a-new-credential) to scan a data source in Microsoft Purview: <ul><li>Account key</li><li>Basic Authentication</li><li>Delegated Auth</li><li>SQL Authentication</li><li>Service Principal</li><li>Consumer Key</li></ul>
|30 |Deploy a private endpoint for Power BI tenant |*Power BI Administrator* <br> *Network contributor* |Use this step if you're planning to register a Power BI tenant as data source and your Microsoft Purview account is set to **deny public access**. <br> For more information, see [How to configure private endpoints for accessing Power BI](/power-bi/enterprise/service-security-private-links). |
|31 |Connect Azure Data Factory to Microsoft Purview from Azure Data Factory Portal. **Manage** -> **Microsoft Purview**. Select **Connect to a Purview account**. <br> Validate if Azure resource tag **catalogUri** exists in ADF Azure resource. |Azure Data Factory Contributor / Data curator |Use this step if you have **Azure Data Factory**. |
|32 |Verify if you have at least one **Microsoft 365 required license** in your Azure Active Directory tenant to use sensitivity labels in Microsoft Purview. |Azure Active Directory *Global Reader* |Perform this step if you're planning on extending **Sensitivity Labels from Microsoft 365 to Microsoft Purview** <br> For more information, see [licensing requirements to use sensitivity labels on files and database columns in Microsoft Purview](sensitivity-labels-frequently-asked-questions.yml) |
purview Tutorial Azure Purview Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-azure-purview-tools.md
This article lists several open-source tools and utilities (command-line, python
- *Microsoft Purview Adopters*: Adopters who have migrated from starting up and exploring Microsoft Purview and are smoothly using Microsoft Purview for more than a few months.
-- *Microsoft Purview Long-Term Regular Users*: Long-term users who have been using Microsoft Purview for more than one year and are now confident and comfortable using most advanced Microsoft Purview use cases on the Azure portal and Microsoft Purview Studio; furthermore they have near perfect knowledge and awareness of the Microsoft Purview REST APIs and the other use cases supported via Microsoft Purview APIs.
+- *Microsoft Purview Long-Term Regular Users*: Long-term users who have been using Microsoft Purview for more than one year and are now confident and comfortable using most advanced Microsoft Purview use cases on the Azure portal and Microsoft Purview governance portal; furthermore they have near perfect knowledge and awareness of the Microsoft Purview REST APIs and the other use cases supported via Microsoft Purview APIs.
## Microsoft Purview open-source tools and utilities list
This article lists several open-source tools and utilities (command-line, python
1. [Purview-API-via-PowerShell](https://github.com/Azure/Azure-Purview-API-PowerShell/blob/main/README.md)
   - **Recommended customer journey stages**: *Learners, Innovators, Enthusiasts, Adopters, Long-Term Regular Users*
- - **Description**: This utility is based on and covers the entire set of [Microsoft Purview REST API Reference](/rest/api/purview/) Microsoft Docs. [Download & Install from PowerShell Gallery](https://aka.ms/purview-api-ps). It helps you execute all the documented Microsoft Purview REST APIs through a breezy fast and easy to use PowerShell interface. Use and automate Microsoft Purview APIs for regular and long-term usage via command-line and scripted methods. This is an alternative for customers looking to do bulk tasks in automated manner, batch-mode, or scheduled cron jobs; as against the GUI method of using the Azure portal and Microsoft Purview Studio. Detailed documentation, sample usage guide, self-help, and examples are available on [GitHub:Azure-Purview-API-PowerShell](https://github.com/Azure/Azure-Purview-API-PowerShell).
+ - **Description**: This utility is based on and covers the entire set of [Microsoft Purview REST API Reference](/rest/api/purview/) Microsoft Docs. [Download & Install from PowerShell Gallery](https://aka.ms/purview-api-ps). It helps you execute all the documented Microsoft Purview REST APIs through a fast, easy-to-use PowerShell interface. Use and automate Microsoft Purview APIs for regular and long-term usage via command-line and scripted methods. This is an alternative for customers looking to do bulk tasks in an automated manner, in batch mode, or as scheduled cron jobs, as opposed to the GUI method of using the Azure portal and the Microsoft Purview governance portal. Detailed documentation, sample usage guide, self-help, and examples are available on [GitHub:Azure-Purview-API-PowerShell](https://github.com/Azure/Azure-Purview-API-PowerShell).
1. [Purview-Starter-Kit](https://aka.ms/PurviewKickstart)
purview Tutorial Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-data-owner-policies-storage.md
To register your resource and enable data use governance, follow these steps:
:::image type="content" source="media/tutorial-data-owner-policies-storage/register-blob-permission.png" alt-text="Screenshot that shows the exceptions to allow trusted Microsoft services to access the storage account.":::
-1. Once you have set up authentication for your storage account, go to the [Microsoft Purview Studio](https://web.purview.azure.com/).
+1. Once you have set up authentication for your storage account, go to the [Microsoft Purview governance portal](https://web.purview.azure.com/).
1. Select **Data Map** on the left menu.
- :::image type="content" source="media/tutorial-data-owner-policies-storage/select-data-map.png" alt-text="Screenshot that shows the far left menu in the Microsoft Purview Studio open with Data Map highlighted.":::
+ :::image type="content" source="media/tutorial-data-owner-policies-storage/select-data-map.png" alt-text="Screenshot that shows the far left menu in the Microsoft Purview governance portal open with Data Map highlighted.":::
1. Select **Register**.
- :::image type="content" source="media/tutorial-data-owner-policies-storage/select-register.png" alt-text="Screenshot that shows Microsoft Purview Studio Data Map sources, with the register button highlighted at the top.":::
+ :::image type="content" source="media/tutorial-data-owner-policies-storage/select-register.png" alt-text="Screenshot that shows the Microsoft Purview governance portal Data Map sources, with the register button highlighted at the top.":::
1. On **Register sources**, select **Azure Blob Storage**.
To register your resource and enable data use governance, follow these steps:
## Create a data owner policy
-1. Sign in to the [Microsoft Purview Studio](https://web.purview.azure.com/resource/).
+1. Sign in to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
1. Navigate to the **Data policy** feature using the left side panel. Then select **Data policies**.
To register your resource and enable data use governance, follow these steps:
## Publish a data owner policy
-1. Sign in to the [Microsoft Purview Studio](https://web.purview.azure.com/resource/).
+1. Sign in to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
1. Navigate to the **Data policy** feature using the left side panel. Then select **Data policies**.
- :::image type="content" source="./media/access-policies-common/policy-onboard-guide-2.png" alt-text="Screenshot showing the Microsoft Purview studio with the leftmost menu open, Policy Management highlighted, and Data Policies selected on the next page.":::
+ :::image type="content" source="./media/access-policies-common/policy-onboard-guide-2.png" alt-text="Screenshot showing the Microsoft Purview governance portal with the leftmost menu open, Policy Management highlighted, and Data Policies selected on the next page.":::
1. The Policy portal will present the list of existing policies in Microsoft Purview. Locate the policy that needs to be published. Select the **Publish** button in the top-right corner of the page.
To register your resource and enable data use governance, follow these steps:
To delete a policy in Microsoft Purview, follow these steps:
-1. Sign in to the [Microsoft Purview Studio](https://web.purview.azure.com/resource/).
+1. Sign in to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
1. Navigate to the **Data policy** feature using the left side panel. Then select **Data policies**.
purview Tutorial Metadata Policy Collections Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-metadata-policy-collections-apis.md
Last updated 09/24/2021
In August 2021, access control in Microsoft Purview moved from Azure Identity & Access Management (IAM) (control plane) to [Microsoft Purview collections](how-to-create-and-manage-collections.md) (data plane). This change gives enterprise data curators and administrators more precise, granular access control on their data sources scanned by Microsoft Purview. The change also enables organizations to audit right access and right use of their data.
-This tutorial guides you through step-by-step usage of the Microsoft Purview Metadata Policy APIs to help you add users, groups, or service principals to a collection, and manage or remove their roles within that collection. REST APIs are an alternative method to using the Azure portal or Microsoft Purview Studio to achieve the same granular role-based access control.
+This tutorial guides you through step-by-step usage of the Microsoft Purview Metadata Policy APIs to help you add users, groups, or service principals to a collection, and manage or remove their roles within that collection. REST APIs are an alternative method to using the Azure portal or Microsoft Purview governance portal to achieve the same granular role-based access control.
For more information about the built-in roles in Microsoft Purview, see the [Microsoft Purview permissions guide](catalog-permissions.md#roles). The guide maps the roles to the level of access permissions that are granted to users.
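As a hedged illustration of the API surface described above, the following Python sketch lists the collection metadata policies in an account; the account name is a placeholder and the api-version is assumed from the public reference.

```python
# Minimal sketch: list metadata policies through the Metadata Policy REST API
# instead of the portal. Account name and api-version are assumptions.
import requests
from azure.identity import DefaultAzureCredential

account = "contoso-purview"  # placeholder account name
token = DefaultAzureCredential().get_token("https://purview.azure.net/.default").token

resp = requests.get(
    f"https://{account}.purview.azure.com/policystore/metadataPolicies",
    params={"api-version": "2021-07-01"},  # assumed api-version
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
for policy in resp.json().get("values", []):
    print(policy.get("name"), policy.get("id"))
```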
purview Tutorial Purview Audit Logs Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-purview-audit-logs-diagnostics.md
For step-by-step explanations and manual setup:
Now that Event Hubs is deployed and created, connect Microsoft Purview diagnostics audit logging to Event Hubs.
-1. Go to your Microsoft Purview account home page. This page is where the overview information is displayed. It's not the Microsoft Purview Studio home page.
+1. Go to your Microsoft Purview account home page. This page is where the overview information is displayed. It's not the Microsoft Purview governance portal home page.
1. On the left menu, select **Monitoring** > **Diagnostic settings**.
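Once the diagnostic setting streams audit events to the hub, a consumer can read them. The sketch below is illustrative only (it isn't part of the tutorial); the connection string and hub name are placeholders.

```python
# Hedged sketch: read Purview audit records from the Event Hubs instance that the
# diagnostic setting targets. Azure diagnostic logs arrive as JSON with a top-level
# "records" array. Connection string and hub name are placeholders.
import json
from azure.eventhub import EventHubConsumerClient

client = EventHubConsumerClient.from_connection_string(
    "<event-hubs-namespace-connection-string>",  # placeholder
    consumer_group="$Default",
    eventhub_name="<event-hub-name>",            # placeholder
)

def on_event(partition_context, event):
    for record in json.loads(event.body_as_str()).get("records", []):
        print(record.get("time"), record.get("operationName"))

with client:
    client.receive(on_event=on_event, starting_position="-1")  # "-1" = from the start
```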
purview Tutorial Register Scan On Premises Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-register-scan-on-premises-sql-server.md
Microsoft Purview is designed to connect to data sources to help you manage sens
In this tutorial, you'll learn how to:

> [!div class="checklist"]
-> * Sign in to the Microsoft Purview Studio.
+> * Sign in to the Microsoft Purview governance portal.
> * Create a collection in Microsoft Purview.
> * Create a self-hosted integration runtime.
> * Store credentials in an Azure Key Vault.
In this tutorial, you'll learn how to:
- A Microsoft Purview account. If you don't already have one, you can [follow our quickstart guide to create one](create-catalog-portal.md).
- An [on-premises SQL Server](https://www.microsoft.com/sql-server/sql-server-downloads).
-## Sign in to Microsoft Purview Studio
+## Sign in to the Microsoft Purview governance portal
-To interact with Microsoft Purview, you'll connect to the [Microsoft Purview Studio](https://web.purview.azure.com/resource/) through the Azure portal. You can find the studio by going to your Microsoft Purview account in the [Azure portal](https://portal.azure.com), and selecting the **Open Microsoft Purview Studio** tile on the overview page.
+To interact with Microsoft Purview, you'll connect to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/) through the Azure portal. You can find the governance portal by going to your Microsoft Purview account in the [Azure portal](https://portal.azure.com), and selecting the **Open Microsoft Purview governance portal** tile on the overview page.
## Create a collection
Collections in Microsoft Purview are used to organize assets and sources into a
### Check permissions
-To create and manage collections in Microsoft Purview, you'll need to be a **Collection Admin** within Microsoft Purview. We can check these permissions in the [Microsoft Purview Studio](use-azure-purview-studio.md).
+To create and manage collections in Microsoft Purview, you'll need to be a **Collection Admin** within Microsoft Purview. We can check these permissions in the [Microsoft Purview governance portal](use-azure-purview-studio.md).
1. Select **Data Map > Collections** from the left pane to open the collection management page.
- :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/find-collections.png" alt-text="Screenshot of Microsoft Purview studio window, opened to the Data Map, with the Collections tab selected." border="true":::
+ :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/find-collections.png" alt-text="Screenshot of the Microsoft Purview governance portal window, opened to the Data Map, with the Collections tab selected." border="true":::
1. Select your root collection. The root collection is the top collection in your collection list and will have the same name as your Microsoft Purview account. In our example below, it is called Microsoft Purview Account.
- :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/select-root-collection.png" alt-text="Screenshot of Microsoft Purview studio window, opened to the Data Map, with the root collection highlighted." border="true":::
+ :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/select-root-collection.png" alt-text="Screenshot of the Microsoft Purview governance portal window, opened to the Data Map, with the root collection highlighted." border="true":::
1. Select **Role assignments** in the collection window.
- :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/role-assignments.png" alt-text="Screenshot of Microsoft Purview studio window, opened to the Data Map, with the role assignments tab highlighted." border="true":::
+ :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/role-assignments.png" alt-text="Screenshot of the Microsoft Purview governance portal window, opened to the Data Map, with the role assignments tab highlighted." border="true":::
1. To create a collection, you'll need to be in the collection admin list under role assignments. If you created the Microsoft Purview account, you should be listed as a collection admin under the root collection already. If not, you'll need to contact the collection admin to grant you permission.
- :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/collection-admins.png" alt-text="Screenshot of Microsoft Purview studio window, opened to the Data Map, with the collection admin section highlighted." border="true":::
+ :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/collection-admins.png" alt-text="Screenshot of the Microsoft Purview governance portal window, opened to the Data Map, with the collection admin section highlighted." border="true":::
### Create the collection

1. Select **+ Add a collection**. Again, only [collection admins](#check-permissions) can manage collections.
- :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/select-add-a-collection.png" alt-text="Screenshot of Microsoft Purview studio window, showing the new collection window, with the 'add a collection' buttons highlighted." border="true":::
+ :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/select-add-a-collection.png" alt-text="Screenshot of the Microsoft Purview governance portal window, showing the new collection window, with the 'add a collection' buttons highlighted." border="true":::
1. In the right panel, enter the collection name and description. If needed, you can also add users or groups as collection admins to the new collection.
1. Select **Create**.
- :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/create-collection.png" alt-text="Screenshot of Microsoft Purview studio window, showing the new collection window, with a display name and collection admins selected, and the create button highlighted." border="true":::
+ :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/create-collection.png" alt-text="Screenshot of the Microsoft Purview governance portal window, showing the new collection window, with a display name and collection admins selected, and the create button highlighted." border="true":::
1. The new collection's information will reflect on the page.
- :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/created-collection.png" alt-text="Screenshot of Microsoft Purview studio window, showing the newly created collection window." border="true":::
+ :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/created-collection.png" alt-text="Screenshot of the Microsoft Purview governance portal window, showing the newly created collection window." border="true":::
## Create a self-hosted integration runtime
The Self-Hosted Integration Runtime (SHIR) is the compute infrastructure used by
This tutorial assumes the machine where you'll install your self-hosted integration runtime can make network connections to the internet. This connection allows the SHIR to communicate between your source and Microsoft Purview. If your machine has a restricted firewall, or if you would like to secure your firewall, look into the [network requirements for the self-hosted integration runtime](manage-integration-runtimes.md#networking-requirements).
-1. On the home page of Microsoft Purview Studio, select **Data Map** from the left navigation pane.
+1. On the home page of the Microsoft Purview governance portal, select **Data Map** from the left navigation pane.
1. Under **Source management** on the left pane, select **Integration runtimes**, and then select **+ New**.
If you would like to create a new login and user to be able to scan your SQL ser
:::image type="content" source="media/tutorial-register-scan-on-premises-sql-server/create-credential-secret.png" alt-text="Add values to key vault credential.":::

1. Select **Create** to complete.
-1. In the [Microsoft Purview Studio](#sign-in-to-microsoft-purview-studio), navigate to the **Management** page in the left menu.
+1. In the [Microsoft Purview governance portal](#sign-in-to-the-microsoft-purview-governance-portal), navigate to the **Management** page in the left menu.
:::image type="content" source="media/tutorial-register-scan-on-premises-sql-server/select-management.png" alt-text="Select Management page on left menu.":::
If you would like to create a new login and user to be able to scan your SQL ser
## Register SQL Server
-1. Navigate to your Microsoft Purview account in the [Azure portal](https://portal.azure.com), and select the [Microsoft Purview Studio](#sign-in-to-microsoft-purview-studio).
+1. Navigate to your Microsoft Purview account in the [Azure portal](https://portal.azure.com), and select the [Microsoft Purview governance portal](#sign-in-to-the-microsoft-purview-governance-portal).
1. Under Sources and scanning in the left navigation, select **Integration runtimes**. Make sure a self-hosted integration runtime is set up. If it's not set up, follow the steps mentioned [here](manage-integration-runtimes.md) to create a self-hosted integration runtime for scanning on an on-premises or Azure VM that has access to your on-premises network.
If you would like to create a new login and user to be able to scan your SQL ser
To create and run a new scan, do the following:
-1. Select the **Data Map** tab on the left pane in the Microsoft Purview Studio.
+1. Select the **Data Map** tab on the left pane in the Microsoft Purview governance portal.
1. Select the SQL Server source that you registered.
If you're not going to continue to use this Microsoft Purview or SQL source movi
### Remove SHIR from Microsoft Purview
-1. On the home page of [Microsoft Purview Studio](https://web.purview.azure.com/resource/), select **Data Map** from the left navigation pane.
+1. On the home page of [the Microsoft Purview governance portal](https://web.purview.azure.com/resource/), select **Data Map** from the left navigation pane.
1. Under **Source management** on the left pane, select **Integration runtimes**.
purview Tutorial Using Rest Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-using-rest-apis.md
its password. Here's how:
Once the service principal is created, you need to assign data plane roles of your Microsoft Purview account to it. Follow the steps below to assign a role and establish trust between the service principal and the Purview account; a token-acquisition sketch follows the steps.
-1. Navigate to your [Microsoft Purview Studio](https://web.purview.azure.com/resource/).
+1. Navigate to your [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
1. Select the Data Map in the left menu.
1. Select Collections.
1. Select the root collection in the collections menu. This will be the top collection in the list, and will have the same name as your Microsoft Purview account.
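With the role assigned, the service principal can acquire a token and call the data plane. A minimal sketch follows; the tenant, client, and account names are placeholders, and the Discovery Query api-version is an assumption.

```python
# Hedged sketch: authenticate as the service principal and call a data plane API.
# IDs, secret, account name, and api-version are placeholders/assumptions.
import requests
from azure.identity import ClientSecretCredential

credential = ClientSecretCredential(
    tenant_id="<tenant-id>",          # placeholder
    client_id="<client-id>",          # placeholder
    client_secret="<client-secret>",  # placeholder
)
token = credential.get_token("https://purview.azure.net/.default").token

# Example: search the catalog with the Discovery Query API.
resp = requests.post(
    "https://<account-name>.purview.azure.com/catalog/api/search/query",
    params={"api-version": "2022-03-01-preview"},  # assumed api-version
    headers={"Authorization": f"Bearer {token}"},
    json={"keywords": "*", "limit": 5},
)
print(resp.status_code, resp.json() if resp.ok else resp.text)
```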
purview Use Azure Purview Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/use-azure-purview-studio.md
- Title: Use the Microsoft Purview Studio
-description: This article describes how to use Microsoft Purview Studio.
+ Title: Use the Microsoft Purview governance portal
+description: This article describes how to use the Microsoft Purview governance portal.
Last updated 02/12/2022
-# Use Microsoft Purview Studio
+# Use the Microsoft Purview governance portal
This article gives an overview of some of the main features of Microsoft Purview.

## Prerequisites
-* An Active Microsoft Purview account is already created in Azure portal and the user has permissions to access [Microsoft Purview Studio](https://web.purview.azure.com/resource/).
+* An Active Microsoft Purview account is already created in Azure portal and the user has permissions to access [the Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
## Launch Microsoft Purview account

* To launch your Microsoft Purview account, go to Microsoft Purview accounts in the Azure portal, select the account you want, and launch it.
- :::image type="content" source="./media/use-purview-studio/open-purview-studio.png" alt-text="Screenshot of Microsoft Purview window in Azure portal, with Microsoft Purview Studio button highlighted." border="true":::
+ :::image type="content" source="./media/use-purview-studio/open-purview-studio.png" alt-text="Screenshot of Microsoft Purview window in Azure portal, with the Microsoft Purview governance portal button highlighted." border="true":::
* Another way to launch your Microsoft Purview account is to go to `https://web.purview.azure.com`, select **Azure Active Directory** and an account name, and launch the account.
Knowledge center is where you can find all the videos and tutorials related to M
Microsoft Purview is localized in 18 languages. To change the language used, go to **Settings** from the top bar and select the desired language from the dropdown.

> [!NOTE]
> Only generally available features are localized. Features still in preview are in English regardless of which language is selected.
Microsoft Purview is localized in 18 languages. To change the language used, go
## Guided tours
-Each UX in Microsoft Purview Studio will have guided tours to give overview of the page. To start the guided tour, select **help** on the top bar and select **guided tours**.
+Each UX in the Microsoft Purview governance portal will have guided tours to give an overview of the page. To start the guided tour, select **help** on the top bar and select **guided tours**.
:::image type="content" source="./media/use-purview-studio/guided-tour.png" alt-text="Screenshot of the guided tour.":::
remote-rendering Create An Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/how-tos/create-an-account.md
Last updated 02/11/2020
+
# Create an Azure Remote Rendering account
The values for **`arrAccountId`** and **`arrAccountKey`** can be found in the po
* Go to the [Azure portal](https://www.portal.azure.com)
* Find your **"Remote Rendering Account"** - it should be in the **"Recent Resources"** list. You can also search for it in the search bar at the top. In that case, make sure that the subscription you want to use is selected in the Default subscription filter (filter icon next to search bar):

Clicking on your account brings you to this screen, which shows the **Account ID** right away:
This paragraph explains how to link storage accounts to your Remote Rendering ac
The steps in this paragraph have to be performed for each storage account that should use this access method. If you haven't created storage accounts yet, you can walk through the respective step in the [convert a model for rendering quickstart](../quickstarts/convert-model.md#storage-account-creation).
-Now it is assumed you have a storage account. Navigate to the storage account in the portal and go to the **Access Control (IAM)** tab for that storage account:
+1. Navigate to your storage account in the Azure portal
+1. Select **Access control (IAM)**.
-Ensure you have owner permissions over this storage account to ensure that you can add role assignments. If you don't have access, the **Add a role assignment** option will be disabled.
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-Click on the **Add** button in the "Add a role assignment" tile to add the role.
+ If you don't have owner permissions to this storage account, the **Add a role assignment** option will be disabled.
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). A scripted alternative is sketched after the screenshot below.
-Search for the role **Storage Blob Data Contributor** in the list or by typing it in the search field. Select the role by clicking on the item in the list and click **Next**.
+ | Setting | Value |
+ | | |
+ | Role | Storage Blob Data Contributor |
+ | Assign access to | User, group, or service principal |
+ | Members | Remote Rendering Account |
-
-Now select the new member for this role assignment:
-
-1. Click **+ Select members**.
-2. Search for the account name of your **Remote Rendering Account** in the *Select members* panel and click on the item corresponding to your **Remote Rendering Account** in the list.
-3. Confirm your selection with a click on **Select**.
-4. Click on **Next** until you are in the **Review + assign** tab.
--
-Finally check that the correct member is listed under *Members > Name* and then finish up the assignment by clicking **Review + assign**.
+ ![Screenshot showing Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
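If you prefer scripting over the portal, a hedged Python sketch of the same assignment follows. The subscription, resource names, and principal object ID are placeholders; the role-definition GUID is the well-known built-in ID for Storage Blob Data Contributor, and the parameter shape follows recent azure-mgmt-authorization versions.

```python
# Hedged sketch: assign "Storage Blob Data Contributor" on the storage account to
# the Remote Rendering account's identity. All angle-bracket values are placeholders.
import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

sub = "<subscription-id>"  # placeholder
scope = (f"/subscriptions/{sub}/resourceGroups/<resource-group>"
         "/providers/Microsoft.Storage/storageAccounts/<storage-account>")
role_id = (f"/subscriptions/{sub}/providers/Microsoft.Authorization/roleDefinitions/"
           "ba92f5b4-2d11-453d-a403-e96b0029c9fe")  # Storage Blob Data Contributor

client = AuthorizationManagementClient(DefaultAzureCredential(), sub)
client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # role assignment names must be GUIDs
    {
        "role_definition_id": role_id,
        "principal_id": "<remote-rendering-account-object-id>",  # placeholder
        "principal_type": "ServicePrincipal",
    },
)
```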
> [!WARNING]
-> In case your Remote Rendering account is not listed, refer to this [troubleshoot section](../resources/troubleshoot.md#cant-link-storage-account-to-arr-account).
+> If your Remote Rendering account is not listed, refer to this [troubleshoot section](../resources/troubleshoot.md#cant-link-storage-account-to-arr-account).
> [!IMPORTANT]
> Azure role assignments are cached by Azure Storage, so there may be a delay of up to 30 minutes between when you grant access to your remote rendering account and when it can be used to access your storage account. See the [Azure role-based access control (Azure RBAC) documentation](../../role-based-access-control/troubleshooting.md#role-assignment-changes-are-not-being-detected) for details.
Finally check that the correct member is listed under *Members > Name* and then
* [Authentication](authentication.md)
* [Using the Azure Frontend APIs for authentication](frontend-apis.md)
-* [Example PowerShell scripts](../samples/powershell-example-scripts.md)
+* [Example PowerShell scripts](../samples/powershell-example-scripts.md)
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/resource-provider-operations.md
Azure service: [App Service](../app-service/index.yml), [Azure Functions](../azu
> | microsoft.web/sites/networktrace/action | Network Trace Web Apps. | > | microsoft.web/sites/newpassword/action | Newpassword Web Apps. | > | microsoft.web/sites/sync/action | Sync Web Apps. |
-> | microsoft.web/sites/migratemysql/action | Migrate MySql Web Apps. |
+> | microsoft.web/sites/migratemysql/action | Migrate MySQL Web Apps. |
> | microsoft.web/sites/recover/action | Recover Web Apps. | > | microsoft.web/sites/restoresnapshot/action | Restore Web Apps Snapshots. | > | microsoft.web/sites/restorefromdeletedapp/action | Restore Web Apps From Deleted App. |
Azure service: [App Service](../app-service/index.yml), [Azure Functions](../azu
> | microsoft.web/sites/metricdefinitions/read | Get Web Apps Metric Definitions. | > | microsoft.web/sites/metrics/read | Get Web Apps Metrics. | > | microsoft.web/sites/metricsdefinitions/read | Get Web Apps Metrics Definitions. |
-> | microsoft.web/sites/migratemysql/read | Get Web Apps Migrate MySql. |
+> | microsoft.web/sites/migratemysql/read | Get Web Apps Migrate MySQL. |
> | microsoft.web/sites/networkConfig/read | Get App Service Network Configuration. | > | microsoft.web/sites/networkConfig/write | Update App Service Network Configuration. | > | microsoft.web/sites/networkConfig/delete | Delete App Service Network Configuration. |
Azure service: [App Service](../app-service/index.yml), [Azure Functions](../azu
> | microsoft.web/sites/slots/instances/processes/delete | Delete Web Apps Slots Instances Processes. | > | microsoft.web/sites/slots/metricdefinitions/read | Get Web Apps Slots Metric Definitions. | > | microsoft.web/sites/slots/metrics/read | Get Web Apps Slots Metrics. |
-> | microsoft.web/sites/slots/migratemysql/read | Get Web Apps Slots Migrate MySql. |
+> | microsoft.web/sites/slots/migratemysql/read | Get Web Apps Slots Migrate MySQL. |
> | microsoft.web/sites/slots/networkConfig/read | Get App Service Slots Network Configuration. | > | microsoft.web/sites/slots/networkConfig/write | Update App Service Slots Network Configuration. | > | microsoft.web/sites/slots/networkConfig/delete | Delete App Service Slots Network Configuration. |
search Search Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-language-support.md
Previously updated : 09/08/2021 Last updated : 04/21/2022

# Create an index for multiple languages in Azure Cognitive Search
-A key requirement in a multilingual search application is the ability to search over and retrieve results in the user's own language. In Azure Cognitive Search, one way to meet the language requirements of a multilingual app is to create dedicated fields for storing strings in a specific language, and then constrain full text search to just those fields at query time.
+A multilingual search application supports searching over and retrieving results in the user's own language. In Azure Cognitive Search, one way to meet the language requirements of a multilingual app is to create dedicated fields for storing strings in a specific language, and then constrain full text search to just those fields at query time.
+ On field definitions, [specify a language analyzer](index-add-language-analyzers.md) that invokes the linguistic rules of the target language.
+ On the query request, set the `searchFields` parameter to scope full text search to specific fields, and then use `select` to return just those fields that have compatible content.
-The success of this technique hinges on the integrity of field contents. Azure Cognitive Search does not translate strings or perform language detection as part of query execution. It is up to you to make sure that fields contain the strings you expect.
+The success of this technique hinges on the integrity of field content. By itself, Azure Cognitive Search does not translate strings or perform language detection as part of query execution. It's up to you to make sure that fields contain the strings you expect.
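To make the pattern concrete, here's a hedged sketch using the azure-search-documents Python SDK. The endpoint, key, index, and field names are invented for illustration.

```python
# Hedged sketch: per-language fields with dedicated language analyzers, then a query
# constrained to one of them. Endpoint, key, and names are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import SearchIndex, SearchableField, SimpleField

endpoint, key = "https://<service>.search.windows.net", "<admin-key>"  # placeholders

index = SearchIndex(
    name="hotels-multilingual",  # placeholder index name
    fields=[
        SimpleField(name="id", type="Edm.String", key=True),
        SearchableField(name="description_en", analyzer_name="en.microsoft"),
        SearchableField(name="description_fr", analyzer_name="fr.microsoft"),
    ],
)
SearchIndexClient(endpoint, AzureKeyCredential(key)).create_or_update_index(index)

# Scope full text search to the French field and return only that field.
client = SearchClient(endpoint, "hotels-multilingual", AzureKeyCredential(key))
for doc in client.search(search_text="hôtel de luxe",
                         search_fields=["description_fr"],
                         select=["description_fr"]):
    print(doc["description_fr"])
```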
+
+> [!TIP]
+> If text translation is a requirement, you can [create a skillset](cognitive-search-defining-skillset.md) that adds [text translation](cognitive-search-skill-text-translation.md) to the indexing pipeline. This approach requires [using an indexer](search-howto-create-indexers.md) and [attaching a Cognitive Services resource](cognitive-search-attach-cognitive-services.md).
+>
+> Text translation is built into the [Import data wizard](cognitive-search-quickstart-blob.md). If you have a [supported data source](search-indexer-overview.md#supported-data-sources) with text you'd like to translate, you can step through the wizard to try out the language detection and translation functionality.
## Define fields for content in different languages
The "analyzer" property on a field definition is used to set the [language analy
An intermediate (and perhaps obvious) step is that you have to [build and populate the index](search-get-started-dotnet.md) before formulating a query. We mention this step here for completeness. One way to determine index availability is by checking the indexes list in the [portal](https://portal.azure.com).
-> [!TIP]
-> Language detection and text translation are supported during data ingestion through [AI enrichment](cognitive-search-concept-intro.md) and [skillsets](cognitive-search-working-with-skillsets.md). If you have an Azure data source with mixed language content, you can try out the language detection and translation features using the [Import data wizard](cognitive-search-quickstart-blob.md).
-
## Constrain the query and trim results

Parameters on the query are used to limit search to specific fields and then trim the results of any fields not helpful to your scenario.
search Search Pagination Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-pagination-page-layout.md
Previously updated : 01/04/2022 Last updated : 04/21/2022

# How to work with search results in Azure Cognitive Search
Another approach that promotes order consistency is using a [custom scoring prof
## Hit highlighting
-Hit highlighting refers to text formatting (such as bold or yellow highlights) applied to matching terms in a result, making it easy to spot the match. Highlighting is useful for longer content fields, such as a description field, where the match is not immediately obvious.
+Hit highlighting refers to text formatting (such as bold or yellow highlights) applied to matching terms in a result, making it easy to spot the match. Highlighting is useful for longer content fields, such as a description field, where the match is not immediately obvious.
+
+Notice that highlighting is applied to individual terms. There is no highlight capability for the contents of an entire field. If you want highlighting over a phrase, you'll have to provide the matching terms (or phrase) in a quote-enclosed query string. This technique is described further on in this section.
Hit highlighting instructions are provided on the [query request](/rest/api/searchservice/search-documents). Queries that trigger query expansion in the engine, such as fuzzy and wildcard search, have limited support for hit highlighting.
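As a hedged sketch of such a request with the Python SDK (service, key, index, and field names are placeholders):

```python
# Hedged sketch: hit highlighting on a description field, with a quoted phrase so the
# highlights cover the adjacent terms. All names are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient("https://<service>.search.windows.net",
                      "hotels-sample-index",            # placeholder index name
                      AzureKeyCredential("<query-key>"))

results = client.search(
    search_text='"luxury hotel"',    # quote-enclosed phrase
    highlight_fields="description",  # comma-separated field list
    highlight_pre_tag="<b>",
    highlight_post_tag="</b>",
)
for doc in results:
    for fragment in (doc.get("@search.highlights") or {}).get("description", []):
        print(fragment)
```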
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-rbac.md
Built-in roles include generally available and preview roles.
| [Owner](../role-based-access-control/built-in-roles.md#owner) | (Generally available) Full access to the search resource, including the ability to assign Azure roles. Subscription administrators are members by default. |
| [Contributor](../role-based-access-control/built-in-roles.md#contributor) | (Generally available) Same level of access as Owner, minus the ability to assign roles or change authorization options. |
| [Reader](../role-based-access-control/built-in-roles.md#reader) | (Generally available) Limited access to partial service information. In the portal, the Reader role can access information in the service Overview page, in the Essentials section and under the Monitoring tab. All other tabs and pages are off limits. </br></br>This role has access to service information: resource group, service status, location, subscription name and ID, tags, URL, pricing tier, replicas, partitions, and search units. This role also has access to service metrics: search latency, percentage of throttled requests, average queries per second. </br></br>There is no access to API keys, role assignments, content (indexes or synonym maps), or content metrics (storage consumed, number of objects). |
-| [Search Service Contributor](../role-based-access-control/built-in-roles.md#search-service-contributor) | (Generally available) This role is identical to the Contributor role and applies to control plane operations. </br></br>(Preview) When you enable the RBAC preview for the data plane, this role also provides full access to all data plane actions on indexes, synonym maps, indexers, data sources, and skillsets as defined by [`Microsoft.Search/searchServices/*`](../role-based-access-control/resource-provider-operations.md#microsoftsearch). This role is for search service administrators who need to fully manage both the service and its content. </br></br>Like Contributor, members of this role cannot make or manage role assignments or change authorization options. Your service must be enabled for the preview for data requests. |
+| [Search Service Contributor](../role-based-access-control/built-in-roles.md#search-service-contributor) | (Generally available) This role is identical to the Contributor role and applies to control plane operations. </br></br>(Preview) When you enable the RBAC preview for the data plane, this role also provides full access to all data plane actions on indexes, synonym maps, indexers, data sources, and skillsets as defined by [`Microsoft.Search/searchServices/*`](../role-based-access-control/resource-provider-operations.md#microsoftsearch). This role is for search service administrators who need to fully manage both the service and its content. </br></br>Like Contributor, members of this role cannot make or manage role assignments or change authorization options. To use the preview capabilities of this role, your service must have the preview feature enabled, as described in this article. |
| [Search Index Data Contributor](../role-based-access-control/built-in-roles.md#search-index-data-contributor) | (Preview) Provides full data plane access to content in all indexes on the search service. This role is for developers or index owners who need to import, refresh, or query the documents collection of an index. |
| [Search Index Data Reader](../role-based-access-control/built-in-roles.md#search-index-data-reader) | (Preview) Provides read-only data plane access to search indexes on the search service. This role is for apps and users who run queries. |
security Pen Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/pen-testing.md
na Previously updated : 02/03/2021 Last updated : 04/21/2022
We don't perform penetration testing of your application for you, but we do un
As of June 15, 2017, Microsoft no longer requires pre-approval to conduct a penetration test against Azure resources. This process is only related to Microsoft Azure, and not applicable to any other Microsoft Cloud Service.
->[!IMPORTANT]
->While notifying Microsoft of pen testing activities is no longer required customers must still comply with the [Microsoft Cloud Unified Penetration Testing Rules of Engagement](https://www.microsoft.com/msrc/pentest-rules-of-engagement).
+
+> [!IMPORTANT]
> While notifying Microsoft of pen testing activities is no longer required, customers must still comply with the [Microsoft Cloud Unified Penetration Testing Rules of Engagement](https://www.microsoft.com/msrc/pentest-rules-of-engagement).
Standard tests you can perform include:
Standard tests you can perform include:
One type of pen test that you can't perform is any kind of [Denial of Service (DoS)](https://en.wikipedia.org/wiki/Denial-of-service_attack) attack. This test includes initiating a DoS attack itself, or performing related tests that might determine, demonstrate, or simulate any type of DoS attack.
->[!Note]
->Microsoft has partnered with BreakingPoint Cloud to build an interface where you can generate traffic against DDoS Protection-enabled public IP addresses for simulations. To learn more about the BreakingPoint Cloud simulation, see [testing through simulations](../../ddos-protection/test-through-simulations.md).
+> [!Note]
+> You may only simulate attacks using Microsoft approved testing partners:
+> - [Red Button](https://www.red-button.net/): Work with a dedicated team of experts to simulate real-world DDoS attack scenarios in a controlled environment.
+> - [BreakingPoint Cloud](https://www.ixiacom.com/products/breakingpoint-cloud): A self-service traffic generator where your customers can generate traffic against DDoS Protection-enabled public endpoints for simulations.
+>
+> To learn more about the BreakingPoint Cloud simulation, see [testing with simulation partners](../../ddos-protection/test-through-simulations.md).
+
## Next steps
sentinel Sap Solution Log Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-solution-log-reference.md
This article describes the functions, logs, and tables available as part of the
## Functions available from the SAP solution
-This section describes the [functions](/azure-monitor/logs/functions.md) that are available in your workspace after you've deployed the Continuous Threat Monitoring for SAP solution. Find these functions in the Microsoft Sentinel **Logs** page to use in your KQL queries, listed under **Workspace functions**.
+This section describes the [functions](/azure/azure-monitor/logs/functions) that are available in your workspace after you've deployed the Continuous Threat Monitoring for SAP solution. Find these functions in the Microsoft Sentinel **Logs** page to use in your KQL queries, listed under **Workspace functions**.
Users are *strongly encouraged* to use the functions as the subjects of their analysis whenever possible, instead of the underlying logs or tables. These functions are intended to serve as the principal user interface to the data. They form the basis for all the built-in analytics rules and workbooks available to you out of the box. This allows for changes to be made to the data infrastructure beneath the functions, without breaking user-created content.
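As a hedged illustration, the sketch below runs one of these functions as a KQL query from Python; the workspace GUID is a placeholder and `SAPAuditLog` is an assumed example function name — substitute any function deployed with the solution in your workspace.

```python
# Hedged sketch: query a workspace function (used like a table in KQL) with the
# azure-monitor-query package. Workspace ID and function name are assumptions.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id="<workspace-guid>",  # placeholder
    query="SAPAuditLog | take 10",    # assumed function name; call it like a table
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```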
service-fabric How To Managed Cluster Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-availability-zones.md
Service Fabric managed cluster supports deployments that span across multiple Av
Sample templates are available: [Service Fabric cross availability zone template](https://github.com/Azure-Samples/service-fabric-cluster-templates)
-## Recommendations for zone resilient Azure Service Fabric managed clusters
+## Topology for zone resilient Azure Service Fabric managed clusters
+
+>[!NOTE]
+>The benefit of spanning the primary node type across availability zones is really only seen for three zones and not just two.
+ A Service Fabric cluster distributed across Availability Zones ensures high availability of the cluster state. The recommended topology for a managed cluster requires the resources outlined below:
site-recovery Vmware Azure Set Up Replication Tutorial Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-set-up-replication-tutorial-preview.md
VMware to Azure replication includes the following procedures:
- Sign in to the [Azure portal](https://portal.azure.com/).
- Prepare Azure account
-- Prepare infrastructure - [Create a recovery Services vault](./quickstart-create-vault-template.md?tabs=CLI)
-- [Deploy an Azure Site Recovery replication appliance](deploy-vmware-azure-replication-appliance-preview.md)
+- Prepare infrastructure - [deploy an Azure Site Recovery replication appliance](deploy-vmware-azure-replication-appliance-preview.md)
- Enable replication

## Prepare Azure account
Ensure the [pre-requisites](vmware-physical-azure-support-matrix.md) across stor
Follow these steps to enable replication:
-1. Select **Site Recovery** under **Getting Started** section. Click **Enable Replication (Preview)** under the VMware section.
+1. Select **Site Recovery** under the **Getting Started** section. Click **Enable Replication (Preview)** under the VMware section.
2. Choose the machine type you want to protect through Azure Site Recovery.
Follow these steps to enable replication:
![Select source machines](./media/vmware-azure-set-up-replication-tutorial-preview/select-source.png)
-3. After choosing the virtual machines, select the vCenter server added to Azure Site Recovery replication appliance, registered in this vault.
+3. After choosing the machine type, select the vCenter server added to Azure Site Recovery replication appliance, registered in this vault.
-4. Later, search the source VM name to protect the machines of your choice. To review the selected VMs, select **Selected resources**.
+4. Search the source machine name to protect it. To review the selected machines, select **Selected resources**.
-5. After you select the list of VMs, select **Next** to proceed to source settings. Here, select the replication appliance and VM credentials. These credentials will be used to push mobility agent on the VM by configuration server to complete enabling Azure Site Recovery. Ensure accurate credentials are chosen.
+5. After you select the list of VMs, select **Next** to proceed to source settings. Here, select the replication appliance and VM credentials. These credentials are used by the Azure Site Recovery replication appliance to push the Mobility agent onto the machine and complete enabling Azure Site Recovery. Ensure accurate credentials are chosen.
>[!NOTE]
>For Linux OS, ensure you provide the root credentials. For Windows OS, a user account with admin privileges should be added. These credentials will be used to push the Mobility Service onto the source machine during the enable replication operation.
Follow these steps to enable replication:
9. Select the storage.
   - Cache storage account:
- Now, choose the cache storage account which Azure Site Recovery uses for staging purposes – caching and storing logs before writing the changes on to the managed disks.
+ Now, choose the cache storage account which Azure Site Recovery uses for staging purposes - caching and storing logs before writing the changes on to the managed disks.
By default, a new LRS v1 type storage account will be created by Azure Site Recovery for the first enable replication operation in a vault. For the next operations, the same cache storage account will be re-used.
   - Managed disks
spring-cloud Tutorial Managed Identities Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/tutorial-managed-identities-mysql.md
The following video describes how to manage secrets using Azure Key Vault.
* [JDK 8](/azure/java/jdk/java-jdk-install)
* [Maven 3.0 or above](http://maven.apache.org/install.html)
-* [Azure CLI](/cli/azure/install-azure-cli?view=azure-cli-latest) or [Azure Cloud Shell](../cloud-shell/overview.md)
-* An existing Key Vault. If you need to create a Key Vault, you can use the [Azure portal](../key-vault/secrets/quick-create-portal.md) or [Azure CLI](/cli/azure/keyvault?view=azure-cli-latest#az-keyvault-create)
+* [Azure CLI](/cli/azure/install-azure-cli) or [Azure Cloud Shell](../cloud-shell/overview.md)
+* An existing Key Vault. If you need to create a Key Vault, you can use the [Azure portal](../key-vault/secrets/quick-create-portal.md) or [Azure CLI](/cli/azure/keyvault#az-keyvault-create)
* An existing Azure Database for MySQL instance with a database named `demo`. If you need to create an Azure Database for MySQL, you can use the [Azure portal](../mysql/quickstart-create-mysql-server-database-using-azure-portal.md) or [Azure CLI](../mysql/quickstart-create-mysql-server-database-using-azure-cli.md)

## Create a resource group
az mysql db create \
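Completed with placeholder values, this command might look like the following sketch; the server and resource group names are assumptions, while `demo` matches the database name required in the prerequisites:

```azurecli
# Creates the demo database on an existing server (server and group names are placeholders).
az mysql db create \
    --resource-group <resource-group-name> \
    --server-name <mysql-server-name> \
    --name demo
```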
## Create an app and service in Azure Spring Cloud
-After installing the corresponding extension, create an Azure Spring Cloud instance with the Azure CLI command [az spring-cloud create](/cli/azure/spring-cloud?view=azure-cli-latest#az-spring-cloud-create).
+After installing the corresponding extension, create an Azure Spring Cloud instance with the Azure CLI command [az spring-cloud create](/cli/azure/spring-cloud#az-spring-cloud-create).
```azurecli
az extension add --name spring-cloud
```
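A minimal sketch of the create command itself; the instance and resource group names are placeholders:

```azurecli
# Creates the Azure Spring Cloud instance (names are placeholders).
az spring-cloud create \
    --resource-group <resource-group-name> \
    --name <spring-cloud-instance-name>
```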
Make a note of the returned `url`, which will be in the format `https://<your-ap
## Grant your app access to Key Vault
-Use [az keyvault set-policy](/cli/azure/keyvault?view=azure-cli-latest#az-keyvault-set-policy) to grant proper access in Key Vault for your app.
+Use [az keyvault set-policy](/cli/azure/keyvault#az-keyvault-set-policy) to grant proper access in Key Vault for your app.
```azurecli
# Placeholder values; grants the app's managed identity read access to secrets.
az keyvault set-policy \
    --name <your-keyvault-name> \
    --object-id <managed-identity-object-id> \
    --secret-permissions get list
```
This [sample](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/m
mvn clean package
```
-4. Now deploy the app to Azure with the Azure CLI command [az spring-cloud app deploy](/cli/azure/spring-cloud/app?view=azure-cli-latest#az-spring-cloud-app-deploy).
+4. Now deploy the app to Azure with the Azure CLI command [az spring-cloud app deploy](/cli/azure/spring-cloud/app#az-spring-cloud-app-deploy).
```azurecli
# Placeholder names; deploys the jar built in the previous step.
az spring-cloud app deploy \
    --resource-group <resource-group-name> \
    --service <spring-cloud-instance-name> \
    --name <app-name> \
    --jar-path target/<artifact>.jar
```
sql-database Sql Database Import Purview Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sql-database/scripts/sql-database-import-purview-labels.md
description: Import your classification from Microsoft Purview in your Azure SQL
-+ ms.devlang: azurepowershell
This document describes how to add Microsoft Purview labels in your Azure SQL Da
## Provide permissions to the application

1. In your Azure portal, search for **Microsoft Purview accounts**.
-2. Select the Microsoft Purview account where your SQL databases and Synapse are classified.
-3. Open **Access control (IAM)**, select **Add**.
-4. Select **Add role assignment**.
-5. In the **Role** section, search for **Microsoft Purview Data Reader** and select it.
-6. In the **Select** section, search for the application you previously created, select it, and hit **Save**.
+1. Select the Microsoft Purview account where your SQL databases and Synapse are classified.
+
+1. Assign the Microsoft Purview Data Reader role to the application you previously created.
+
+ For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
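As a scripted alternative, the same assignment can be made with the Azure CLI; a sketch, assuming the application's client ID and the Purview account's resource ID are at hand:

```azurecli
# Role name as used in the steps above; both IDs are placeholders.
az role assignment create \
    --assignee <application-client-id> \
    --role "Microsoft Purview Data Reader" \
    --scope <purview-account-resource-id>
```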
## Extract the classification from Microsoft Purview
static-web-apps Build Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/build-configuration.md
Previously updated : 12/21/2021 Last updated : 04/20/2022
The following table lists the available configuration settings.
| Property | Description | Required |
|---|---|---|
-| `app_location` | This folder contains the source code for your front-end application. The value is relative to the repository root in GitHub and the current working folder in Azure DevOps. | Yes |
-| `api_location` | This folder that contains the source code for your API application. The value is relative to the repository root in GitHub and the current working folder in Azure DevOps. | No |
+| `app_location` | This folder contains the source code for your front-end application. The value is relative to the repository root in GitHub and the current working folder in Azure DevOps. When used with `skip_app_build: true`, this value is the app's build output location. | Yes |
+| `api_location` | This folder contains the source code for your API application. The value is relative to the repository root in GitHub and the current working folder in Azure DevOps. When used with `skip_api_build: true`, this value is the API's build output location. | No |
| `output_location` | If your web app runs a build step, the output location is the folder where the public files are generated. For most projects, the `output_location` is relative to the `app_location`. However, for .NET projects, the location is relative to the publish output folder. | No |
| `app_build_command` | For Node.js applications, you can define a custom command to build the static content application.<br><br>For example, to configure a production build for an Angular application create an npm script named `build-prod` to run `ng build --prod` and enter `npm run build-prod` as the custom command. If left blank, the workflow tries to run the `npm run build` or `npm run build:azure` commands. | No |
| `api_build_command` | For Node.js applications, you can define a custom command to build the Azure Functions API application. | No |
If you want to skip building the API, you can bypass the automatic build and dep
Steps to skip building the API:

- In the *staticwebapp.config.json* file, set `apiRuntime` to the correct language and version. Refer to [Configure Azure Static Web Apps](configuration.md#selecting-the-api-language-runtime-version) for the list of supported languages and versions.
-```json
-{
- "platform": {
- "apiRuntime": "node:16"
- }
-}
-```
+ ```json
+ {
+ "platform": {
+ "apiRuntime": "node:16"
+ }
+ }
+ ```
- Set `skip_api_build` to `true`.
+- Set `api_location` to the folder containing the built API app to deploy. This path is relative to the repository root in GitHub Actions and `cwd` in Azure Pipelines.
# [GitHub Actions](#tab/github-actions)
with:
output_location: "public" # Built app content directory, relative to app_location - optional
  skip_api_build: true
```
+
# [Azure Pipelines](#tab/azure-devops)
-This feature is unsupported in Azure Pipelines.
-++
+```yml
+...
+
+inputs:
+ app_location: 'src'
+ api_location: 'api'
+ output_location: 'public'
+ skip_api_build: true
+ azure_static_web_apps_api_token: $(deployment_token)
+```
+ ## Extend build timeout
storage Blob Upload Function Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-upload-function-trigger.md
Copy the value of the `connectionString` property and paste it somewhere to use
## Create the Computer Vision service
-Next, create the Computer Vision service account that will process our uploaded files. Computer Vision is part of Azure Cognitive Services and offers a variety of features for extracting data out of images. You can learn more about Computer Vision on the [overview page](/services/cognitive-services/computer-vision/#overview).
+Next, create the Computer Vision service account that will process our uploaded files. Computer Vision is part of Azure Cognitive Services and offers a variety of features for extracting data out of images. You can learn more about Computer Vision on the [overview page](/azure/cognitive-services/computer-vision/overview).
### [Azure portal](#tab/azure-portal)
If you're not going to continue to use this application, you can delete the reso
2) Select the **Delete resource group** button at the top of the resource group overview page.
3) Enter the resource group name *msdocs-storage-function* in the confirmation dialog.
4) Select delete.
-The process to delete the resource group may take a few minutes to complete.
+The process to delete the resource group may take a few minutes to complete.
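The same cleanup can be scripted with the Azure CLI; the resource group name matches the one used in this article:

```azurecli
# Deletes the resource group and every resource it contains; this cannot be undone.
az group delete --name msdocs-storage-function
```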
storage Storage Blob Pageblob Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-pageblob-overview.md
If you have a sparsely populated blob, you may want to just download the valid p
# [.NET v12 SDK](#tab/dotnet)
-To determine which pages are backed by data, use [PageBlobClient.GetPageRanges](/dotnet/api/azure.storage.blobs.specialized.pageblobclient.getpageranges). You can then enumerate the returned ranges and download the data in each range.
+To determine which pages are backed by data, use PageBlobClient.GetPageRanges. You can then enumerate the returned ranges and download the data in each range.
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/CRUD.cs" id="Snippet_ReadValidPageRegionsFromPageBlob":::
storage Storage Account Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-create.md
Every Resource Manager resource, including an Azure storage account, must belong
To create an Azure storage account with the Azure portal, follow these steps:

1. From the left portal menu, select **Storage accounts** to display a list of your storage accounts. If the portal menu isn't visible, click the menu button to toggle it on.
+
+ :::image type="content" source="media/storage-account-create/menu-expand-sml.png" alt-text="Image of the Azure Portal homepage showing the location of the Menu button near the top left corner of the browser" lightbox="media/storage-account-create/menu-expand-lrg.png":::
+ 1. On the **Storage accounts** page, select **Create**.
+ :::image type="content" source="media/storage-account-create/create-button-sml.png" alt-text="Image showing the location of the create button within the Azure Portal Storage Accounts page" lightbox="media/storage-account-create/create-button-lrg.png":::
+ Options for your new storage account are organized into tabs in the **Create a storage account** page. The following sections describe each of the tabs and their options.

### Basics tab
The following table describes the fields on the **Basics** tab.
| Project details | Resource group | Required | Create a new resource group for this storage account, or select an existing one. For more information, see [Resource groups](../../azure-resource-manager/management/overview.md#resource-groups). |
| Instance details | Storage account name | Required | Choose a unique name for your storage account. Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only. |
| Instance details | Region | Required | Select the appropriate region for your storage account. For more information, see [Regions and Availability Zones in Azure](../../availability-zones/az-overview.md).<br /><br />Not all regions are supported for all types of storage accounts or redundancy configurations. For more information, see [Azure Storage redundancy](storage-redundancy.md).<br /><br />The choice of region can have a billing impact. For more information, see [Storage account billing](storage-account-overview.md#storage-account-billing). |
-| Instance details | Performance | Required | Select **Standard** performance for general-purpose v2 storage accounts (default). This type of account is recommended by Microsoft for most scenarios. For more information, see [Types of storage accounts](storage-account-overview.md#types-of-storage-accounts).<br /><br />Select **Premium** for scenarios requiring low latency. After selecting **Premium**, select the type of premium storage account to create. The following types of premium storage accounts are available: <ul><li>[Block blobs](./storage-account-overview.md)</li><li>[File shares](../files/storage-files-planning.md#management-concepts)</li><li>[Page blobs](../blobs/storage-blob-pageblob-overview.md)</li></ul> |
+| Instance details | Performance | Required | Select **Standard** performance for general-purpose v2 storage accounts (default). This type of account is recommended by Microsoft for most scenarios. For more information, see [Types of storage accounts](storage-account-overview.md#types-of-storage-accounts).<br /><br />Select **Premium** for scenarios requiring low latency. After selecting **Premium**, select the type of premium storage account to create. The following types of premium storage accounts are available: <ul><li>[Block blobs](./storage-account-overview.md)</li><li>[File shares](../files/storage-files-planning.md#management-concepts)</li><li>[Page blobs](../blobs/storage-blob-pageblob-overview.md)</li></ul><br /><br />Microsoft recommends creating a general-purpose v2, premium block blob, or premium file share account for most scenarios. To select a legacy account type, use the link provided beneath **Instance details**. For more information about legacy account types, see [Legacy storage account types](storage-account-overview.md#legacy-storage-account-types). |
| Instance details | Redundancy | Required | Select your desired redundancy configuration. Not all redundancy options are available for all types of storage accounts in all regions. For more information about redundancy configurations, see [Azure Storage redundancy](storage-redundancy.md).<br /><br />If you select a geo-redundant configuration (GRS or GZRS), your data is replicated to a data center in a different region. For read access to data in the secondary region, select **Make read access to data available in the event of regional unavailability**. |
-The following image shows a standard configuration for a new storage account.
+The following image shows a standard configuration of the basic properties for a new storage account.
### Advanced tab
The following table describes the fields on the **Advanced** tab.
| Blob storage | Access tier | Required | Blob access tiers enable you to store blob data in the most cost-effective manner, based on usage. Select the hot tier (default) for frequently accessed data. Select the cool tier for infrequently accessed data. For more information, see [Hot, Cool, and Archive access tiers for blob data](../blobs/access-tiers-overview.md). |
| Azure Files | Enable large file shares | Optional | Available only for standard file shares with the LRS or ZRS redundancies. |
+The following image shows a standard configuration of the advanced properties for a new storage account.
++

### Networking tab

On the **Networking** tab, you can configure network connectivity and routing preference settings for your new storage account. These options can also be configured after the storage account is created.
The following table describes the fields on the **Networking** tab.
| Network connectivity | Connectivity method | Required | By default, incoming network traffic is routed to the public endpoint for your storage account. You can specify that traffic must be routed to the public endpoint through an Azure virtual network. You can also configure private endpoints for your storage account. For more information, see [Use private endpoints for Azure Storage](storage-private-endpoints.md). |
| Network routing | Routing preference | Required | The network routing preference specifies how network traffic is routed to the public endpoint of your storage account from clients over the internet. By default, a new storage account uses Microsoft network routing. You can also choose to route network traffic through the POP closest to the storage account, which may lower networking costs. For more information, see [Network routing preference for Azure Storage](network-routing-preference.md). |
+The following image shows a standard configuration of the networking properties for a new storage account.
++

### Data protection tab

On the **Data protection** tab, you can configure data protection options for blob data in your new storage account. These options can also be configured after the storage account is created. For an overview of data protection options in Azure Storage, see [Data protection overview](../blobs/data-protection-overview.md).
The following table describes the fields on the **Data protection** tab.
| Tracking | Enable blob change feed | Optional | The blob change feed provides transaction logs of all changes to all blobs in your storage account, as well as to their metadata. For more information, see [Change feed support in Azure Blob Storage](../blobs/storage-blob-change-feed.md). |
| Access control | Enable version-level immutability support | Optional | Enable support for immutability policies that are scoped to the blob version. If this option is selected, then after you create the storage account, you can configure a default time-based retention policy for the account or for the container, which blob versions within the account or container will inherit by default. For more information, see [Enable version-level immutability support on a storage account](../blobs/immutable-policy-configure-version-scope.md#enable-version-level-immutability-support-on-a-storage-account). |
+The following image shows a standard configuration of the data protection properties for a new storage account.
++

### Encryption tab

On the **Encryption** tab, you can configure options that relate to how your data is encrypted when it is persisted to the cloud. Some of these options can be configured only when you create the storage account.
On the **Encryption** tab, you can configure options that relate to how your dat
| User-assigned identity | Required if **Encryption type** field is set to **Customer-managed keys**. | If you are configuring customer-managed keys at create time for the storage account, you must provide a user-assigned identity to use for authorizing access to the key vault. |
| Enable infrastructure encryption | Optional | By default, infrastructure encryption is not enabled. Enable infrastructure encryption to encrypt your data at both the service level and the infrastructure level. For more information, see [Create a storage account with infrastructure encryption enabled for double encryption of data](infrastructure-encryption-enable.md). |
+The following image shows a standard configuration of the encryption properties for a new storage account.
++

### Tags tab

On the **Tags** tab, you can specify Resource Manager tags to help organize your Azure resources. For more information, see [Tag resources, resource groups, and subscriptions for logical organization](../../azure-resource-manager/management/tag-resources.md).
+The following image shows a standard configuration of the index tag properties for a new storage account.
++

### Review + create tab

When you navigate to the **Review + create** tab, Azure runs validation on the storage account settings that you have chosen. If validation passes, you can proceed to create the storage account. If validation fails, then the portal indicates which settings need to be modified.
+The following image shows the **Review** tab data prior to the creation of a new storage account.
++

# [PowerShell](#tab/azure-powershell)

To create a general-purpose v2 storage account with PowerShell, first create a new resource group by calling the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) command:
storage Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-redundancy.md
Previously updated : 04/19/2022 Last updated : 04/21/2022
synapse-analytics Clone Lake Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/database-designer/clone-lake-database.md
+
+ Title: Clone a lake database using the database designer.
+description: Learn how to clone an entire lake database or specific tables within a lake database using the database designer.
+++++ Last updated : 04/08/2022++++
+# How-to: Clone a lake database
+
+In this article, you'll learn how to clone an existing [lake database](./concepts-lake-database.md) in Azure Synapse using the database designer. The database designer allows you to easily create and deploy a database without writing any code.
+
+## Prerequisites
+
+- Synapse Administrator, or Synapse Contributor permissions are required on the Synapse workspace for creating a lake database.
+
+## Clone database
+1. From your Azure Synapse Analytics workspace **Home** hub, select the **Data** tab on the left. The **Data** tab will open. You'll see the list of databases that already exist in your workspace.
+2. Hover over the **Databases** section and select the ellipsis **...** next to the database you want to clone, then choose **Clone**.
+
+ ![Screenshot showing how to clone an existing database](./media/clone-lake-database/clone-database.png)
+
+3. The database designer tab will open with a copy of the selected database loaded on the canvas.
+4. The database designer opens the **Properties** pane by default. Update the details for this cloned lake database.
+![Screenshot showing the cloned database in the designer](./media/clone-lake-database/database-copy.png)
+ - **Name** Provide a new name for this database. Note that this name cannot be changed once the lake database is published.
+ - **Description** Give your database a description (optional).
+ - **Storage settings for database** provide the default storage information for tables in the database. The default settings are applied to each table in the database unless it's overridden on the table itself.
+ - **Linked service** provide the default linked service used to store your data in Azure Data Lake Storage. The default linked service associated with the Synapse workspace will be shown, but you can change the **Linked Service** to any ADLS storage account you like.
+ - **Input folder** set the default container and folder path within that linked service using the file browser or manually editing the path with the pencil icon.
+ - **Data format** select the data format. Lake databases in Azure Synapse support parquet and delimited text as the storage formats for data.
+5. You can add additional tables to this cloned database by selecting the **+ Table** button. For more information, see [Modify lake database](./modify-lake-database.md).
+6. Once you have completed all the changes to your cloned database, it's now time to publish it. If you're using Git integration with your Synapse workspace, you must commit your changes and merge them into the collaboration branch. [Learn more about source control in Azure Synapse](../cicd/source-control.md). If you're using Synapse Live mode, you can select **Publish**.
++
+## Clone tables within a lake database
+The database designer allows you to clone any of the tables in your database.
+
+1. Select a table within your lake database and click on the ellipsis **...** next to the table name.
+2. Click **Clone**.
+
+ ![Screenshot showing how to clone a table](./media/clone-lake-database/clone-table.png)
+
+3. The database designer will create a copy of the table and opens the **General** tab by default.
+
+ ![Screenshot showing the cloned table in the designer](./media/clone-lake-database/table-copy-general-tab.png)
+
+4. Update the **Name**, **Description** and **Storage settings for table** as needed. For more information, see [Modify lake database](./modify-lake-database.md).
+5. Similarly update the **Columns** tab and **Relationships** tab as needed.
+6. Once you have completed all the changes to your cloned database, it's now time to publish it. If you're using Git integration with your Synapse workspace, you must commit your changes and merge them into the collaboration branch. [Learn more about source control in Azure Synapse](../cicd/source-control.md). If you're using Synapse Live mode, you can select **Publish**.
++
+## Next steps
+Continue to explore the capabilities of the database designer using the links below.
+- [Create an empty lake database](./create-empty-lake-database.md)
+- [Learn more about lake databases](./concepts-lake-database.md)
+- [Create a lake database from lake database template](./create-lake-database-from-lake-database-templates.md)
synapse-analytics Concepts Database Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/database-designer/concepts-database-templates.md
# Lake database templates
-Azure Synapse Analytics provides standardized database templates for various industries to readily use and create a database model per organization needs. These templates contain rich metadata for building understanding of a data model. Use these templates to create your lake database and use Azure Synapse analytical runtime to provide insights to business users.
-
-Learn concepts related to lake database templates in Azure Synapse. Use these templates to create a database with rich metadata for better understanding and productivity.
-
-## Business area templates
-
-Business area templates provide the most comprehensive and granular view of data for a business or subject area. Business area models are also referred to as Subject Area or domain templates. Business area templates contain tables and columns relevant to a particular business within an industry. Data stewards, data governance team, and business teams within an organization can use the business area templates to build business-centric data schema that facilitate detailed communication of business requirements and scope. Each business area template is constructed from a common set of entities from the corresponding industry enterprise database template to ensure that business area templates will have common keys, attributes, and definitions consistent with other industry models. For example, Accounting & Financial Reporting, Marketing, Budget & Forecasting are business area templates for many industries such as Retail, or Banking.
-
-![Business area templates example](./media/concepts-database-templates/business-area-template-example.png)
+Azure Synapse Analytics provides industry specific database templates to help standardize data in the lake. These templates provide schemas for predefined business areas, enabling data to be loaded into a lake database in a structured way. Use these templates to create your lake database and use Azure Synapse analytical runtime to provide insights to business users.
## Enterprise templates
-Enterprise database templates contain a subset of tables that are most likely to be of interest to an organization within a specific industry. It provides a high-level overview and describes the connectivity between the related business area templates. These templates serve as an accelerator for many types of large projects. For example, the banking template has one enterprise template called "Banking".
+Enterprise database templates contain a subset of tables that are most likely to be of interest to an organization within a specific industry. It provides a high-level overview and describes the connectivity between the related business areas. These templates serve as an accelerator for many types of large projects. For example, the retail template has one enterprise template called "Retail".
![Enterprise template example](./media/concepts-database-templates/enterprise-template-example.png)
A table is an object with an independent existence that can be differentiated fr
## Column
-Each table is described by a set of columns. Each column has a name, description, data type and is associated with a table. There are around 30,000 columns in the database templates. For example, CustomerId is a column in the table Customer.
+Each table is described by a set of columns that represent the values and data that make up the table. Each column has a name, description, data type and is associated with a table. There are around 30,000 columns in the database templates. For example, CustomerId is a column in the table Customer.
## Primary key
A composite key is one that is composed of two or more columns that are together
## Relationships
-Relations are associations or interactions between any two tables. For example, the tables Customer and CustomerEmail are related to each other. There are two tables involved in a relationship. There's a parent table and a child table, often connected by a foreign key. You might say that the relationship is From table To table.
+Relationships are associations or interactions between any two tables. For example, the tables Customer and CustomerEmail are related to each other. There are two tables involved in a relationship. There's a parent table and a child table, often connected by a foreign key. You might say that the relationship is From table To table.
## Table partitions
synapse-analytics Concepts Lake Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/database-designer/concepts-lake-database.md
The lake database in Azure Synapse Analytics enables customers to bring together
## Database designer
-The new database designer gives you the possibility to create a data model for your lake database and add additional information to it. Every Entity and Attribute can be described to provide more information about the model, which not only contains Entities but relationships as well. In particular, the lack to model relationships has been a challenge for the interaction on the data lake. This challenge is now addressed with an integrated designer that provides possibilities that have been available in databases but not on the lake. Also the capability to add descriptions and possible demo values to the model allows people who are interacting with it in the future to have information where they need it to get a better understanding about the data.
+The new database designer lets you create a data model for your lake database and add additional information to it. Every Entity and Attribute can be described to provide more information about the model, which contains not only Entities but relationships as well. In particular, the inability to model relationships has been a challenge for the interaction on the data lake. This challenge is now addressed with an integrated designer that provides capabilities that have been available in databases but not on the lake. The ability to add descriptions and sample values to the model also gives the people who interact with it in the future the information they need to better understand the data.
## Data storage
synapse-analytics Create Empty Lake Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/database-designer/create-empty-lake-database.md
In this article, you'll learn how to create an empty [lake database](./concepts-
- At least Synapse User role permissions are required for exploring a lake database template from Gallery.
- Synapse Administrator, Synapse Contributor, or Synapse Artifact Publisher permissions are required on the Synapse workspace for creating a lake database.
-- Storage Blob Data Contributor permissions are required on data lake.
+- Storage Blob Data Contributor permissions are required on the data lake when using the create table **From data lake** option.
## Create lake database from database template

1. From your Azure Synapse Analytics workspace **Home** hub, select the **Data** tab on the left. The **Data** tab will open and you will see the list of databases that already exist in your workspace.
-2. Hover over the **+** button and select, then choose **Lake database (preview)**.
-![Screenshot showing create empty lake database](./media/create-empty-lake-database/create-empty-lakedb.png)
+2. Hover over the **+** button, then choose **Lake database**.
+
+ ![Screenshot showing create empty lake database](./media/create-empty-lake-database/create-empty-lakedb.png)
3. The database designer tab will open with an empty database.
4. The database designer has **Properties** on the right that need to be configured.
   - **Name** - Give your database a name. Names cannot be edited after the database is published, so make sure the name you choose is correct.
synapse-analytics Create Lake Database From Lake Database Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/database-designer/create-lake-database-from-lake-database-templates.md
In this article, you'll learn how to use the Azure Synapse database templates to
- At least Synapse User role permissions are required for exploring a lake database template from Gallery.
- Synapse Administrator, or Synapse Contributor permissions are required on the Synapse workspace for creating a lake database.
-- Storage Blob Data Contributor permissions are required on data lake.
+- Storage Blob Data Contributor permissions are required on the data lake when using the create table **From data lake** option.
## Create lake database from database template
In this article, you'll learn how to use the Azure Synapse database templates to
3. Select the industry you're interested in (for example, **Retail**) and select **Continue** to navigate to the exploration of the data model.
4. You'll land on the database canvas and can explore the tables available in the template. By default, the canvas will show a selection of the most used tables in that template. The canvas has various tools to help you navigate the entity-relationship diagram.
   - **Zoom to fit** to fit all tables on the canvas in the viewing area
+ - **Undo last action** to undo **one** recent change
- **Increase zoom** to zoom in to the canvas
- **Decrease zoom** to zoom out of the canvas
- **Zoom slider** to control the zoom level
- **Zoom preview** to provide a preview of the canvas
- **Expand all**/**Collapse all** to view more or fewer columns within a table on the canvas
- **Clear canvas** to clear off all the tables on the canvas
-![Exploration page of the canvas, showing sample tables and controls.](./media/create-lake-database-from-lake-database-template/canvas-overview.png)
+ 5. On the left, you'll see a list of folders containing the items of the template you can add to the canvas. There are several controls to help.
   - **Search box** to search for tables based on a term. The term will be searched across the template tables, columns, and descriptions.
In this article, you'll learn how to use the Azure Synapse database templates to
- **Business areas** are folders containing tables related to that business construct. For example, Budget & Forecasting contains tables related to managing budgets.
- You can expand business area folders to view the tables, and select the checkbox to add them to the canvas.
- Selected tables can be removed via the checkbox.
+ - You can also click on the ellipses next to the business area folder and **Select All** or **UnSelect All** to add/remove all tables under that business area to the canvas.
6. You can select a table on the canvas. It opens the table properties pane with the tabs General, Columns, and Relationships.
   - The General tab has information on the table such as its name and description.
   - The Columns tab has the details about all the columns that make up the table such as column names and datatypes.
   - The Relationships tab lists the incoming and outgoing relationships of the table with other tables on the canvas.
-
-7. To quickly add tables that are related to the tables on canvas, select the ellipses to the right of the table name and then select **Add related tables**. All tables with existing relationships are added to the canvas.
+ - Use the **Select all** toggle to view all the 'from' & 'to' relationships to that table.
+ - Using the check boxes next to each relationship in the relationship tab, add the required table - relationship to the canvas.
++
+7. To quickly add tables that are related to the tables on canvas, select the ellipses to the right of the table name and then select **Add related tables**. All tables with existing relationships are added to the canvas. If this adds too many tables to the canvas, use **Undo last action** to undo the change.
-8. Once the canvas has all the tables that meet your requirements, select **Create database** to continue with creation of lake database. The new database will show up in the database designer and you can customize it per your business needs.
+8. Once the canvas has all the tables that meet your requirements, select **Create database** to continue with creation of lake database. The new database will show up in the database designer and you can customize it per your business needs.
9. The database designer has more **Properties** on the right that need to be configured.
   - **Name** give your database a name. Names cannot be edited after the database is published, so make sure the name you choose is correct.
   - **Description** Giving your database a description is optional, but it allows users to understand the purpose of the database.
   - **Storage settings for database** is a section containing the default storage information for tables in the database. This default is applied to each table in the database unless it's overridden on the table itself.
   - **Linked service** is the default linked service used to store your data in Azure Data Lake Storage. The default linked service associated with the Synapse workspace will be shown, but you can change the **Linked Service** to any ADLS storage account you like.
- - **Input folder** used to set the default container and folder path within that linked service using the file browser.
+ - **Input folder** used to set the default container and folder path within that linked service using the file browser or manually editing the path with the pencil icon.
- **Data format** lake databases in Azure Synapse support parquet and delimited text as the storage formats for data.

> [!NOTE]
-> You can always override the default storage settings on a table by table basis, and the default remains customizable. If you are not sure what to choose, you can revisit this later.
+> You can always override the default storage settings on a table by table basis, and the default remains customizable. If you are not sure what to choose, you can revisit this later. If you are unsure of the folder hierarchy in the data lake, you can also specify wildcards to traverse the directory structure.
-![Screenshot showing the database designer with the properties panel open](./media/create-lake-database-from-lake-database-template/designer-overview.png)
10. You can begin to customize tables, columns, and relationships inherited from the database template. You can also add custom tables, columns, relationships as needed in the database. For more information on modifying a lake database, see [Modify a lake database.](./modify-lake-database.md)
synapse-analytics Modify Lake Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/database-designer/modify-lake-database.md
In this article, you'll learn how to modify an existing [lake database](./concep
## Prerequisites

- Synapse Administrator, or Synapse Contributor permissions are required on the Synapse workspace for creating a lake database.
-- Storage Blob Data Contributor permissions are required on data lake.
+- Storage Blob Data Contributor permissions are required on the data lake when using the create table **From data lake** option.
## Modify database properties

1. From your Azure Synapse Analytics workspace **Home** hub, select the **Data** tab on the left. The **Data** tab will open and you'll see the list of databases that already exist in your workspace.
-2. Hover over the **Databases** section and select the ellipsis **...** next to the database you want to modify, then choose **Open (preview)**.
+2. Hover over the **Databases** section and select the ellipsis **...** next to the database you want to modify, then choose **Open**.
![Screenshot showing how to open an existing database](./media/modify-lake-database/open-designer.png)
In this article, you'll learn how to modify an existing [lake database](./concep
- **Description** Giving your database a description is optional, but it allows users to understand the purpose of the database.
- **Storage settings for database** is a section containing the default storage information for tables in the database. The default settings are applied to each table in the database unless it's overridden on the table itself.
- **Linked service** is the default linked service used to store your data in Azure Data Lake Storage. The default linked service associated with the Synapse workspace will be shown, but you can change the **Linked Service** to any ADLS storage account you like.
- - **Input folder** used to set the default container and folder path within that linked service using the file browser.
+ - **Input folder** used to set the default container and folder path within that linked service using the file browser or manually editing the path with the pencil icon.
- **Data format** lake databases in Azure Synapse support parquet and delimited text as the storage formats for data.
5. To add a table to the database, select the **+ Table** button.
   - **Custom** will add a new table to the canvas.
The **General** tab contains information specific to the table itself.
- In addition, there is a collapsible section called **Storage settings for table** that provides settings for the underlying storage information used by the table.
- **Inherit from database default** a checkbox that determines whether the storage settings below are inherited from the values set in the database **Properties** tab, or are set individually. If you want to customize the storage values, uncheck this box.
- **Linked service** is the default linked service used to store your data in Azure Data Lake Storage. Change this to pick a different ADLS account.
- - **Input folder** the folder in ADLS where the data loaded to this table will live. This can be edited via the file browser.
+ - **Input folder** the folder in ADLS where the data loaded to this table will live. You can either browse the folder location or edit it manually using the pencil icon.
- **Data format** the data format of the data in the **Input folder**. Lake databases in Azure Synapse support parquet and delimited text as the storage formats for data. If the data format doesn't match the data in the folder, queries to the table will fail.
- For a **Data format** of Delimited text, there are further settings:
   - **Row headers** check this box if the data has row headers.
- - **Line breaks** check this box if the data has line breaks in any of its rows. This will prevent formatting issues.
+ - **Enable multiline in data** check this box if the data has multiple lines in a string column.
+ - **Quote Character** specify the custom quote character for a delimited text file.
+ - **Escape Character** specify the custom escape character for a delimited text file.
- **Data compression** the compression type used on the data.
- - **Delimiter** the field delimiter used in the data files. Supported values are: Comma (,), tab (\t), and pipe (|).
+ - **Delimiter** the field delimiter used in the data files. Supported values are: Comma (,), tab (\t), and pipe (|).
+ - **Partition columns** the list of partition columns will be displayed here.
+ - **Appendable** check this box if you are querying Dataverse data from SQL Serverless.
- For Parquet data, there's the following setting:
   - **Data compression** the compression type used on the data.
The **General** tab contains information specific to the table itself.
The **Columns** tab is where the columns for the table are listed and can be modified. On this tab are two lists of columns: **Standard columns** and **Partition columns**. **Standard columns** are any column that stores data, is a primary key, and otherwise isn't used for the partitioning of the data. **Partition columns** store data as well, but are used to partition the underlying data into folders based on the values contained in the column. Each column has the following properties.

![Screenshot of the Columns tab](./media/modify-lake-database/columns-tab.png)

- **Name** the name of the column. Must be unique within the table.
- - **PK** or primary key. Indicates whether the column is a primary key for the table. Not applicable to partition columns.
+ - **Keys** indicates whether the column is a primary key (PK) and/or foreign key (FK) for the table. Not applicable to partition columns.
- **Description** a description of the column. If the column was created from a database template, the description of the concept represented by this column will be seen. This field is editable and can be changed to match the description that matches your business requirements.
- **Nullability** indicates whether there can be null values in this column. Not applicable to partition columns.
- **Data type** sets the data type for the Column based on the available list of Spark data types.
At the top of the **Columns** tab is a command bar that can be used to interact
- **Partition column** adds a new custom partition column.
- **Clone** duplicates the selected column. Cloned columns are always of the same type as the selected column.
- **Convert type** is used to change the selected **standard column** to a **partition column** and the other way around. This option will be grayed out if you have selected multiple columns of different types or the selected column is ineligible to be converted because of a **PK** or **Nullability** flag set on the column.
- - **Delete** deletes the selected columns from the table. This action is irreversible.
+ - **Delete** deletes the selected columns from the table. This action is irreversible.
+
+You can also re-arrange the order of the columns by dragging and dropping them, using the double vertical ellipses that appear to the left of the column name when you hover over or select the column, as shown in the image above.
#### Partition Columns
At the top of the **Relationships** tab, is the command bar that can be used to
## Next steps

Continue to explore the capabilities of the database designer using the links below.
- [Create an empty lake database](./create-empty-lake-database.md)
-- [Learn more about lake databases](./concepts-lake-database.md)
+- [Clone a lake database](./clone-lake-database.md)
- [Create a lake database from lake database template](./create-lake-database-from-lake-database-templates.md)
synapse-analytics Overview Database Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/database-designer/overview-database-templates.md
You can set up this use case by selecting the six tables in the retail database
A typical database template addresses the core requirements of a specific industry and consists of:
-* A supporting set of [business area templates](concepts-database-templates.md#business-area-templates).
-* One or more [enterprise templates](concepts-database-templates.md#enterprise-templates).
+* One or more [enterprise templates](concepts-database-templates.md#enterprise-templates).
+* Tables grouped by **business areas**.
## Available database templates
Currently, you can choose from the following database templates in Azure Synapse
* **Agriculture** - For companies engaged in growing crops, raising livestock, and dairy production.
* **Automotive** - For companies manufacturing automobiles, heavy vehicles, tires, and other automotive components.
-* **Banking** - For companies that analyze banking data.
+* **Banking** - For companies providing a wide range of banking and related financial services.
* **Consumer Goods** - For manufacturers or producers of goods bought and used by consumers.
* **Energy & Commodity Trading** - For traders of energy, commodities, or carbon credits.
* **Freight & Logistics** - For companies that provide freight and logistics services.
* **Fund Management** - For companies that manage investment funds for investors.
* **Genomics** - For companies acquiring and analyzing genomic data about human beings or other species.
+* **Healthcare Insurance** - For organizations providing insurance to cover healthcare needs (sometimes known as Payors).
+* **Healthcare Provider** - For organizations providing healthcare services.
* **Life Insurance & Annuities** - For companies that provide life insurance, sell annuities, or both.
* **Manufacturing** - For companies engaged in discrete manufacturing of a wide range of products.
* **Oil & Gas** - For companies that are involved in various phases of the Oil & Gas value chain.
* **Pharmaceuticals** - For companies engaged in creating, manufacturing, and marketing pharmaceutical and bio-pharmaceutical products and medical devices.
* **Property & Casualty Insurance** - For companies that provide insurance against risks to property and various forms of liability coverage.
+* **R&D and Clinical Trials** - For companies involved in research and development and clinical trials of pharmaceutical products or devices.
* **Retail** - For sellers of consumer goods or services to customers through multiple channels. * **Utilities**ΓÇè-ΓÇèFor gas, electric, and water utilities; power generators; and water desalinators.-
+
As emission and carbon management is an important discussion in all industries, we've included those components in all the available database templates. These components make it easy for companies who need to track and report their direct and indirect greenhouse gas emissions. ## Next steps
synapse-analytics Quick Start Create Lake Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/database-designer/quick-start-create-lake-database.md
This quick start gives you a run through of an end-2-end scenario on how you can
## Prerequisites

- At least Synapse User role permissions are required for exploring a lake database template from Gallery.
- Synapse Administrator, or Synapse Contributor permissions are required on the Synapse workspace for creating a lake database.
-
+- Storage Blob Data Contributor permissions are required on the data lake when using the create table **From data lake** option.
## Create a lake database from database templates
-Use the new database templates (preview) functionality to create a lake database that you can use to configure your data model for the database.
+Use the new database templates functionality to create a lake database that you can use to configure your data model for the database.
For our scenario we will use the Retail database template and select the following entities: - **RetailProduct** - A product is anything that can be offered to a market that might satisfy a need by potential customers. That product is the sum of all physical, psychological, symbolic, and service attributes associated with it.
A transaction consists of one or more discrete events.
- **Party** - A party is an individual, organization, legal entity, social organization, or business unit of interest to the business.
- **Customer** - A customer is an individual or legal entity that has or has purchased a product or service.
- **Channel** - A channel is a means by which products or services are sold and/or distributed.

The easiest way to find them is by using the search box above the different business areas that contain the tables.

![Database Template example](./media/quick-start-create-lake-database/model-example.png)
synapse-analytics Synapse Workspace Ip Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-ip-firewall.md
Previously updated : 02/25/2022 Last updated : 04/12/2022
You can also add IP firewall rules to a Synapse workspace after the workspace is
You can connect to your Synapse workspace using Synapse Studio. You can also use SQL Server Management Studio (SSMS) to connect to the SQL resources (dedicated SQL pools and serverless SQL pool) in your workspace.
-Make sure that the firewall on your network and local computer allows outgoing communication on TCP ports 80, 443 and 1433 for Synapse Studio.
+Make sure that the firewall on your network and local computer allows outgoing communication on TCP ports 80, 443 and 1433 for Synapse Studio.
+For private endpoints of your workspace target resources (Sql, SqlOnDemand, Dev), allow outgoing communication on TCP ports 443 and 1433, unless you have configured other custom ports.
Also, you need to allow outgoing communication on UDP port 53 for Synapse Studio. To connect using tools such as SSMS and Power BI, you must allow outgoing communication on TCP port 1433.
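To let a specific client through the workspace-level firewall, a rule can be added with the Azure CLI; a minimal sketch, where the names and the IP address are placeholders:

```azurecli
# Allows a single client IP address through the workspace firewall (placeholder values).
az synapse workspace firewall-rule create \
    --name AllowMyClientIp \
    --workspace-name <workspace-name> \
    --resource-group <resource-group-name> \
    --start-ip-address 203.0.113.10 \
    --end-ip-address 203.0.113.10
```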
For more information on the methods to manage the firewall programmatically, see
## Next steps
-Create an [Azure Synapse Workspace](../quickstart-create-workspace.md)
-
-Create an Azure Synapse workspace with a [Managed workspace Virtual Network](./synapse-workspace-managed-vnet.md)
+- Create an [Azure Synapse Workspace](../quickstart-create-workspace.md)
+- Create an Azure Synapse workspace with a [Managed workspace Virtual Network](./synapse-workspace-managed-vnet.md)
+- [Troubleshoot Azure Private Link connectivity problems](../../private-link/troubleshoot-private-link-connectivity.md)
+- [Troubleshoot Azure Private Endpoint connectivity problems](../../private-link/troubleshoot-private-endpoint-connectivity.md)
synapse-analytics Apache Spark 24 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-24-runtime.md
Title: Azure Synapse Runtime for Apache Spark 2.4 description: Supported versions of Spark, Scala, Python, and .NET for Apache Spark 2.4.-+ Previously updated : 01/04/2021 - Last updated : 04/18/2022 +
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document will cover the runtime components and versions for the Azure Synapse Runtime for Apache Spark 2.4.
## Component versions

| Component | Version |
| -- | -- |
-| Apache Spark | 2.4|
+| Apache Spark | 2.4.8 |
| Operating System | Ubuntu 16.04 |
| Java | 1.8.0_272 |
| Scala | 2.11 |
zipp=0.6.0
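(`zipp=0.6.0` above is the final entry of the runtime's pinned Python library list, which is truncated in this digest.) To confirm the component versions a running pool actually reports, a short notebook check works. This is a sketch; note that `spark._jvm` is an internal py4j handle, not a documented API.

```python
import sys
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # predefined in Synapse notebooks

print("Spark:", spark.version)             # expected: 2.4.8 on this runtime
print("Python:", sys.version.split()[0])
# Java version via the py4j gateway (internal, but stable in practice).
print("Java:", spark._jvm.java.lang.System.getProperty("java.version"))
```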
## Next steps

- [Azure Synapse Analytics](../overview-what-is.md)
-- [Apache Spark Documentation](https://spark.apache.org/docs/2.4.4/)
+- [Apache Spark Documentation](https://spark.apache.org/docs/2.4.8/)
- [Apache Spark Concepts](apache-spark-concepts.md)
synapse-analytics Apache Spark 3 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-3-runtime.md
Title: Azure Synapse Runtime for Apache Spark 3.1 description: Supported versions of Spark, Scala, Python, and .NET for Apache Spark 3.1.-+ Previously updated : 09/22/2021 - Last updated : 04/18/2022+
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document will cover the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.1.
-## Known Issues
-* Synapse Pipeline/Dataflows support is coming soon.
-* The following connector support are coming soon:
- * Azure Data Explorer connector
- * SQL Server
-* Hyperspace, Spark Cruise, and Dynamic Allocation Executors are coming soon.
-
## Component versions

| Component | Version |
| -- | -- |
-| Apache Spark | 3.1 |
+| Apache Spark | 3.1.2 |
| Operating System | Ubuntu 18.04 |
| Java | 1.8.0_282 |
| Scala | 2.12.10 |
websocket-client=1.1.0
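(`websocket-client=1.1.0` above is the final entry of this runtime's pinned Python library list, truncated in this digest.) To check the installed versions of any of these packages on a running pool, the standard library's `importlib.metadata` is enough; the package names below are only examples.

```python
from importlib.metadata import PackageNotFoundError, version

# Example packages to compare against the pinned runtime list.
for pkg in ["websocket-client", "numpy", "pandas"]:
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```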
## Next steps

- [Azure Synapse Analytics](../overview-what-is.md)
-- [Apache Spark Documentation](https://spark.apache.org/docs/3.0.2/)
+- [Apache Spark Documentation](https://spark.apache.org/docs/3.1.2/)
- [Apache Spark Concepts](apache-spark-concepts.md)
synapse-analytics Apache Spark 32 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-32-runtime.md
+
+ Title: Azure Synapse Runtime for Apache Spark 3.2
+description: Supported versions of Spark, Scala, Python, and .NET for Apache Spark 3.2.
++++ Last updated : 04/20/2022 ++++
+# Azure Synapse Runtime for Apache Spark 3.2 (Preview)
+
+> [!IMPORTANT]
+> Azure Synapse Runtime for Apache Spark 3.2 is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document will cover the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.2.
+
+## Component versions
+| Component | Version |
+| -- | -- |
+| Apache Spark | 3.2.1 |
+| Operating System | Ubuntu 18.04 |
+| Java | 1.8.0_282 |
+| Scala | 2.12.15 |
+| Hadoop | 3.3.1 |
+| .NET Core | 3.1 |
+| .NET | 2.0.0 |
+| Delta Lake | 1.1 |
+| Python | 3.8 |
+
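Because Delta Lake 1.1 ships with this runtime, Delta tables work without installing extra packages. A minimal sketch, assuming a Synapse notebook attached to a Spark 3.2 pool; the storage path is an illustrative placeholder.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # predefined in Synapse notebooks

# Illustrative path: point this at a container in your linked storage account.
path = "abfss://container@account.dfs.core.windows.net/delta-demo"

spark.range(5).write.format("delta").mode("overwrite").save(path)
spark.read.format("delta").load(path).show()
```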
+## Scala and Java libraries
+
+HikariCP-2.5.1.jar
+
+JLargeArrays-1.5.jar
+
+JTransforms-3.1.jar
+
+RoaringBitmap-0.9.0.jar
+
+ST4-4.0.4.jar
+
+SparkCustomEvents-3.2.0-1.0.0.jar
+
+TokenLibrary-assembly-3.0.0.jar
+
+VegasConnector-1.1.01_2.12_3.2.0-SNAPSHOT.jar
+
+activation-1.1.1.jar
+
+adal4j-1.6.3.jar
+
+aircompressor-0.21.jar
+
+algebra_2.12-2.0.1.jar
+
+aliyun-java-sdk-core-3.4.0.jar
+
+aliyun-java-sdk-ecs-4.2.0.jar
+
+aliyun-java-sdk-ram-3.0.0.jar
+
+aliyun-java-sdk-sts-3.0.0.jar
+
+aliyun-sdk-oss-3.4.1.jar
+
+annotations-17.0.0.jar
+
+antlr-runtime-3.5.2.jar
+
+antlr4-runtime-4.8.jar
+
+aopalliance-repackaged-2.6.1.jar
+
+apache-log4j-extras-1.2.17.jar
+
+apiguardian-api-1.1.0.jar
+
+arpack-2.2.1.jar
+
+arpack_combined_all-0.1.jar
+
+arrow-format-2.0.0.jar
+
+arrow-memory-core-2.0.0.jar
+
+arrow-memory-netty-2.0.0.jar
+
+arrow-vector-2.0.0.jar
+
+audience-annotations-0.5.0.jar
+
+avro-1.10.2.jar
+
+avro-ipc-1.10.2.jar
+
+avro-mapred-1.10.2.jar
+
+aws-java-sdk-bundle-1.11.901.jar
+
+azure-data-lake-store-sdk-2.3.6.jar
+
+azure-eventhubs-3.3.0.jar
+
+azure-eventhubs-spark_2.12-2.3.21.jar
+
+azure-keyvault-core-1.0.0.jar
+
+azure-storage-7.0.1.jar
+
+azure-synapse-ml-pandas_2.12-1.0.0.jar
+
+azure-synapse-ml-predict_2.12-1.0.jar
+
+blas-2.2.1.jar
+
+bonecp-0.8.0.RELEASE.jar
+
+breeze-macros_2.12-1.2.jar
+
+breeze_2.12-1.2.jar
+
+cats-kernel_2.12-2.1.1.jar
+
+chill-java-0.10.0.jar
+
+chill_2.12-0.10.0.jar
+
+client-sdk-1.14.0.jar
+
+cntk-2.4.jar
+
+commons-cli-1.2.jar
+
+commons-codec-1.15.jar
+
+commons-collections-3.2.2.jar
+
+commons-compiler-3.0.16.jar
+
+commons-compress-1.21.jar
+
+commons-crypto-1.1.0.jar
+
+commons-dbcp-1.4.jar
+
+commons-io-2.8.0.jar
+
+commons-lang-2.6.jar
+
+commons-lang3-3.12.0.jar
+
+commons-logging-1.1.3.jar
+
+commons-math3-3.4.1.jar
+
+commons-net-3.1.jar
+
+commons-pool-1.5.4.jar
+
+commons-pool2-2.6.2.jar
+
+commons-text-1.6.jar
+
+compress-lzf-1.0.3.jar
+
+config-1.3.4.jar
+
+core-1.1.2.jar
+
+cos_api-bundle-5.6.19.jar
+
+cosmos-analytics-spark-3.2.1-connector-1.6.0.jar
+
+cudf-22.02.0-cuda11.jar
+
+curator-client-2.13.0.jar
+
+curator-framework-2.13.0.jar
+
+curator-recipes-2.13.0.jar
+
+datanucleus-api-jdo-4.2.4.jar
+
+datanucleus-core-4.1.17.jar
+
+datanucleus-rdbms-4.1.19.jar
+
+delta-core_2.12-1.1.0.1.jar
+
+derby-10.14.2.0.jar
+
+dropwizard-metrics-hadoop-metrics2-reporter-0.1.2.jar
+
+flatbuffers-java-1.9.0.jar
+
+fluent-logger-jar-with-dependencies-jdk8.jar
+
+gson-2.8.6.jar
+
+guava-14.0.1.jar
+
+hadoop-aliyun-3.3.1.5.0-57088972.jar
+
+hadoop-annotations-3.3.1.5.0-57088972.jar
+
+hadoop-aws-3.3.1.5.0-57088972.jar
+
+hadoop-azure-3.3.1.5.0-57088972.jar
+
+hadoop-azure-datalake-3.3.1.5.0-57088972.jar
+
+hadoop-client-api-3.3.1.5.0-57088972.jar
+
+hadoop-client-runtime-3.3.1.5.0-57088972.jar
+
+hadoop-cloud-storage-3.3.1.5.0-57088972.jar
+
+hadoop-cos-3.3.1.5.0-57088972.jar
+
+hadoop-openstack-3.3.1.5.0-57088972.jar
+
+hadoop-shaded-guava-1.1.0.jar
+
+hadoop-yarn-server-web-proxy-3.3.1.5.0-57088972.jar
+
+hdinsight-spark-metrics-3.2.0-1.0.0.jar
+
+hive-beeline-2.3.9.jar
+
+hive-cli-2.3.9.jar
+
+hive-common-2.3.9.jar
+
+hive-exec-2.3.9-core.jar
+
+hive-jdbc-2.3.9.jar
+
+hive-llap-common-2.3.9.jar
+
+hive-metastore-2.3.9.jar
+
+hive-serde-2.3.9.jar
+
+hive-service-rpc-3.1.2.jar
+
+hive-shims-0.23-2.3.9.jar
+
+hive-shims-2.3.9.jar
+
+hive-shims-common-2.3.9.jar
+
+hive-shims-scheduler-2.3.9.jar
+
+hive-storage-api-2.7.2.jar
+
+hive-vector-code-gen-2.3.9.jar
+
+hk2-api-2.6.1.jar
+
+hk2-locator-2.6.1.jar
+
+hk2-utils-2.6.1.jar
+
+htrace-core4-4.1.0-incubating.jar
+
+httpclient-4.5.13.jar
+
+httpclient-4.5.6.jar
+
+httpcore-4.4.14.jar
+
+httpmime-4.5.13.jar
+
+httpmime-4.5.6.jar
+
+hyperspace-core-spark3.2_2.12-0.5.1-synapse.jar
+
+impulse-core_spark3.2_2.12-0.1.8.jar
+
+impulse-telemetry-mds_spark3.2_2.12-0.1.8.jar
+
+isolation-forest_3.2.0_2.12-2.0.8.jar
+
+istack-commons-runtime-3.0.8.jar
+
+ivy-2.5.0.jar
+
+jackson-annotations-2.12.3.jar
+
+jackson-core-2.12.3.jar
+
+jackson-core-asl-1.9.13.jar
+
+jackson-databind-2.12.3.jar
+
+jackson-dataformat-cbor-2.12.3.jar
+
+jackson-mapper-asl-1.9.13.jar
+
+jackson-module-scala_2.12-2.12.3.jar
+
+jakarta.annotation-api-1.3.5.jar
+
+jakarta.inject-2.6.1.jar
+
+jakarta.servlet-api-4.0.3.jar
+
+jakarta.validation-api-2.0.2.jar
+
+jakarta.ws.rs-api-2.1.6.jar
+
+jakarta.xml.bind-api-2.3.2.jar
+
+janino-3.0.16.jar
+
+javassist-3.25.0-GA.jar
+
+javatuples-1.2.jar
+
+javax.jdo-3.2.0-m3.jar
+
+javolution-5.5.1.jar
+
+jaxb-api-2.2.11.jar
+
+jaxb-runtime-2.3.2.jar
+
+jcl-over-slf4j-1.7.30.jar
+
+jdo-api-3.0.1.jar
+
+jdom-1.1.jar
+
+jersey-client-2.34.jar
+
+jersey-common-2.34.jar
+
+jersey-container-servlet-2.34.jar
+
+jersey-container-servlet-core-2.34.jar
+
+jersey-hk2-2.34.jar
+
+jersey-server-2.34.jar
+
+jettison-1.1.jar
+
+jetty-util-9.4.43.v20210629.jar
+
+jetty-util-ajax-9.4.43.v20210629.jar
+
+jline-2.14.6.jar
+
+joda-time-2.10.10.jar
+
+jodd-core-3.5.2.jar
+
+jpam-1.1.jar
+
+jsch-0.1.54.jar
+
+json-1.8.jar
+
+json-20090211.jar
+
+json-20210307.jar
+
+json-simple-1.1.jar
+
+json4s-ast_2.12-3.7.0-M11.jar
+
+json4s-core_2.12-3.7.0-M11.jar
+
+json4s-jackson_2.12-3.7.0-M11.jar
+
+json4s-scalap_2.12-3.7.0-M11.jar
+
+jsr305-3.0.0.jar
+
+jta-1.1.jar
+
+jul-to-slf4j-1.7.30.jar
+
+junit-jupiter-5.5.2.jar
+
+junit-jupiter-api-5.5.2.jar
+
+junit-jupiter-engine-5.5.2.jar
+
+junit-jupiter-params-5.5.2.jar
+
+junit-platform-commons-1.5.2.jar
+
+junit-platform-engine-1.5.2.jar
+
+kafka-clients-2.8.0.jar
+
+kryo-shaded-4.0.2.jar
+
+kusto-data-2.7.0.jar
+
+kusto-ingest-2.7.0.jar
+
+kusto-spark_3.0_2.12-2.7.5.jar
+
+lapack-2.2.1.jar
+
+leveldbjni-all-1.8.jar
+
+libfb303-0.9.3.jar
+
+libshufflejni.so
+
+libthrift-0.12.0.jar
+
+libvegasjni.so
+
+lightgbmlib-3.2.110.jar
+
+log4j-1.2.17.jar
+
+lz4-java-1.7.1.jar
+
+macro-compat_2.12-1.1.1.jar
+
+mdsdclientdynamic-2.0.jar
+
+metrics-core-4.2.0.jar
+
+metrics-graphite-4.2.0.jar
+
+metrics-jmx-4.2.0.jar
+
+metrics-json-4.2.0.jar
+
+metrics-jvm-4.2.0.jar
+
+microsoft-catalog-metastore-client-1.0.63.jar
+
+microsoft-log4j-etwappender-1.0.jar
+
+microsoft-spark.jar
+
+minlog-1.3.0.jar
+
+mmlspark-1.0.0-rc3-194-14bef9b1-SNAPSHOT.jar
+
+mmlspark-cognitive-1.0.0-rc3-194-14bef9b1-SNAPSHOT.jar
+
+mmlspark-core-1.0.0-rc3-194-14bef9b1-SNAPSHOT.jar
+
+mmlspark-deep-learning-1.0.0-rc3-194-14bef9b1-SNAPSHOT.jar
+
+mmlspark-lightgbm-1.0.0-rc3-194-14bef9b1-SNAPSHOT.jar
+
+mmlspark-opencv-1.0.0-rc3-194-14bef9b1-SNAPSHOT.jar
+
+mmlspark-vw-1.0.0-rc3-194-14bef9b1-SNAPSHOT.jar
+
+mssql-jdbc-8.4.1.jre8.jar
+
+mysql-connector-java-8.0.18.jar
+
+netty-all-4.1.68.Final.jar
+
+notebook-utils-3.2.0-20220208.5.jar
+
+objenesis-2.6.jar
+
+onnxruntime_gpu-1.8.1.jar
+
+opencsv-2.3.jar
+
+opencv-3.2.0-1.jar
+
+opentest4j-1.2.0.jar
+
+orc-core-1.6.12.jar
+
+orc-mapreduce-1.6.12.jar
+
+orc-shims-1.6.12.jar
+
+oro-2.0.8.jar
+
+osgi-resource-locator-1.0.3.jar
+
+paranamer-2.8.jar
+
+parquet-column-1.12.2.jar
+
+parquet-common-1.12.2.jar
+
+parquet-encoding-1.12.2.jar
+
+parquet-format-structures-1.12.2.jar
+
+parquet-hadoop-1.12.2.jar
+
+parquet-jackson-1.12.2.jar
+
+peregrine-spark-0.10.jar
+
+postgresql-42.2.9.jar
+
+protobuf-java-2.5.0.jar
+
+proton-j-0.33.8.jar
+
+py4j-0.10.9.3.jar
+
+pyrolite-4.30.jar
+
+qpid-proton-j-extensions-1.2.4.jar
+
+rapids-4-spark_2.12-22.02.0-SNAPSHOT.jar
+
+rocksdbjni-6.20.3.jar
+
+scala-collection-compat_2.12-2.1.1.jar
+
+scala-compiler-2.12.15.jar
+
+scala-java8-compat_2.12-0.9.0.jar
+
+scala-library-2.12.15.jar
+
+scala-parser-combinators_2.12-1.1.2.jar
+
+scala-reflect-2.12.15.jar
+
+scala-xml_2.12-1.2.0.jar
+
+scalactic_2.12-3.0.5.jar
+
+shapeless_2.12-2.3.3.jar
+
+shims-0.9.0.jar
+
+slf4j-api-1.7.30.jar
+
+slf4j-log4j12-1.7.16.jar
+
+snappy-java-1.1.8.4.jar
+
+spark-3.2-rpc-history-server-app-listener_2.12-1.0.0.jar
+
+spark-3.2-rpc-history-server-core_2.12-1.0.0.jar
+
+spark-avro_2.12-3.2.1.5.0-57088972.jar
+
+spark-catalyst_2.12-3.2.1.5.0-57088972.jar
+
+spark-cdm-connector-assembly-1.19.2.jar
+
+spark-core_2.12-3.2.1.5.0-57088972.jar
+
+spark-enhancement_2.12-3.2.1.5.0-57088972.jar
+
+spark-enhancementui_2.12-3.0.0.jar
+
+spark-graphx_2.12-3.2.1.5.0-57088972.jar
+
+spark-hadoop-cloud_2.12-3.2.1.5.0-57088972.jar
+
+spark-hive-thriftserver_2.12-3.2.1.5.0-57088972.jar
+
+spark-hive_2.12-3.2.1.5.0-57088972.jar
+
+spark-kusto-synapse-connector_3.1_2.12-1.0.0.jar
+
+spark-kvstore_2.12-3.2.1.5.0-57088972.jar
+
+spark-launcher_2.12-3.2.1.5.0-57088972.jar
+
+spark-microsoft-tools_2.12-3.2.1.5.0-57088972.jar
+
+spark-mllib-local_2.12-3.2.1.5.0-57088972.jar
+
+spark-mllib_2.12-3.2.1.5.0-57088972.jar
+
+spark-mssql-connector-1.2.0.jar
+
+spark-network-common_2.12-3.2.1.5.0-57088972.jar
+
+spark-network-shuffle_2.12-3.2.1.5.0-57088972.jar
+
+spark-repl_2.12-3.2.1.5.0-57088972.jar
+
+spark-sketch_2.12-3.2.1.5.0-57088972.jar
+
+spark-sql-kafka-0-10_2.12-3.2.1.5.0-57088972.jar
+
+spark-sql_2.12-3.2.1.5.0-57088972.jar
+
+spark-streaming-kafka-0-10-assembly_2.12-3.2.1.5.0-57088972.jar
+
+spark-streaming-kafka-0-10_2.12-3.2.1.5.0-57088972.jar
+
+spark-streaming_2.12-3.2.1.5.0-57088972.jar
+
+spark-tags_2.12-3.2.1.5.0-57088972.jar
+
+spark-token-provider-kafka-0-10_2.12-3.2.1.5.0-57088972.jar
+
+spark-unsafe_2.12-3.2.1.5.0-57088972.jar
+
+spark-yarn_2.12-3.2.1.5.0-57088972.jar
+
+spark_diagnostic_cli-1.0.11_spark-3.2.0.jar
+
+spire-macros_2.12-0.17.0.jar
+
+spire-platform_2.12-0.17.0.jar
+
+spire-util_2.12-0.17.0.jar
+
+spire_2.12-0.17.0.jar
+
+spray-json_2.12-1.3.2.jar
+
+sqlanalyticsconnector_3.2.0-1.0.0.jar
+
+stax-api-1.0.1.jar
+
+stream-2.9.6.jar
+
+structuredstreamforspark_2.12-3.0.1-2.1.3.jar
+
+super-csv-2.2.0.jar
+
+synapse-spark-telemetry_2.12-0.0.6.jar
+
+synfs-3.2.0-20220208.5.jar
+
+threeten-extra-1.5.0.jar
+
+tink-1.6.0.jar
+
+transaction-api-1.1.jar
+
+univocity-parsers-2.9.1.jar
+
+velocity-1.5.jar
+
+vw-jni-8.9.1.jar
+
+wildfly-openssl-1.0.7.Final.jar
+
+xbean-asm9-shaded-4.20.jar
+
+xz-1.8.jar
+
+zookeeper-3.6.2.5.0-57088972.jar
+
+zookeeper-jute-3.6.2.5.0-57088972.jar
+
+zstd-jni-1.5.0-4.jar
+
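To see which of the bundled jars above are visible to a running session, the driver's Java classpath can be inspected from PySpark. This sketch relies on the py4j gateway (`spark._jvm`), an internal handle rather than a supported API, so treat it purely as a diagnostic.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # predefined in Synapse notebooks

classpath = spark._jvm.java.lang.System.getProperty("java.class.path")
jars = sorted(entry.rsplit("/", 1)[-1]
              for entry in classpath.split(":") if entry.endswith(".jar"))
print(len(jars), "jars on the driver classpath")
print([j for j in jars if j.startswith("delta-core")])
```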
+## Python libraries (Normal VMs)
+
+_libgcc_mutex=0.1
+
+_openmp_mutex=4.5

+_py-xgboost-mutex=2.0
+
+abseil-cpp=20210324.0
+
+absl-py=0.13.0
+
+adal=1.2.7
+
+adlfs=0.7.7
+
+aiohttp=3.7.4.post0
+
+alsa-lib=1.2.3
+
+appdirs=1.4.4
+
+arrow-cpp=3.0.0
+
+astor=0.8.1
+
+astunparse=1.6.3
+
+async-timeout=3.0.1
+
+attrs=21.2.0
+
+aws-c-cal=0.5.11
+
+aws-c-common=0.6.2
+
+aws-c-event-stream=0.2.7
+
+aws-c-io=0.10.5
+
+aws-checksums=0.1.11
+
+aws-sdk-cpp=1.8.186
+
+azure-datalake-store=0.0.51
+
+azure-identity=2021.03.15b1
+
+azure-storage-blob=12.8.1
+
+backcall=0.2.0
+
+backports=1.0
+
+backports.functools_lru_cache=1.6.4
+
+beautifulsoup4=4.9.3
+
+blas=2.109
+
+blas-devel=3.9.0
+
+blinker=1.4
+
+blosc=1.21.0
+
+bokeh=2.3.2
+
+brotli=1.0.9
+
+brotli-bin=1.0.9
+
+brotli-python=1.0.9
+
+brotlipy=0.7.0
+
+brunsli=0.1
+
+bzip2=1.0.8
+
+c-ares=1.17.1
+
+ca-certificates=2021.7.5
+
+cachetools=4.2.2
+
+cairo=1.16.0
+
+certifi=2021.5.30
+
+cffi=1.14.5
+
+chardet=4.0.0
+
+charls=2.2.0
+
+click=8.0.1
+
+cloudpickle=1.6.0
+
+conda=4.9.2
+
+conda-package-handling=1.7.3
+
+configparser=5.0.2
+
+cryptography=3.4.7
+
+cudatoolkit=11.1.1
+
+cycler=0.10.0
+
+cython=0.29.23
+
+cytoolz=0.11.0
+
+dash=1.20.0
+
+dash-core-components=1.16.0
+
+dash-html-components=1.1.3
+
+dash-renderer=1.9.1
+
+dash-table=4.11.3
+
+dash_cytoscape=0.2.0
+
+dask-core=2021.6.2
+
+databricks-cli=0.12.1
+
+dataclasses=0.8
+
+dbus=1.13.18
+
+debugpy=1.3.0
+
+decorator=4.4.2
+
+dill=0.3.4
+
+entrypoints=0.3
+
+et_xmlfile=1.1.0
+
+expat=2.4.1
+
+fire=0.4.0
+
+flask=2.0.1
+
+flask-compress=1.10.1
+
+fontconfig=2.13.1
+
+freetype=2.10.4
+
+fsspec=2021.6.1
+
+future=0.18.2
+
+gast=0.3.3
+
+gensim=3.8.3
+
+geographiclib=1.52
+
+geopy=2.1.0
+
+gettext=0.21.0
+
+gevent=21.1.2
+
+gflags=2.2.2
+
+giflib=5.2.1
+
+gitdb=4.0.7
+
+gitpython=3.1.18
+
+glib=2.68.3
+
+glib-tools=2.68.3
+
+glog=0.5.0
+
+gobject-introspection=1.68.0
+
+google-auth=1.32.1
+
+google-auth-oauthlib=0.4.1
+
+google-pasta=0.2.0
+
+greenlet=1.1.0
+
+grpc-cpp=1.37.1
+
+grpcio=1.37.1
+
+gst-plugins-base=1.18.4
+
+gstreamer=1.18.4
+
+h5py=2.10.0
+
+hdf5=1.10.6
+
+html5lib=1.1
+
+hummingbird-ml=0.4.0
+
+icu=68.1
+
+idna=2.10
+
+imagecodecs=2021.3.31
+
+imageio=2.9.0
+
+importlib-metadata=4.6.1
+
+intel-openmp=2021.2.0
+
+interpret=0.2.4
+
+interpret-core=0.2.4
+
+ipykernel=6.0.1
+
+ipython=7.23.1
+
+ipython_genutils=0.2.0
+
+isodate=0.6.0
+
+itsdangerous=2.0.1
+
+jdcal=1.4.1
+
+jedi=0.18.0
+
+jinja2=3.0.1
+
+joblib=1.0.1
+
+jpeg=9d
+
+jupyter_client=6.1.12
+
+jupyter_core=4.7.1
+
+jxrlib=1.1
+
+keras-applications=1.0.8
+
+keras-preprocessing=1.1.2
+
+keras2onnx=1.6.5
+
+kiwisolver=1.3.1
+
+koalas=1.8.0
+
+krb5=1.19.1
+
+lcms2=2.12
+
+ld_impl_linux-64=2.36.1
+
+lerc=2.2.1
+
+liac-arff=2.5.0
+
+libaec=1.0.5
+
+libblas=3.9.0
+
+libbrotlicommon=1.0.9
+
+libbrotlidec=1.0.9
+
+libbrotlienc=1.0.9
+
+libcblas=3.9.0
+
+libclang=11.1.0
+
+libcurl=7.77.0
+
+libdeflate=1.7
+
+libedit=3.1.20210216
+
+libev=4.33
+
+libevent=2.1.10
+
+libffi=3.3
+
+libgcc-ng=9.3.0
+
+libgfortran-ng=9.3.0
+
+libgfortran5=9.3.0
+
+libglib=2.68.3
+
+libiconv=1.16
+
+liblapack=3.9.0
+
+liblapacke=3.9.0
+
+libllvm10=10.0.1
+
+libllvm11=11.1.0
+
+libnghttp2=1.43.0
+
+libogg=1.3.5
+
+libopus=1.3.1
+
+libpng=1.6.37
+
+libpq=13.3
+
+libprotobuf=3.15.8
+
+libsodium=1.0.18
+
+libssh2=1.9.0
+
+libstdcxx-ng=9.3.0
+
+libthrift=0.14.1
+
+libtiff=4.2.0
+
+libutf8proc=2.6.1
+
+libuuid=2.32.1
+
+libuv=1.41.1
+
+libvorbis=1.3.7
+
+libwebp-base=1.2.0
+
+libxcb=1.14
+
+libxgboost=1.4.0
+
+libxkbcommon=1.0.3
+
+libxml2=2.9.12
+
+libzopfli=1.0.3
+
+lightgbm=3.2.1
+
+lime=0.2.0.1
+
+llvm-openmp=11.1.0
+
+llvmlite=0.36.0
+
+locket=0.2.1
+
+lz4-c=1.9.3
+
+markdown=3.3.4
+
+markupsafe=2.0.1
+
+matplotlib=3.4.2
+
+matplotlib-base=3.4.2
+
+matplotlib-inline=0.1.2
+
+mkl=2021.2.0
+
+mkl-devel=2021.2.0
+
+mkl-include=2021.2.0
+
+mleap=0.17.0
+
+mlflow-skinny=1.18.0
+
+msal=2021.06.08
+
+msal-extensions=2021.06.08
+
+msrest=2021.06.01
+
+multidict=5.1.0
+
+mysql-common=8.0.25
+
+mysql-libs=8.0.25
+
+ncurses=6.2
+
+networkx=2.5.1
+
+ninja=1.10.2
+
+nltk=3.6.2
+
+nspr=4.30
+
+nss=3.67
+
+numba=0.53.1
+
+numpy=1.19.4
+
+oauthlib=3.1.1
+
+olefile=0.46
+
+onnx=1.9.0
+
+onnxconverter-common=1.7.0
+
+onnxmltools=1.7.0
+
+onnxruntime=1.7.2
+
+openjpeg=2.4.0
+
+openpyxl=3.0.7
+
+openssl=1.1.1k
+
+opt_einsum=3.3.0
+
+orc=1.6.7
+
+packaging=21.0
+
+pandas=1.2.3
+
+parquet-cpp=1.5.1
+
+parso=0.8.2
+
+partd=1.2.0
+
+patsy=0.5.1
+
+pcre=8.45
+
+pexpect=4.8.0
+
+pickleshare=0.7.5
+
+pillow=8.2.0
+
+pip=21.1.1
+
+pixman=0.40.0
+
+plotly=4.14.3
+
+pmdarima=1.8.2
+
+pooch=1.4.0
+
+portalocker=1.7.1
+
+prompt-toolkit=3.0.19
+
+protobuf=3.15.8
+
+psutil=5.8.0
+
+ptyprocess=0.7.0
+
+py-xgboost=1.4.0
+
+py4j=0.10.9
+
+pyarrow=3.0.0
+
+pyasn1=0.4.8
+
+pyasn1-modules=0.2.8
+
+pycairo=1.20.1
+
+pycosat=0.6.3
+
+pycparser=2.20
+
+pygments=2.9.0
+
+pygobject=3.40.1
+
+pyjwt=2.1.0
+
+pyodbc=4.0.30
+
+pyopenssl=20.0.1
+
+pyparsing=2.4.7
+
+pyqt=5.12.3
+
+pyqt-impl=5.12.3
+
+pyqt5-sip=4.19.18
+
+pyqtchart=5.12
+
+pyqtwebengine=5.12.1
+
+pysocks=1.7.1
+
+python=3.8.10
+
+python-dateutil=2.8.1
+
+python-flatbuffers=1.12
+
+python_abi=3.8
+
+pytorch=1.8.1
+
+pytz=2021.1
+
+pyu2f=0.1.5
+
+pywavelets=1.1.1
+
+pyyaml=5.4.1
+
+pyzmq=22.1.0
+
+qt=5.12.9
+
+re2=2021.04.01
+
+readline=8.1
+
+regex=2021.7.6
+
+requests=2.25.1
+
+requests-oauthlib=1.3.0
+
+retrying=1.3.3
+
+rsa=4.7.2
+
+ruamel_yaml=0.15.100
+
+s2n=1.0.10
+
+salib=1.3.11
+
+scikit-image=0.18.1
+
+scikit-learn=0.23.2
+
+scipy=1.5.3
+
+seaborn=0.11.1
+
+seaborn-base=0.11.1
+
+setuptools=49.6.0
+
+shap=0.39.0
+
+six=1.16.0
+
+skl2onnx=1.8.0.1
+
+sklearn-pandas=2.2.0
+
+slicer=0.0.7
+
+smart_open=5.1.0
+
+smmap=3.0.5
+
+snappy=1.1.8
+
+soupsieve=2.2.1
+
+sqlite=3.36.0
+
+statsmodels=0.12.2
+
+tabulate=0.8.9
+
+tenacity=7.0.0
+
+tensorboard=2.4.1
+
+tensorboard-plugin-wit=1.8.0
+
+tensorflow=2.4.1
+
+tensorflow-base=2.4.1
+
+tensorflow-estimator=2.4.0
+
+termcolor=1.1.0
+
+textblob=0.15.3
+
+threadpoolctl=2.1.0
+
+tifffile=2021.4.8
+
+tk=8.6.10
+
+toolz=0.11.1
+
+tornado=6.1
+
+tqdm=4.61.2
+
+traitlets=5.0.5
+
+typing-extensions=3.10.0.0
+
+typing_extensions=3.10.0.0
+
+unixodbc=2.3.9
+
+urllib3=1.26.4
+
+wcwidth=0.2.5
+
+webencodings=0.5.1
+
+werkzeug=2.0.1
+
+wheel=0.36.2
+
+wrapt=1.12.1
+
+xgboost=1.4.0
+
+xorg-kbproto=1.0.7
+
+xorg-libice=1.0.10
+
+xorg-libsm=1.2.3
+
+xorg-libx11=1.7.2
+
+xorg-libxext=1.3.4
+
+xorg-libxrender=0.9.10
+
+xorg-renderproto=0.11.1
+
+xorg-xextproto=7.3.0
+
+xorg-xproto=7.0.31
+
+xz=5.2.5
+
+yaml=0.2.5
+
+yarl=1.6.3
+
+zeromq=4.3.4
+
+zfp=0.5.5
+
+zipp=3.5.0
+
+zlib=1.2.11
+
+zope.event=4.5.0
+
+zope.interface=5.4.0
+
+zstd=1.4.9
+
+azure-common==1.1.27
+
+azure-core==1.16.0
+
+azure-graphrbac==0.61.1
+
+azure-mgmt-authorization==0.61.0
+
+azure-mgmt-containerregistry==8.0.0
+
+azure-mgmt-core==1.3.0
+
+azure-mgmt-keyvault==2.2.0
+
+azure-mgmt-resource==13.0.0
+
+azure-mgmt-storage==11.2.0
+
+azureml-core==1.34.0
+
+azureml-mlflow==1.34.0
+
+azureml-opendatasets==1.34.0
+
+backports-tempfile==1.0
+
+backports-weakref==1.0.post1
+
+contextlib2==0.6.0.post1
+
+docker==4.4.4
+
+ipywidgets==7.6.3
+
+jeepney==0.6.0
+
+jmespath==0.10.0
+
+jsonpickle==2.0.0
+
+kqlmagiccustom==0.1.114.post8
+
+lxml==4.6.5
+
+msrestazure==0.6.4
+
+mypy==0.780
+
+mypy-extensions==0.4.3
+
+ndg-httpsclient==0.5.1
+
+pandasql==0.7.3
+
+pathspec==0.8.1
+
+prettytable==2.4.0
+
+pyperclip==1.8.2
+
+ruamel-yaml==0.17.4
+
+ruamel-yaml-clib==0.2.6
+
+secretstorage==3.3.1
+
+sqlalchemy==1.4.20
+
+typed-ast==1.4.3
+
+torchvision==0.9.1
+
+websocket-client==1.1.0
+
+## Python libraries (GPU Accelerated VMs)
+
+_libgcc_mutex=0.1
+
+_openmp_mutex=4.5
+
+_py-xgboost-mutex=2.0
+
+_tflow_select=2.3.0
+
+abseil-cpp=20210324.1
+
+absl-py=0.13.0
+
+adal=1.2.7
+
+adlfs=0.7.7
+
+aiohttp=3.7.4.post0
+
+appdirs=1.4.4
+
+arrow-cpp=3.0.0
+
+astor=0.8.1
+
+astunparse=1.6.3
+
+async-timeout=3.0.1
+
+attrs=21.2.0
+
+aws-c-cal=0.5.11
+
+aws-c-common=0.6.2
+
+aws-c-event-stream=0.2.7
+
+aws-c-io=0.10.5
+
+aws-checksums=0.1.11
+
+aws-sdk-cpp=1.8.186
+
+azure-datalake-store=0.0.51
+
+azure-storage-blob=12.9.0
+
+backcall=0.2.0
+
+backports=1.0
+
+backports.functools_lru_cache=1.6.4
+
+beautifulsoup4=4.9.3
+
+blas=2.111
+
+blas-devel=3.9.0
+
+blinker=1.4
+
+bokeh=2.3.2
+
+brotli=1.0.9
+
+brotli-bin=1.0.9
+
+brotli-python=1.0.9
+
+brotlipy=0.7.0
+
+bzip2=1.0.8
+
+c-ares=1.17.2
+
+ca-certificates=2021.5.30
+
+cachetools=4.2.2
+
+cairo=1.16.0
+
+certifi=2021.5.30
+
+cffi=1.14.6
+
+chardet=4.0.0
+
+click=8.0.1
+
+colorama=0.4.4
+
+conda=4.9.2
+
+conda-package-handling=1.7.3
+
+configparser=5.0.2
+
+cryptography=3.4.7
+
+cudatoolkit=11.1.1
+
+cycler=0.10.0
+
+cython=0.29.24
+
+cytoolz=0.11.0
+
+dash=1.21.0
+
+dash-core-components=1.17.1
+
+dash-html-components=1.1.4
+
+dash-renderer=1.9.1
+
+dash-table=4.12.0
+
+dash_cytoscape=0.2.0
+
+dask-core=2021.9.1
+
+databricks-cli=0.12.1
+
+dataclasses=0.8
+
+dbus=1.13.18
+
+debugpy=1.4.1
+
+decorator=5.1.0
+
+dill=0.3.4
+
+entrypoints=0.3
+
+et_xmlfile=1.0.1
+
+expat=2.4.1
+
+ffmpeg=4.3
+
+fire=0.4.0
+
+flask=2.0.1
+
+flask-compress=1.10.1
+
+fontconfig=2.13.1
+
+freetype=2.10.4
+
+fsspec=2021.8.1
+
+future=0.18.2
+
+g-ir-build-tools=1.68.0
+
+g-ir-host-tools=1.68.0
+
+gensim=3.8.3
+
+geographiclib=1.52
+
+geopy=2.1.0
+
+gettext=0.19.8.1
+
+gevent=21.8.0
+
+gflags=2.2.2
+
+gitdb=4.0.7
+
+gitpython=3.1.23
+
+glib=2.68.4
+
+glib-tools=2.68.4
+
+glog=0.5.0
+
+gmp=6.2.1
+
+gnutls=3.6.13
+
+gobject-introspection=1.68.0
+
+google-auth=1.35.0
+
+google-auth-oauthlib=0.4.6
+
+google-pasta=0.2.0
+
+greenlet=1.1.1
+
+grpc-cpp=1.37.1
+
+gst-plugins-base=1.14.0
+
+gstreamer=1.14.0
+
+h5py=2.10.0
+
+hdf5=1.10.6
+
+html5lib=1.1
+
+hummingbird-ml=0.4.0
+
+icu=58.2
+
+idna=2.10
+
+imagecodecs-lite=2019.12.3
+
+imageio=2.9.0
+
+importlib-metadata=4.8.1
+
+interpret=0.2.4
+
+interpret-core=0.2.4
+
+ipykernel=6.4.1
+
+ipython=7.23.1
+
+ipython_genutils=0.2.0
+
+isodate=0.6.0
+
+itsdangerous=2.0.1
+
+jdcal=1.4.1
+
+jedi=0.18.0
+
+jinja2=3.0.1
+
+joblib=1.0.1
+
+jpeg=9b
+
+jupyter_client=7.0.3
+
+jupyter_core=4.8.1
+
+keras=2.4.3
+
+keras-applications=1.0.8
+
+keras-preprocessing=1.1.2
+
+keras2onnx=1.6.5
+
+kiwisolver=1.3.2
+
+koalas=1.8.0
+
+krb5=1.19.2
+
+lame=3.100
+
+ld_impl_linux-64=2.36.1
+
+liac-arff=2.5.0
+
+libblas=3.9.0
+
+libbrotlicommon=1.0.9
+
+libbrotlidec=1.0.9
+
+libbrotlienc=1.0.9
+
+libcblas=3.9.0
+
+libcurl=7.79.1
+
+libedit=3.1.20191231
+
+libev=4.33
+
+libevent=2.1.10
+
+libffi=3.4.2
+
+libgcc-ng=11.2.0
+
+libgfortran-ng=11.2.0
+
+libgfortran5=11.2.0
+
+libgirepository=1.68.0
+
+libglib=2.68.4
+
+libiconv=1.16
+
+liblapack=3.9.0
+
+liblapacke=3.9.0
+
+libllvm11=11.1.0
+
+libnghttp2=1.43.0
+
+libpng=1.6.37
+
+libprotobuf=3.16.0
+
+libsodium=1.0.18
+
+libssh2=1.10.0
+
+libstdcxx-ng=11.2.0
+
+libthrift=0.14.1
+
+libtiff=4.2.0
+
+libutf8proc=2.6.1
+
+libuuid=2.32.1
+
+libuv=1.42.0
+
+libwebp-base=1.2.1
+
+libxcb=1.13
+
+libxgboost=1.4.0
+
+libxml2=2.9.9
+
+lightgbm=3.2.1
+
+lime=0.2.0.1
+
+llvm-openmp=12.0.1
+
+llvmlite=0.37.0
+
+locket=0.2.0
+
+lz4-c=1.9.3
+
+markdown=3.3.4
+
+markupsafe=2.0.1
+
+matplotlib=3.4.2
+
+matplotlib-base=3.4.2
+
+matplotlib-inline=0.1.3
+
+mkl=2021.3.0
+
+mkl-devel=2021.3.0
+
+mkl-include=2021.3.0
+
+mleap=0.17.0
+
+mlflow-skinny=1.18.0
+
+msal=2021.09.01
+
+msrest=2021.09.01
+
+multidict=5.1.0
+
+multiprocess=0.70.12.2
+
+ncurses=6.2
+
+nest-asyncio=1.5.1
+
+nettle=3.6
+
+networkx=2.5
+
+ninja=1.10.2
+
+nltk=3.6.2
+
+numba=0.54.0
+
+numpy=1.19.4
+
+oauthlib=3.1.1
+
+olefile=0.46
+
+onnx=1.9.0
+
+onnxconverter-common=1.7.0
+
+onnxmltools=1.7.0
+
+onnxruntime=1.7.2
+
+openh264=2.1.1
+
+openpyxl=3.0.7
+
+openssl=1.1.1l
+
+opt_einsum=3.3.0
+
+orc=1.6.7
+
+packaging=21.0
+
+pandas=1.2.3
+
+parquet-cpp=1.5.1
+
+parso=0.8.2
+
+partd=1.2.0
+
+pathos=0.2.8
+
+patsy=0.5.1
+
+pcre=8.45
+
+pexpect=4.8.0
+
+pickleshare=0.7.5
+
+pillow=7.1.2
+
+pip=21.1.1
+
+pixman=0.38.0
+
+pkg-config=0.29.2
+
+plotly=4.14.3
+
+pmdarima=1.8.2
+
+pooch=1.5.1
+
+portalocker=1.7.1
+
+pox=0.3.0
+
+ppft=1.6.6.4
+
+prompt-toolkit=3.0.20
+
+protobuf=3.16.0
+
+psutil=5.8.0
+
+pthread-stubs=0.4
+
+ptyprocess=0.7.0
+
+py-xgboost=1.4.0
+
+py4j=0.10.9
+
+pyarrow=3.0.0
+
+pyasn1=0.4.8
+
+pyasn1-modules=0.2.7
+
+pycairo=1.20.1
+
+pycosat=0.6.3
+
+pycparser=2.20
+
+pydeprecate=0.3.1
+
+pygments=2.10.0
+
+pygobject=3.40.1
+
+pyjwt=2.1.0
+
+pyodbc=4.0.30
+
+pyopenssl=20.0.1
+
+pyparsing=2.4.7
+
+pyqt=5.9.2
+
+pysocks=1.7.1
+
+pyspark=3.1.2
+
+python=3.8.12
+
+python-dateutil=2.8.2
+
+python_abi=3.8
+
+pytorch=1.8.1
+
+pytorch-lightning=1.4.2
+
+pytz=2021.1
+
+pyu2f=0.1.5
+
+pywavelets=1.1.1
+
+pyyaml=5.4.1
+
+pyzmq=22.3.0
+
+qt=5.9.7
+
+re2=2021.04.01
+
+readline=8.1
+
+regex=2021.8.28
+
+requests=2.25.1
+
+requests-oauthlib=1.3.0
+
+retrying=1.3.3
+
+rsa=4.7.2
+
+ruamel_yaml=0.15.80
+
+s2n=1.0.10
+
+salib=1.4.5
+
+scikit-image=0.18.1
+
+scikit-learn=0.23.2
+
+scipy=1.7.1
+
+seaborn=0.11.1
+
+seaborn-base=0.11.1
+
+setuptools=49.6.0
+
+shap=0.39.0
+
+sip=4.19.13
+
+skl2onnx=1.8.0.1
+
+sklearn-pandas=2.2.0
+
+slicer=0.0.7
+
+smart_open=5.2.1
+
+smmap=3.0.5
+
+snappy=1.1.8
+
+soupsieve=2.0.1
+
+sqlite=3.36.0
+
+statsmodels=0.12.2
+
+tabulate=0.8.9
+
+tbb=2021.3.0
+
+tenacity=8.0.1
+
+tensorboard=2.6.0
+
+tensorboard-data-server=0.6.0
+
+tensorboard-plugin-wit=1.8.0
+
+tensorflow=2.4.1
+
+tensorflow-base=2.4.1
+
+termcolor=1.1.0
+
+textblob=0.15.3
+
+threadpoolctl=2.2.0
+
+tifffile=2020.6.3
+
+tk=8.6.11
+
+toolz=0.11.1
+
+torchaudio=0.8.1
+
+torchmetrics=0.5.1
+
+torchvision=0.9.1
+
+tornado=6.1
+
+tqdm=4.62.3
+
+traitlets=5.1.0
+
+unixodbc=2.3.9
+
+urllib3=1.26.4
+
+wcwidth=0.2.5
+
+webencodings=0.5.1
+
+werkzeug=2.0.1
+
+wheel=0.37.0
+
+wrapt=1.12.1
+
+xgboost=1.4.0
+
+xorg-kbproto=1.0.7