Updates from: 01/14/2022 02:06:42
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Identity Provider Generic Saml Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-generic-saml-options.md
Previously updated : 08/25/2021 Last updated : 01/13/2022
The following example shows the `ForceAuthN` property in an authorization request:
</samlp:AuthnRequest> ```
+### Provider name
+
+You can optionally include the `ProviderName` attribute in the SAML authorization request. Set the metadata item as shown below to include the provider name for all requests to the external SAML IDP. The following example shows the `ProviderName` property set to `Contoso app`:
+
+```xml
+<Metadata>
+ ...
+ <Item Key="ProviderName">Contoso app</Item>
+ ...
+</Metadata>
+```
+
+The following example shows the `ProviderName` property in an authorization request:
++
+```xml
+<samlp:AuthnRequest AssertionConsumerServiceURL="https://..." ...
+ ProviderName="Contoso app">
+ ...
+</samlp:AuthnRequest>
+```
++ ### Include authentication context class references A SAML authorization request may contain an **AuthnContext** element, which specifies the context of an authorization request. The element can contain an authentication context class reference, which tells the SAML identity provider which authentication mechanism to present to the user.
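For illustration (a sketch, not quoted from the article), a standard SAML 2.0 authorization request carries the class reference inside a `RequestedAuthnContext` element; the class URI below is one common example value:

```xml
<samlp:AuthnRequest AssertionConsumerServiceURL="https://..." ...>
  ...
  <!-- Example only: asks the IdP to authenticate the user with a password over a protected transport -->
  <samlp:RequestedAuthnContext Comparison="exact">
    <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport</saml:AuthnContextClassRef>
  </samlp:RequestedAuthnContext>
</samlp:AuthnRequest>
```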
active-directory-b2c Identity Provider Microsoft Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-microsoft-account.md
Previously updated : 09/16/2021 Last updated : 01/13/2022
active-directory-b2c Manage User Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/manage-user-access.md
Previously updated : 03/09/2021 Last updated : 01/13/2022
active-directory-b2c Quickstart Native App Desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/quickstart-native-app-desktop.md
Previously updated : 08/16/2021 Last updated : 01/13/2022
active-directory-b2c Quickstart Single Page App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/quickstart-single-page-app.md
Previously updated : 04/04/2020 Last updated : 01/13/2022
active-directory-b2c Tutorial Create User Flows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/tutorial-create-user-flows.md
To enable [self-service password reset](add-password-reset-policy.md) for the si
1. Select the sign-up or sign-in user flow you created. 1. Under **Settings** in the left menu, select **Properties**.
-1. Under **Password complexity**, select **Self-service password reset**.
+1. Under **Password configuration**, select **Self-service password reset**.
1. Select **Save**. ### Test the user flow
active-directory-domain-services Tutorial Configure Ldaps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/tutorial-configure-ldaps.md
To complete this tutorial, you need the following resources and privileges:
* If needed, [create and configure an Azure Active Directory Domain Services managed domain][create-azure-ad-ds-instance]. * The *LDP.exe* tool installed on your computer. * If needed, [install the Remote Server Administration Tools (RSAT)][rsat] for *Active Directory Domain Services and LDAP*.
+* You need global administrator privileges in your Azure AD tenant to enable secure LDAP.
## Sign in to the Azure portal
active-directory Accidental Deletions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/accidental-deletions.md
# Enable accidental deletions prevention in the Azure AD provisioning service (Preview)
-The Azure AD provisioning service includes a feature to help avoid accidental deletions. This feature ensures that users are not disabled or deleted in an application unexpectedly.
+The Azure AD provisioning service includes a feature to help avoid accidental deletions. This feature ensures that users aren't disabled or deleted in an application unexpectedly.
The feature lets you specify a deletion threshold, above which an admin needs to explicitly choose to allow the deletions to be processed.
active-directory Howto Authentication Passwordless Security Key Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-passwordless-security-key-windows.md
Organizations may choose to use one or more of the following methods to enable t
To enable the use of security keys using Intune, complete the following steps:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Microsoft Endpoint Manager admin center](https://endpoint.microsoft.com).
1. Browse to **Microsoft Intune** > **Device enrollment** > **Windows enrollment** > **Windows Hello for Business** > **Properties**. 1. Under **Settings**, set **Use security keys for sign-in** to **Enabled**.
Configuration of security keys for sign-in isn't dependent on configuring Window
To target specific device groups to enable the credential provider, use the following custom settings via Intune:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Browse to **Microsoft Intune** > **Device configuration** > **Profiles** > **Create profile**.
+1. Sign in to the [Microsoft Endpoint Manager admin center](https://endpoint.microsoft.com).
+1. Browse to **Device** > **Windows** > **Configuration Profiles** > **Create profile**.
1. Configure the new profile with the following settings: - Name: Security Keys for Windows Sign-In - Description: Enables FIDO Security Keys to be used during Windows Sign In - Platform: Windows 10 and later
- - Profile type: Custom
+ - Profile type: Template > Custom
- Custom OMA-URI Settings: - Name: Turn on FIDO Security Keys for Windows Sign-In - OMA-URI: ./Device/Vendor/MSFT/PassportForWork/SecurityKey/UseSecurityKeyForSignin
If you'd like to share feedback or encounter issues about this feature, share vi
[Learn more about device registration](../devices/overview.md)
-[Learn more about Azure AD Multi-Factor Authentication](../authentication/howto-mfa-getstarted.md)
+[Learn more about Azure AD Multi-Factor Authentication](../authentication/howto-mfa-getstarted.md)
active-directory Howto Mfa Userstates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-userstates.md
All users start out *Disabled*. When you enroll users in per-user Azure AD Multi
To view and manage user states, complete the following steps to access the Azure portal page:
-1. Sign in to the [Azure portal](https://portal.azure.com) as an administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as a Global administrator.
1. Search for and select *Azure Active Directory*, then select **Users** > **All users**.
-1. Select **Multi-Factor Authentication**. You may need to scroll to the right to see this menu option. Select the example screenshot below to see the full Azure portal window and menu location:
+1. Select **Per-user MFA**. You may need to scroll to the right to see this menu option. Select the example screenshot below to see the full Azure portal window and menu location:
[![Select Multi-Factor Authentication from the Users window in Azure AD.](media/howto-mfa-userstates/selectmfa-cropped.png)](media/howto-mfa-userstates/selectmfa.png#lightbox) 1. A new page opens that displays the user state, as shown in the following example. ![Screenshot that shows example user state information for Azure AD Multi-Factor Authentication](./media/howto-mfa-userstates/userstate1.png)
To configure Azure AD Multi-Factor Authentication settings, see [Configure Azur
To manage user settings for Azure AD Multi-Factor Authentication, see [Manage user settings with Azure AD Multi-Factor Authentication](howto-mfa-userdevicesettings.md).
-To understand why a user was prompted or not prompted to perform MFA, see [Azure AD Multi-Factor Authentication reports](howto-mfa-reporting.md).
+To understand why a user was prompted or not prompted to perform MFA, see [Azure AD Multi-Factor Authentication reports](howto-mfa-reporting.md).
active-directory Howto Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-registration-mfa-sspr-combined.md
To make sure you understand the functionality and effects before you enable the
To enable combined registration, complete these steps: 1. Sign in to the Azure portal as a user administrator or global administrator.
-2. Go to **Azure Active Directory** > **User settings** > **Manage user feature preview settings**.
+2. Go to **Azure Active Directory** > **User settings** > **Manage user feature settings**.
3. Under **Users can use the combined security information registration experience**, choose to enable for a **Selected** group of users or for **All** users. ![Enable the combined security info experience for users](media/howto-registration-mfa-sspr-combined/enable-the-combined-security-info.png)
active-directory Tutorial Configure Custom Password Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/tutorial-configure-custom-password-protection.md
To give you flexibility in what passwords are allowed, you can also define a cus
* Locations, such as company headquarters * Company-specific internal terms * Abbreviations that have specific company meaning
+* Months and weekdays in your company's local languages
When a user attempts to reset a password to something that's on the global or custom banned password list, they see one of the following error messages:
In this tutorial, you enabled and configured custom password protection lists fo
> * Test password changes with a banned password > [!div class="nextstepaction"]
-> [Enable risk-based Azure AD Multi-Factor Authentication](./tutorial-enable-azure-mfa.md)
+> [Enable risk-based Azure AD Multi-Factor Authentication](./tutorial-enable-azure-mfa.md)
active-directory Tutorial Enable Cloud Sync Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/tutorial-enable-cloud-sync-sspr-writeback.md
Azure Active Directory Connect cloud sync self-service password reset writeback
- [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator) and [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) roles - [Global Administrator](../roles/permissions-reference.md#global-administrator) role - Azure AD configured for self-service password reset. If needed, complete this tutorial to enable Azure AD SSPR. -- An on-premises AD DS environment configured with Azure AD Connect cloud sync version 1.1.587 or later. If needed, configure Azure AD Connect cloud sync using [this tutorial](tutorial-enable-sspr.md).
+- An on-premises AD DS environment configured with Azure AD Connect cloud sync version 1.1.587 or later. Learn how to [identify the agent's current version](../cloud-sync/how-to-automatic-upgrade.md). If needed, configure Azure AD Connect cloud sync using [this tutorial](tutorial-enable-sspr.md).
- Enabling password writeback in Azure AD Connect cloud sync requires executing signed PowerShell scripts. - Ensure that the PowerShell execution policy will allow running of scripts. - The recommended execution policy during installation is "RemoteSigned".
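For example, you can check and set the execution policy with the built-in PowerShell cmdlets (a sketch; choose the scope that's appropriate for your environment):

```powershell
# Review the current execution policies for each scope
Get-ExecutionPolicy -List

# Apply the recommended policy for the installation (machine-wide scope assumed here)
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope LocalMachine
```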
Permissions for cloud sync are configured by default. If permissions need to be
### Enable password writeback in Azure AD Connect cloud sync
-For public preview, you need to enable password writeback in Azure AD Connect cloud sync by using the Set-AADCloudSyncPasswordWritebackConfiguration cmdlet and tenant's global administrator credentials:
+For public preview, you need to enable password writeback in Azure AD Connect cloud sync by running the Set-AADCloudSyncPasswordWritebackConfiguration cmdlet on the servers that run the provisioning agent. You need the tenant's global administrator credentials:
```powershell Import-Module 'C:\\Program Files\\Microsoft Azure AD Connect Provisioning Agent\\Microsoft.CloudSync.Powershell.dll'
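# Sketch of the remaining steps (parameter names below are illustrative assumptions, not quoted from the article)
$credential = Get-Credential   # prompts for the tenant's global administrator credentials
Set-AADCloudSyncPasswordWritebackConfiguration -Enable $true -Credential $credential
```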
active-directory Tutorial Enable Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/tutorial-enable-sspr-writeback.md
To correctly work with SSPR writeback, the account specified in Azure AD Connect
* **Write permissions** on `pwdLastSet` * **Extended rights** for "Unexpire Password" on the root object of *each domain* in that forest, if not already set.
-If you don't assign these permissions, writeback may appear to be configured correctly, but users encounter errors when they manage their on-premises passwords from the cloud. Permissions must be applied to **This object and all descendant objects** for "Unexpire Password" to appear.
+If you don't assign these permissions, writeback may appear to be configured correctly, but users encounter errors when they manage their on-premises passwords from the cloud. When you set the "Unexpire Password" permission in Active Directory, apply it to **This object and all descendant objects**, **This object only**, or **All descendant objects**; otherwise, the "Unexpire Password" permission isn't displayed.
> [!TIP] >
active-directory Active Directory How Applications Are Added https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-how-applications-are-added.md
Allowing users to register and consent to applications might initially sound con
If you still want to prevent users in your directory from registering applications and from signing in to applications without administrator approval, there are two settings that you can change to turn off those capabilities:
-* To prevent users from consenting to applications on their own behalf:
- 1. In the Azure portal, go to the [User settings](https://portal.azure.com/#blade/Microsoft_AAD_IAM/StartboardApplicationsMenuBlade/UserSettings/menuId/) section under Enterprise applications.
- 2. Change **Users can consent to apps accessing company data on their behalf** to **No**.
-
- > [!NOTE]
- > If you decide to turn off user consent, an admin will be required to consent to any new application a user needs to use.
+* To change the user consent settings in your organization, see [Configure how users consent to applications](../manage-apps/configure-user-consent.md).
* To prevent users from registering their own applications: 1. In the Azure portal, go to the [User settings](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/UserSettings) section under Azure Active Directory 2. Change **Users can register applications** to **No**. > [!NOTE]
-> Microsoft itself uses the default configuration with users able to register applications and consent to applications on their own behalf.
+> Microsoft itself uses the default configuration allowing users to register applications and only allows user consent for a very limited set of permissions.
<!--Image references-->
-[apps_service_principals_directory]:../media/active-directory-how-applications-are-added/HowAppsAreAddedToAAD.jpg
+[apps_service_principals_directory]:../media/active-directory-how-applications-are-added/HowAppsAreAddedToAAD.jpg
active-directory Scenario Web App Call Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-web-app-call-api-overview.md
Title: Build a web app that calls web APIs | Azure
+ Title: Build a web app that authenticates users and calls web APIs | Azure
-description: Learn how to build a web app that calls web APIs (overview)
+description: Learn how to build a web app that authenticates users and calls web APIs (overview)
Last updated 07/14/2020
-#Customer intent: As an application developer, I want to know how to write a web app that calls web APIs by using the Microsoft identity platform.
+#Customer intent: As an application developer, I want to know how to write a web app that authenticates users and calls web APIs by using the Microsoft identity platform.
-# Scenario: A web app that calls web APIs
+# Scenario: A web app that authenticates users and calls web APIs
Learn how to build a web app that signs users in to the Microsoft identity platform, and then calls web APIs on behalf of the signed-in user.
Development for this scenario involves these specific tasks:
## Next steps Move on to the next article in this scenario,
-[App registration](scenario-web-app-call-api-app-registration.md).
+[App registration](scenario-web-app-call-api-app-registration.md).
active-directory Workload Identity Federation Create Trust https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/workload-identity-federation-create-trust.md
Previously updated : 10/25/2021 Last updated : 01/10/2022
Find the object ID of the app (not the application (client) ID), which you need
Get the information for your external IdP and software workload, which you need in the following steps.
-## Configure a federated identity credential using Microsoft Graph
+The Microsoft Graph beta endpoint (`https://graph.microsoft.com/beta`) exposes REST APIs to create, update, delete [federatedIdentityCredentials](/graph/api/resources/federatedidentitycredential?view=graph-rest-beta&preserve-view=true) on applications. Launch [Azure Cloud Shell](https://portal.azure.com/#cloudshell/) and sign in to your tenant.
-Launch [Azure Cloud Shell](https://portal.azure.com/#cloudshell/) and sign in to your tenant.
+## Configure a federated identity credential
-### Create a federated identity credential
+Run the Microsoft Graph [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) operation on your app (specified by the object ID of the app).
-Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) on your app (specified by the object ID of the app).
+*issuer* and *subject* are the key pieces of information needed to set up the trust relationship. *issuer* is the URL of the external identity provider and must match the `issuer` claim of the external token being exchanged. *subject* is the identifier of the external software workload and must match the `sub` (`subject`) claim of the external token being exchanged. *subject* has no fixed format, as each IdP uses their own - sometimes a GUID, sometimes a colon delimited identifier, sometimes arbitrary strings. The combination of `issuer` and `subject` must be unique on the app. When the external software workload requests Microsoft identity platform to exchange the external token for an access token, the *issuer* and *subject* values of the federated identity credential are checked against the `issuer` and `subject` claims provided in the external token. If that validation check passes, Microsoft identity platform issues an access token to the external software workload.
-*issuer* and *subject* are the key pieces of information needed to set up the trust relationship. *issuer* is the URL of the external identity provider and must match the `issuer` claim of the external token being exchanged. *subject* is the identifier of the external software workload and must match the `sub` (`subject`) claim of the external token being exchanged. *subject* has no fixed format, as each IdP uses their own - sometimes a GUID, sometimes a colon delimited identifier, sometimes arbitrary strings. The combination of `issuer` and `subject` must be unique on the app. When the external software workload requests Microsoft identity platform to exchange the external token for an access token, the *issuer* and *subject* values of the federated identity credential are checked against the `issuer` and `subject` claims provided in the external token. If that validation check passes, Microsoft identity platform issues an access token to the external software workload.
+> [!IMPORTANT]
+> If you accidentally add incorrect external workload information in the *subject* setting, the federated identity credential is created successfully without error. The error does not become apparent until the token exchange fails.
*audiences* lists the audiences that can appear in the external token. This field is mandatory, and defaults to "api://AzureADTokenExchange". It says what Microsoft identity platform must accept in the `aud` claim in the incoming token. This value represents Azure AD in your external identity provider and has no fixed value across identity providers - you may need to create a new application registration in your IdP to serve as the audience of this token.
Run the following command to [create a new federated identity credential](/graph
*description* is the un-validated, user-provided description of the federated identity credential.
+### GitHub Actions example
Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) on your app (specified by the object ID of the app). The *issuer* identifies GitHub as the external token issuer. *subject* identifies the GitHub organization, repo, and environment for your GitHub Actions workflow. When the GitHub Actions workflow requests Microsoft identity platform to exchange a GitHub token for an access token, the values in the federated identity credential are checked against the provided GitHub token. ```azurecli
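# Sketch only: the issuer is GitHub's OIDC token issuer; the subject below uses placeholder org/repo/environment values
az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials' --body '{"name":"GitHub-Actions-federated-credential","issuer":"https://token.actions.githubusercontent.com","subject":"repo:<ORG>/<REPO>:environment:<ENVIRONMENT>","description":"GitHub Actions federated credential","audiences":["api://AzureADTokenExchange"]}'
```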
And you get the response:
} ```
-> [!IMPORTANT]
-> If you accidentally add the incorrect external workload information in the *subject* setting the federated identity credential is created successfully without error. The error does not become apparent until the token exchange fails.
+### Kubernetes example
+Run the following command to configure a federated identity credential on an app and create a trust relationship with a Kubernetes service account. The *issuer* is your service account issuer URL. *subject* is the subject name in the tokens issued to the service account. Kubernetes uses the following format for subject names: `system:serviceaccount:<SERVICE_ACCOUNT_NAMESPACE>:<SERVICE_ACCOUNT_NAME>`.
+
+```azurecli
+az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials' --body '{"name":"Kubernetes-federated-credential","issuer":"https://aksoicwesteurope.blob.core.windows.net/9d80a3e1-2a87-46ea-ab16-e629589c541c/","subject":"system:serviceaccount:erp8asle:pod-identity-sa","description":"Kubernetes service account federated credential","audiences":["api://AzureADTokenExchange"]}'
+```
+
+And you get the response:
+```azurecli
+{
+ "@odata.context": "https://graph.microsoft.com/beta/$metadata#applications('f6475511-fd81-4965-a00e-41e7792b7b9c')/federatedIdentityCredentials/$entity",
+ "audiences": [
+ "api://AzureADTokenExchange"
+ ],
+ "description": "Kubernetes service account federated credential",
+ "id": "51ecf9c3-35fc-4519-a28a-8c27c6178bca",
+ "issuer": "https://aksoicwesteurope.blob.core.windows.net/9d80a3e1-2a87-46ea-ab16-e629589c541c/",
+ "name": "Kubernetes-federated-credential",
+ "subject": "system:serviceaccount:erp8asle:pod-identity-sa"
+}
+```
-### List federated identity credentials on an app
+## List federated identity credentials on an app
Run the following command to [list the federated identity credential(s)](/graph/api/application-list-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for an app (specified by the object ID of the app):
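A minimal sketch of such a call, reusing the app object ID from the earlier examples:

```azurecli
az rest -m GET -u 'https://graph.microsoft.com/beta/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials'
```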
And you get a response similar to:
} ```
-### Delete a federated identity credential
+## Delete a federated identity credential
Run the following command to [delete a federated identity credential](/graph/api/application-list-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) from an app (specified by the object ID of the app):
az rest -m DELETE -u 'https://graph.microsoft.com/beta/applications/f6475511-fd
``` ## Next steps
-For more information, read about how Azure AD uses the [OAuth 2.0 client credentials grant](v2-oauth2-client-creds-grant-flow.md#third-case-access-token-request-with-a-federated-credential) and a client assertion issued by another IdP to get a token.
+- To learn how to use workload identity federation for Kubernetes, see [Azure AD Workload Identity for Kubernetes](https://azure.github.io/azure-workload-identity/docs/quick-start.html) open source project.
+- To learn how to use workload identity federation for GitHub Actions, see [Configure a GitHub Actions workflow to get an access token](/azure/developer/github/connect-from-azure).
+- For more information, read about how Azure AD uses the [OAuth 2.0 client credentials grant](v2-oauth2-client-creds-grant-flow.md#third-case-access-token-request-with-a-federated-credential) and a client assertion issued by another IdP to get a token.
active-directory Workload Identity Federation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/workload-identity-federation.md
Previously updated : 10/20/2021 Last updated : 01/10/2022
# Workload identity federation (preview) This article provides an overview of workload identity federation for software workloads. Using workload identity federation allows you to access Azure Active Directory (Azure AD) protected resources without needing to manage secrets (for supported scenarios).
-You can use workload identity federation in scenarios such as GitHub Actions.
+You can use workload identity federation in scenarios such as GitHub Actions and workloads running on Kubernetes.
## Why use workload identity federation?
You use workload identity federation to configure an Azure AD app registration t
The following scenarios are supported for accessing Azure AD protected resources using workload identity federation: - GitHub Actions. First, [Configure a trust relationship](workload-identity-federation-create-trust-github.md) between your app in Azure AD and a GitHub repo in the Azure portal or using Microsoft Graph. Then [configure a GitHub Actions workflow](/azure/developer/github/connect-from-azure) to get an access token from Microsoft identity provider and access Azure resources.
+- Workloads running on Kubernetes. Install the Azure AD workload identity webhook and establish a trust relationship between your app in Azure AD and a Kubernetes workload (described in the [Kubernetes quickstart](https://azure.github.io/azure-workload-identity/docs/quick-start.html)).
- Workloads running in compute platforms outside of Azure. [Configure a trust relationship](workload-identity-federation-create-trust.md) between your Azure AD application registration and the external IdP for your compute platform. You can use tokens issued by that platform to authenticate with Microsoft identity platform and call APIs in the Microsoft ecosystem. Use the [client credentials flow](v2-oauth2-client-creds-grant-flow.md#third-case-access-token-request-with-a-federated-credential) to get an access token from Microsoft identity platform, passing in the identity provider's JWT instead of creating one yourself using a stored certificate. ## How it works
active-directory Redemption Experience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/redemption-experience.md
When you add a guest user to your directory by [using the Azure portal](./b2b-qu
4. The guest is guided through the [consent experience](#consent-experience-for-the-guest) described below. ## Redemption limitation with conflicting Contact object
-Sometimes the invited external guest user's email may conflict with an existing [Contact object](/graph/api/resources/contact), resulting in the guest user being created without a proxyAddress. This is a known limitation that prevents guest users from:
-- Redeeming an invitation through a direct link using [SAML/WS-Fed IdP](./direct-federation.md), [Microsoft Accounts](./microsoft-account.md), [Google Federation](./google-federation.md), or [Email One-Time Passcode](./one-time-passcode.md) accounts. -- Redeeming an invitation through an invitation email redemption link using [SAML/WS-Fed IdP](./direct-federation.md) and [Email One-Time Passcode](./one-time-passcode.md) accounts.-- Signing back into an application after redemption using [SAML/WS-Fed IdP](./direct-federation.md) and [Google Federation](./google-federation.md) accounts.
+Sometimes the invited external guest user's email may conflict with an existing [Contact object](/graph/api/resources/contact), resulting in the guest user being created without a proxyAddress. This is a known limitation that prevents guest users from redeeming an invitation through a direct link using [SAML/WS-Fed IdP](./direct-federation.md), [Microsoft Accounts](./microsoft-account.md), [Google Federation](./google-federation.md), or [Email One-Time Passcode](./one-time-passcode.md) accounts.
+
+However, the following scenarios should continue to work:
+- Redeeming an invitation through an invitation email redemption link using [SAML/WS-Fed IdP](./direct-federation.md), [Email One-Time Passcode](./one-time-passcode.md), and [Google Federation](./google-federation.md) accounts.
+- Signing back into an application after redemption using [SAML/WS-Fed IdP](./direct-federation.md) and [Google Federation](./google-federation.md) accounts.
To unblock users who can't redeem an invitation due to a conflicting [Contact object](/graph/api/resources/contact), follow these steps:
-1. Delete the conflicting Contact object.
-2. Delete the guest user in the Azure portal (the user's "Invitation accepted" property should be in a pending state).
-3. Re-invite the guest user.
-4. Wait for the user to redeem invitation.
-5. Add the user's Contact email back into Exchange and any DLs they should be a part of.
+1. Delete the conflicting Contact object.
+2. Delete the guest user in the Azure portal (the user's "Invitation accepted" property should be in a pending state).
+3. Re-invite the guest user.
+4. Wait for the user to redeem invitation.
+5. Add the user's Contact email back into Exchange and any DLs they should be a part of.
+ ## Invitation redemption flow
If you see an error that requires admin consent while accessing an application,
- [Add Azure Active Directory B2B collaboration users in the Azure portal](add-users-administrator.md) - [How do information workers add B2B collaboration users to Azure Active Directory?](add-users-information-worker.md) - [Add Azure Active Directory B2B collaboration users by using PowerShell](customize-invitation-api.md#powershell)-- [Leave an organization as a guest user](leave-the-organization.md)
+- [Leave an organization as a guest user](leave-the-organization.md)
active-directory Configure Admin Consent Workflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-admin-consent-workflow.md
To enable the admin consent workflow and choose reviewers:
1. Select **Enterprise applications**. 1. Under **Manage**, select **User settings**. Under **Admin consent requests**, select **Yes** for **Users can request admin consent to apps they are unable to consent to**.
- :::image type="content" source="media/configure-admin-consent-workflow/admin-consent-requests-settings.png" alt-text="Configure admin consent workflow settings":::
+ :::image type="content" source="media/configure-admin-consent-workflow/enable-admin-consent-workflow.png" alt-text="Configure admin consent workflow settings":::
1. Configure the following settings: - **Select users to review admin consent requests** - Select reviewers for this workflow from a set of users that have the global administrator, cloud application administrator, or application administrator roles. You can also add groups and roles that can configure an admin consent workflow. You must designate at least one reviewer before the workflow can be enabled.
active-directory Manage App Consent Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/manage-app-consent-policies.md
Title: Manage app consent policies
description: Learn how to manage built-in and custom app consent policies to control when consent can be granted. --+ Last updated 09/02/2021--+ #customer intent: As an admin, I want to manage app consent policies for enterprise applications in Azure AD
App consent policies where the ID begins with "microsoft-" are built-in policies
## Pre-requisites
-1. Make sure you're using the [AzureADPreview](/powershell/module/azuread/?preserve-view=true&view=azureadps-2.0-preview) module. This step is important if you have installed both the [AzureAD](/powershell/module/azuread/) module and the [AzureADPreview](/powershell/module/azuread/?preserve-view=true&view=azureadps-2.0-preview) module).
-
- ```powershell
- Remove-Module AzureAD -ErrorAction SilentlyContinue
- Import-Module AzureADPreview
- ```
-
-1. Connect to Azure AD PowerShell.
+1. Connect to [Azure AD PowerShell](/powershell/module/azuread/).
```powershell Connect-AzureAD
active-directory Plan Issuance Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/plan-issuance-solution.md
These services provide supporting roles that don't necessarily need to integrate
* **Additional middle-tier services** that contain business rules for lookups, validating, billing, and any other runtime checks and workflows needed to issue credentials.
-For more information on setting up your web front end, see the tutorial [Configure you Azure AD to issue verifiable credentials](../verifiable-credentials/enable-your-tenant-verifiable-credentials.md).
+For more information on setting up your web front end, see the tutorial [Configure your Azure AD to issue verifiable credentials](../verifiable-credentials/enable-your-tenant-verifiable-credentials.md).
## Credential Design Considerations
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/concepts-clusters-workloads.md
To run your applications and supporting services, you need a Kubernetes *node*.
| Component | Description | | -- | - |
-| `kubelet` | The Kubernetes agent that processes the orchestration requests from the control plane and scheduling of running the requested containers. |
+| `kubelet` | The Kubernetes agent that processes the orchestration requests from the control plane along with scheduling and running the requested containers. |
| *kube-proxy* | Handles virtual networking on each node. The proxy routes network traffic and manages IP addressing for services and pods. | | *container runtime* | Allows containerized applications to run and interact with additional resources, such as the virtual network and storage. AKS clusters using Kubernetes version 1.19+ for Linux node pools use `containerd` as their container runtime. Beginning in Kubernetes version 1.20 for Windows node pools, `containerd` can be used in preview for the container runtime, but Docker is still the default container runtime. AKS clusters using prior versions of Kubernetes for node pools use Docker as their container runtime. |
aks Csi Secrets Store Identity Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/csi-secrets-store-identity-access.md
Azure Active Directory (Azure AD) pod-managed identities use AKS primitives to a
1. To access your key vault, you can use the user-assigned managed identity that you created when you [enabled a managed identity on your AKS cluster][use-managed-identity]: ```azurecli-interactive
- az aks show -g <resource-group> -n <cluster-name> --query identityProfile.kubeletidentity.clientId -o tsv
+ az aks show -g <resource-group> -n <cluster-name> --query addonProfiles.azureKeyvaultSecretsProvider.identity.clientId -o tsv
``` Alternatively, you can create a new managed identity and assign it to your virtual machine (VM) scale set or to each VM instance in your availability set:
aks Kubernetes Walkthrough Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough-powershell.md
Title: 'Quickstart: Deploy an AKS cluster by using PowerShell'
-description: Learn how to quickly create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using PowerShell.
+description: Learn how to quickly create a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using PowerShell.
Previously updated : 03/15/2021 Last updated : 01/13/2022
-#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run and monitor applications using the managed Kubernetes service in Azure.
+#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
# Quickstart: Deploy an Azure Kubernetes Service cluster using PowerShell
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you will: * Deploy an AKS cluster using PowerShell. * Run a multi-container application with a web front-end and a Redis instance in the cluster.
-* Monitor the health of the cluster and pods that run your application.
To learn more about creating a Windows Server node pool, see [Create an AKS cluster that supports Windows Server containers][windows-container-powershell].
ResourceId : /subscriptions/00000000-0000-0000-0000-000000000000/resource
* [Quick steps: Create and use an SSH public-private key pair for Linux VMs in Azure](../virtual-machines/linux/mac-create-ssh-keys.md) * [How to use SSH keys with Windows on Azure](../virtual-machines/linux/ssh-from-windows.md)
-1. Create an AKS cluster using the [New-AzAksCluster][new-azakscluster] cmdlet. Azure Monitor for containers is enabled by default.
+1. Create an AKS cluster using the [New-AzAksCluster][new-azakscluster] cmdlet.
The following example creates a cluster named **myAKSCluster** with one node.
To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl
3. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes. ```azurepowershell-interactive
- .\kubectl get nodes
+ kubectl get nodes
``` Output shows the single node created in the previous steps. Make sure the node status is *Ready*:
Two [Kubernetes Services][kubernetes-service] are also created:
1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest: ```azurepowershell-interactive
- .\kubectl apply -f azure-vote.yaml
+ kubectl apply -f azure-vote.yaml
``` Output shows the successfully created deployments and
When the application runs, a Kubernetes service exposes the application front en
Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument. ```azurepowershell-interactive
-.\kubectl get service azure-vote-front --watch
+kubectl get service azure-vote-front --watch
``` The **EXTERNAL-IP** output for the `azure-vote-front` service will initially show as *pending*.
To see the Azure Vote app in action, open a web browser to the external IP addre
![Voting app deployed in Azure Kubernetes Service](./media/kubernetes-walkthrough-powershell/voting-app-deployed-in-azure-kubernetes-service.png)
-View the cluster nodes' and pods' health metrics captured by Azure Monitor for containers in the Azure portal.
- ## Delete the cluster To avoid Azure charges, clean up your unnecessary resources. Use the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet to remove the resource group, container service, and all related resources.
aks Kubernetes Walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough.md
To learn more about creating a Windows Server node pool, see [Create an AKS clus
- This article requires version 2.0.64 or greater of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. - The identity you are using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](concepts-identity.md).
+- Verify *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* are registered on your subscription. To check the registration status:
+
+ ```azurecli
+ az provider show -n Microsoft.OperationsManagement -o table
+ az provider show -n Microsoft.OperationalInsights -o table
+ ```
+
+ If they are not registered, register *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* using:
+
+ ```azurecli
+ az provider register --namespace Microsoft.OperationsManagement
+ az provider register --namespace Microsoft.OperationalInsights
+ ```
> [!NOTE] > Run the commands as administrator if you plan to run the commands in this quickstart locally instead of in Azure Cloud Shell.
Output for successfully created resource group:
}, "tags": null }
-```
-
-## Enable cluster monitoring
-
-Verify *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* are registered on your subscription. To check the registration status:
-
-```azurecli
-az provider show -n Microsoft.OperationsManagement -o table
-az provider show -n Microsoft.OperationalInsights -o table
-```
-
-If they are not registered, register *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* using:
-
-```azurecli
-az provider register --namespace Microsoft.OperationsManagement
-az provider register --namespace Microsoft.OperationalInsights
-```
+```
## Create AKS cluster
-Create an AKS cluster using the [az aks create][az-aks-create] command with the *--enable-addons monitoring* parameter to enable [Azure Monitor for containers][azure-monitor-containers]. The following example creates a cluster named *myAKSCluster* with one node:
+Create an AKS cluster using the [az aks create][az-aks-create] command with the *--enable-addons monitoring* parameter to enable [Azure Monitor container insights][azure-monitor-containers]. The following example creates a cluster named *myAKSCluster* with one node:
```azurecli-interactive az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1 --enable-addons monitoring --generate-ssh-keys
To see the Azure Vote app in action, open a web browser to the external IP addre
![Voting app deployed in Azure Kubernetes Service](./media/container-service-kubernetes-walkthrough/voting-app-deployed-in-azure-kubernetes-service.png)
-View the cluster nodes' and pods' health metrics captured by [Azure Monitor for containers][azure-monitor-containers] in the Azure portal.
+View the cluster nodes' and pods' health metrics captured by [Azure Monitor container insights][azure-monitor-containers] in the Azure portal.
## Delete the cluster
az group delete --name myResourceGroup --yes --no-wait
``` > [!NOTE]
-> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete].
+> If the AKS cluster was created with system-assigned managed identity (default identity option used in this quickstart), the identity is managed by the platform and does not require removal.
>
-> If you used a managed identity, the identity is managed by the platform and does not require removal.
+> If the AKS cluster was created with service principal as the identity option instead, then when you delete the cluster, the service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete].
## Get the code
aks Quickstart Helm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/quickstart-helm.md
cd azure-voting-app-redis/azure-vote/
## Build and push the sample application to the ACR
-Using the preceding Dockerfile, run the [az acr build][az-acr-build] command to build and push an image to the registry. The `.` at the end of the command sets the location of the Dockerfile (in this case, the current directory).
+Using the preceding Dockerfile, run the [az acr build][az-acr-build] command to build and push an image to the registry. The `.` at the end of the command provides the location of the source code directory path (in this case, the current directory). The `--file` parameter takes in the path of the Dockerfile relative to this source code directory path.
```azurecli az acr build --image azure-vote-front:v1 --registry MyHelmACR --file Dockerfile .
az group delete --name MyResourceGroup --yes --no-wait
``` > [!NOTE]
-> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete].
+> If the AKS cluster was created with system-assigned managed identity (default identity option used in this quickstart), the identity is managed by the platform and does not require removal.
>
-> If you used a managed identity, the identity is managed by the platform and does not require removal.
+> If the AKS cluster was created with service principal as the identity option instead, then when you delete the cluster, the service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete].
## Next steps
api-management Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/virtual-network-concepts.md
The minimum size of the subnet in which API Management can be deployed is /29, w
* Two IP addresses per unit of Premium SKU, or * One IP address for the Developer SKU.
-* Each instance reserves an extra IP address for the external load balancer. When deploying into an [internal VNet](./api-management-using-with-internal-vnet.md), the instance requires an extra IP address for the internal load balancer.
+* When deploying into an [internal VNet](./api-management-using-with-internal-vnet.md), the instance requires an extra IP address for the internal load balancer.
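As a worked example of these rules, a two-unit Premium instance deployed into an internal VNet would need 2 × 2 + 1 = 5 IP addresses for API Management itself, in addition to the five addresses that Azure reserves in every subnet.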
## Routing
app-service Configure Common https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-common.md
description: Learn to configure common settings for an App Service app. App sett
keywords: azure app service, web app, app settings, environment variables ms.assetid: 9af8a367-7d39-4399-9941-b80cbc5f39a0 Previously updated : 12/07/2020 Last updated : 01/13/2022
app-service Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/firewall-integration.md
description: Learn how to integrate with Azure Firewall to secure outbound traff
ms.assetid: 955a4d84-94ca-418d-aa79-b57a5eb8cb85 Previously updated : 01/05/2022 Last updated : 01/12/2022
With an Azure Firewall, you automatically get everything below configured with t
| \*.ctldl.windowsupdate.com:443 | | \*.prod.microsoftmetrics.com:443 | | \*.dsms.core.windows.net:443 |
-| \*.prod.warm.ingest.monitor.core.windows.net |
+| \*.prod.warm.ingest.monitor.core.windows.net:443 |
### Linux dependencies
Linux is not available in US Gov regions and is thus not listed as an optional c
|\*.management.usgovcloudapi.net:443 | |\*.update.microsoft.com:443 | |\*.prod.microsoftmetrics.com:443 |
-| \*.prod.warm.ingest.monitor.core.usgovcloudapi.net |
+| \*.prod.warm.ingest.monitor.core.usgovcloudapi.net:443 |
<!--Image references--> [1]: ./media/firewall-integration/firewall-apprule.png
app-service Network Secure Outbound Traffic Azure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/network-secure-outbound-traffic-azure-firewall.md
+
+ Title: 'App Service outbound traffic control with Azure Firewall'
+description: Outbound traffic from App Service to the internet, private IP addresses, and Azure services is routed through Azure Firewall. Learn how to control App Service outbound traffic by using Virtual Network integration and Azure Firewall.
+++ Last updated : 01/13/2022++
+# Control outbound traffic with Azure Firewall
+
+This article shows you how to lock down the outbound traffic from your App Service app to back-end Azure resources or other network resources with [Azure Firewall](../firewall/overview.md). This configuration helps prevent data exfiltration or the risk of malicious program implantation.
+
+By default, an App Service app can make outbound requests to the public internet (for example, when installing required Node.js packages from NPM.org). If your app is [integrated with an Azure virtual network](overview-vnet-integration.md), you can control outbound traffic with [network security groups](../virtual-network/network-security-groups-overview.md) to a limited extent, for example by target IP address, port, and protocol. Azure Firewall lets you control outbound traffic at a much more granular level and filter traffic based on real-time threat intelligence from Microsoft Cyber Security. You can centrally create, enforce, and log application and network connectivity policies across subscriptions and virtual networks (see [Azure Firewall features](../firewall/features.md)).
+
+For detailed network concepts and security enhancements in App Service, see [Networking features](networking-features.md) and [Zero to Hero with App Service, Part 6: Securing your web app](https://azure.github.io/AppService/2020/08/14/zero_to_hero_pt6.html).
+
+## Prerequisites
+
+* [Enable regional virtual network integration](configure-vnet-integration-enable.md) for your app.
+* [Verify that **Route All** is enabled](configure-vnet-integration-routing.md). This setting is enabled by default, which tells App Service to route all outbound traffic through the integrated virtual network. If you disable it, only traffic to private IP addresses will be routed through the virtual network.
+* If you want to route access to back-end Azure services through Azure Firewall as well, [disable all service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) on the App Service subnet in the integrated virtual network. After Azure Firewall is configured, outbound traffic to Azure Services will be routed through the firewall instead of the service endpoints.
+
+## 1. Create the required firewall subnet
+
+To deploy a firewall into the integrated virtual network, you need a subnet called **AzureFirewallSubnet**.
+
+1. In the [Azure portal](https://portal.azure.com), navigate to the virtual network that's integrated with your app.
+1. From the left navigation, select **Subnets** > **+ Subnet**.
+1. In **Name**, type **AzureFirewallSubnet**.
+1. In **Subnet address range**, accept the default or specify a range that's [at least /26 in size](../firewall/firewall-faq.yml#why-does-azure-firewall-need-a--26-subnet-size).
+1. Select **Save**.
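If you prefer the Azure CLI, the subnet can also be created with a command along these lines (a sketch that assumes placeholder resource names and an example /26 address range):

```azurecli
az network vnet subnet create \
  --resource-group <resource-group> \
  --vnet-name <integrated-vnet-name> \
  --name AzureFirewallSubnet \
  --address-prefixes 10.0.1.0/26
```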
+
+## 2. Deploy the firewall and get its IP
+
+1. On the [Azure portal](https://portal.azure.com) menu or from the **Home** page, select **Create a resource**.
+1. Type *firewall* in the search box and press **Enter**.
+1. Select **Firewall** and then select **Create**.
+1. On the **Create a Firewall** page, configure the firewall as shown in the following table:
+
+ | Setting | Value |
+ | - | - |
+ | **Resource group** | Same resource group as the integrated virtual network. |
+ | **Name** | Name of your choice |
+ | **Region** | Same region as the integrated virtual network. |
+ | **Firewall policy** | Create one by selecting **Add new**. |
+ | **Virtual network** | Select the integrated virtual network. |
+ | **Public IP address** | Select an existing address or create one by selecting **Add new**. |
+
+ :::image type="content" source="./media/network-secure-outbound-traffic-azure-firewall/create-azfw.png" alt-text="Screenshot of creating an Azure Firewall in the Azure portal.":::
+
+1. Select **Review + create**.
+1. Select **Create** again.
+
+ This will take a few minutes to deploy.
+
+1. After deployment completes, go to your resource group, and select the firewall.
+1. In the firewall's **Overview** page, copy the private IP address. The private IP address will be used as the next hop address in the routing rule for the virtual network.
+
+ :::image type="content" source="./media/network-secure-outbound-traffic-azure-firewall/firewall-private-ip.png" alt-text="Screenshot of getting the Azure Firewall private IP address.":::
+
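Alternatively, you can read the private IP address with the Azure CLI (a sketch with placeholder names; the `az network firewall` commands come from the `azure-firewall` CLI extension):

```azurecli
az network firewall show \
  --resource-group <resource-group> \
  --name <firewall-name> \
  --query "ipConfigurations[0].privateIpAddress" \
  --output tsv
```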
+## 3. Route all traffic to the firewall
+
+When you create a virtual network, Azure automatically creates a [default route table](../virtual-network/virtual-networks-udr-overview.md#default) for each of its subnets and adds system default routes to the table. In this step, you create a user-defined route table that routes all traffic to the firewall, and then associate it with the App Service subnet in the integrated virtual network.
+
+1. On the [Azure portal](https://portal.azure.com) menu, select **All services** or search for and select **All services** from any page.
+1. Under **Networking**, select **Route tables**.
+1. Select **Add**.
+1. Configure the route table like the following example:
+
+ :::image type="content" source="./media/network-secure-outbound-traffic-azure-firewall/create-route-table.png" alt-text="Screenshot of creating a route table in the Azure portal.":::
+
+ Make sure you select the same region as the firewall you created.
+
+1. Select **Review + create**.
+1. Select **Create**.
+1. After deployment completes, select **Go to resource**.
+1. From the left navigation, select **Routes** > **Add**.
+1. Configure the new route as shown in the following table:
+
+ | Setting | Value |
+ | - | - |
+ | **Address prefix** | *0.0.0.0/0* |
+ | **Next hop type** | **Virtual appliance** |
+ | **Next hop address** | The private IP address for the firewall that you copied in [2. Deploy the firewall and get its IP](#2-deploy-the-firewall-and-get-its-ip). |
+
+1. From the left navigation, select **Subnets** > **Associate**.
+1. In **Virtual network**, select your integrated virtual network.
+1. In **Subnet**, select the App Service subnet.
+
+ :::image type="content" source="./media/network-secure-outbound-traffic-azure-firewall/associate-route-table.png" alt-text="Screenshot of associating the route table with the App Service subnet.":::
+
+1. Select **OK**.
+
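The same route table setup can be scripted with the Azure CLI (a sketch with placeholder names; the next hop address is the firewall private IP you copied in step 2):

```azurecli
# Create the route table and a default route that sends all traffic to the firewall
az network route-table create --resource-group <resource-group> --name MyAppServiceRouteTable
az network route-table route create \
  --resource-group <resource-group> \
  --route-table-name MyAppServiceRouteTable \
  --name DefaultToFirewall \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address <firewall-private-ip>

# Associate the route table with the App Service integration subnet
az network vnet subnet update \
  --resource-group <resource-group> \
  --vnet-name <integrated-vnet-name> \
  --name <app-service-subnet> \
  --route-table MyAppServiceRouteTable
```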
+## 4. Configure firewall policies
+
+Outbound traffic from your app is now routed through the integrated virtual network to the firewall. To control App Service outbound traffic, add an application rule to the firewall policy.
+
+1. Navigate to the firewall's overview page and select its firewall policy.
+
+1. In the firewall policy page, from the left navigation, select **Application Rules** > **Add a rule collection**.
+1. In **Rules**, add an application rule with the App Service subnet as the source address, and specify an FQDN destination. In the screenshot below, the destination FQDN is set to `api.my-ip.io`.
+
+ :::image type="content" source="./media/network-secure-outbound-traffic-azure-firewall/config-azfw-policy-app-rule.png" alt-text="Screenshot of configuring an Azure Firewall policy application rule.":::
+
+ > [!NOTE]
+ > Instead of specifying the App Service subnet as the source address, you can also use the private IP address of the app in the subnet directly. You can find your app's private IP address in the subnet by using the [`WEBSITE_PRIVATE_IP` environment variable](reference-app-settings.md#networking).
+
+1. Select **Add**.
+
+## 5. Verify the outbound traffic
+
+An easy way to verify your configuration is to use the `curl` command from your app's SCM debug console to test the outbound connection.
+
+1. In a browser, navigate to `https://<app-name>.scm.azurewebsites.net/DebugConsole`.
+1. In the console, run `curl -s <protocol>://<fqdn-address>` with a URL that matches the application rule you configured. To continue the example from the previous screenshot, you can use `curl -s https://api.my-ip.io/api`. The following screenshot shows a successful response from the API, which returns the public IP address of your App Service app.
+
+ :::image type="content" source="./media/network-secure-outbound-traffic-azure-firewall/verify-outbound-traffic-fw-allow-rule.png" alt-text="Screenshot of verifying successful outbound traffic by using the curl command in the SCM debug console.":::
+
+1. Run `curl -s <protocol>://<fqdn-address>` again with a URL that doesn't match the application rule you configured. In the following screenshot, you get no response, which indicates that your firewall has blocked the outbound request from the app.
+
+ :::image type="content" source="./media/network-secure-outbound-traffic-azure-firewall/verify-outbound-traffic-fw-no-rule.png" alt-text="Screenshot of sending outbound traffic by using curl command in SCM debug console.":::
+
+> [!TIP]
+> Because these outbound requests are going through the firewall, you can capture them in the firewall logs by [enabling diagnostic logging for the firewall](../firewall/firewall-diagnostics.md#enable-diagnostic-logging-through-the-azure-portal) (enable the **AzureFirewallApplicationRule** log category).
+>
+> If you run the `curl` commands with diagnostic logs enabled, you can find them in the firewall logs.
+>
+> 1. In the Azure portal, navigate to your firewall.
+> 1. From the left navigation, select **Logs**.
+> 1. Close the welcome message by selecting **X**.
+> 1. From All Queries, select **Firewall Logs** > **Application rule log data**.
+> 1. Select **Run**. You can see the two access logs in the query results.
+>
+> :::image type="content" source="./media/network-secure-outbound-traffic-azure-firewall/azfw-application-log-min.png" alt-text="Screenshot of the firewall application rule log entries that correspond to the curl requests from the SCM debug console." lightbox="./media/network-secure-outbound-traffic-azure-firewall/azfw-application-log.png":::
+
+## More resources
+
+[Monitor Azure Firewall logs and metrics](../firewall/firewall-diagnostics.md).
+++
app-service Troubleshoot Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/troubleshoot-diagnostic-logs.md
Title: Enable diagnostics logging
description: Learn how to enable diagnostic logging and add instrumentation to your application, as well as how to access the information logged by Azure. ms.assetid: c9da27b2-47d4-4c33-a3cb-1819955ee43b Previously updated : 07/06/2021 Last updated : 01/13/2022
This article uses the [Azure portal](https://portal.azure.com) and Azure CLI to
## Enable application logging (Windows)
-> [!NOTE]
-> Application logging for blob storage can only use storage accounts in the same region as the App Service
- To enable application logging for Windows apps in the [Azure portal](https://portal.azure.com), navigate to your app and select **App Service logs**. Select **On** for either **Application Logging (Filesystem)** or **Application Logging (Blob)**, or both. The **Filesystem** option is for temporary debugging purposes, and turns itself off in 12 hours. The **Blob** option is for long-term logging, and needs a blob storage container to write logs to. The **Blob** option also includes additional information in the log messages, such as the ID of the origin VM instance of the log message (`InstanceId`), thread ID (`Tid`), and a more granular timestamp ([`EventTickCount`](/dotnet/api/system.datetime.ticks)).
+> [!NOTE]
+> If your Azure Storage account is secured by firewall rules, see [Networking considerations](#networking-considerations).
+ > [!NOTE] > Currently only .NET application logs can be written to the blob storage. Java, PHP, Node.js, Python application logs can only be stored on the App Service file system (without code modifications to write logs to external storage). >
To enable web server logging for Windows apps in the [Azure portal](https://port
For **Web server logging**, select **Storage** to store logs on blob storage, or **File System** to store logs on the App Service file system.
+> [!NOTE]
+> If your Azure Storage account is secured by firewall rules, see [Networking considerations](#networking-considerations).
+ In **Retention Period (Days)**, set the number of days the logs should be retained. > [!NOTE]
The following table shows the supported log types and descriptions:
<sup>3</sup> AppServiceAntivirusScanAuditLogs log type is currently in Preview
+## Networking considerations
+
+If you secure your Azure Storage account by [only allowing selected networks](../storage/common/storage-network-security.md#change-the-default-network-access-rule), it can receive logs from App Service only if both of the following are true:
+
+- The Azure Storage account is in a different Azure region from the App Service app.
+- All outbound addresses of the App Service app are [added to the Storage account's firewall rules](../storage/common/storage-network-security.md#managing-ip-network-rules). To find the outbound addresses for your app, see [Find outbound IPs](overview-inbound-outbound-ips.md#find-outbound-ips).
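+
+For example, the following Azure PowerShell sketch adds each outbound IP address of the app to the storage account's firewall. The app, storage account, and resource group names are placeholders.
+
+```azurepowershell
+# Sketch: allow the app's outbound IP addresses through the storage account firewall.
+$app = Get-AzWebApp -ResourceGroupName "myAppResourceGroup" -Name "my-app"
+
+# OutboundIpAddresses is a comma-separated string of the app's current outbound IPs.
+foreach ($ip in $app.OutboundIpAddresses -split ",") {
+    Add-AzStorageAccountNetworkRule -ResourceGroupName "myLogsResourceGroup" `
+        -Name "mylogstorageaccount" -IPAddressOrRange $ip
+}
+```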
+ ## <a name="nextsteps"></a> Next steps * [Query logs with Azure Monitor](../azure-monitor/logs/log-query-overview.md) * [How to Monitor Azure App Service](web-sites-monitor.md)
automation Automation Windows Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-windows-hrw-install.md
To install and configure a Windows user Hybrid Runbook Worker, you can use one o
* Use a provided PowerShell script to completely [automate](#automated-deployment) the process of configuring one or more Windows machines. This is the recommended method for machines in your datacenter or another cloud environment. * Manually import the Automation solution, install the Log Analytics agent for Windows, and configure the worker role on the machine.
+* An agent-based Hybrid Runbook Worker uses the MMA proxy settings. You must pass the proxy settings when you install the Log Analytics extension (MMA); these settings are stored in the MMA configuration (registry) on the VM.
## Automated deployment
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/extension-based-hybrid-runbook-worker-install.md
After you successfully deploy a runbook worker, review [Run runbooks on a Hybrid
If you use a proxy server for communication between Azure Automation and machines running the extension-based Hybrid Runbook Worker, ensure that the appropriate resources are accessible. The timeout for requests from the Hybrid Runbook Worker and Automation services is 30 seconds. After three attempts, a request fails.
+> [!NOTE]
+> You can set up the proxy settings either by PowerShell cmdlets or API.
+
+**Proxy server settings**
+# [Windows](#tab/windows)
+
+```azurepowershell
+$settings = @{
+ "AutomationAccountURL" = "<registrationurl>/<subscription-id>";
+ "ProxySettings" = @{
+ "ProxyServer" = "<ipaddress>:<port>";
+ "UserName"="test";
+ }
+};
+$protectedsettings = @{
+    "ProxyPassword" = "password";
+};
+```
+
+# [Linux](#tab/linux)
+```azurepowershell
+$protectedsettings = @{
+ "Proxy_URL"="http://username:password@<IP Address>"
+};
+$settings = @{
+ "AutomationAccountURL" = "<registration-url>/<subscription-id>";
+};
+```
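+
+A sketch of applying these settings with the Az PowerShell module is shown below. The extension name, publisher, type, and version are assumptions for illustration; confirm the correct values for your platform before using them.
+
+```azurepowershell
+# Sketch: install the Hybrid Runbook Worker VM extension with the proxy settings defined above.
+# The publisher, type, and version values are assumed; verify them for your environment.
+Set-AzVMExtension -ResourceGroupName "myResourceGroup" -VMName "myVM" -Location "westeurope" `
+    -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" `
+    -ExtensionType "HybridWorkerForWindows" -TypeHandlerVersion "1.1" `
+    -Settings $settings -ProtectedSettings $protectedsettings
+```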
++ ### Firewall use If you use a firewall to restrict access to the Internet, you must configure the firewall to permit access. The following port and URLs are required for the Hybrid Runbook Worker, and for [Automation State Configuration](./automation-dsc-overview.md) to communicate with Azure Automation.
automation Remove Node And Configuration Package https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/state-configuration/remove-node-and-configuration-package.md
# How to remove a configuration and node from Automation State Configuration
-This article covers how to unregister a node managed by Automation State Configuration, and safely remove a PowerShell Desired State Configuration (DSC) configuration from managed nodes. For both Windows and Linux nodes, you need to [unregister the node](#unregister-a-node) and [delete the configuration](#delete-a-configuration-from-the-azure-portal). For Linux nodes only, you can optionally delete the DSC packages from the nodes as well. See [Remove the DSC package from a Linux node](#remove-the-dsc-package-from-a-linux-node).
+This article covers how to unregister a node managed by Automation State Configuration, and safely remove a PowerShell Desired State Configuration (DSC) configuration from managed nodes. For both Windows and Linux nodes, you need to [unregister the node](#unregister-a-node) and [delete a configuration from the node](#delete-a-configuration-from-the-node). For Linux nodes only, you can optionally delete the DSC packages from the nodes as well. See [Remove the DSC package from a Linux node](#remove-the-dsc-package-from-a-linux-node).
## Unregister a node
-If you no longer want a node to be managed by State Configuration (DSC), you can unregister it from the Azure portal or with Azure PowerShell using the following steps.
+>[!NOTE]
+> Unregistering a node from the service only sets the Local Configuration Manager settings so that the node no longer connects to the service. This doesn't affect the configuration that's currently applied to the node, and leaves the related files in place on the node. After you unregister or delete the node, to re-register it, first clear the existing configuration files. See [Delete a configuration from the node](#delete-a-configuration-from-the-node).
-Unregistering a node from the service only sets the Local Configuration Manager settings so the node is no longer connecting to the service. This does not effect the configuration that's currently applied to the node, and leaves the related files in place on the node. You can optionally clean up those files. See [Delete a configuration](#delete-a-configuration).
+If you no longer want a node to be managed by State Configuration (DSC), you can unregister it from the Azure portal or with Azure PowerShell using the following steps.
-### Unregister in the Azure portal
+ # [Azure portal](#tab/azureportal)
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for and select **Automation Accounts**.
Unregistering a node from the service only sets the Local Configuration Manager
:::image type="content" source="./media/remove-node-and-configuration-package/unregister-node.png" alt-text="Screenshot of the Node details page highlighting the Unregister button." lightbox="./media/remove-node-and-configuration-package/unregister-node.png":::
-### Unregister using PowerShell
+# [Azure PowerShell](#tab/powershell)
You can also unregister a node using the PowerShell cmdlet [Unregister-AzAutomationDscNode](/powershell/module/az.automation/unregister-azautomationdscnode). >[!NOTE]
->If your organization still uses the deprecated AzureRM modules, you can use [Unregister-AzureRmAutomationDscNode](/powershell/module/azurerm.automation/unregister-azurermautomationdscnode).
+> If your organization still uses the deprecated AzureRM modules, you can use [Unregister-AzureRmAutomationDscNode](/powershell/module/azurerm.automation/unregister-azurermautomationdscnode).
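+
+For example, a minimal sketch using `Unregister-AzAutomationDscNode` (the resource group, Automation account, and node names are placeholders):
+
+```azurepowershell
+# Sketch: look up the node's ID and unregister it from State Configuration.
+$node = Get-AzAutomationDscNode -ResourceGroupName "myResourceGroup" `
+    -AutomationAccountName "myAutomationAccount" -Name "myNode"
+
+Unregister-AzAutomationDscNode -ResourceGroupName "myResourceGroup" `
+    -AutomationAccountName "myAutomationAccount" -Id $node.Id
+```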
+++
-## Delete a configuration
+## Delete a configuration from the node
-When you're ready to remove an imported DSC configuration document (which is a Managed Object Format (MOF) or .mof file) that's assigned to one or more nodes, follow these steps.
+When you're ready to remove an imported DSC configuration document (which is a Managed Object Format (MOF) or .mof file) that's assigned to one or more nodes, follow either of these steps.
-### Delete a configuration from the Azure portal
+# [Azure portal](#tab/delete-azureportal)
You can delete configurations for both Windows and Linux nodes from the Azure portal.
You can delete configurations for both Windows and Linux nodes from the Azure po
:::image type="content" source="./media/remove-node-and-configuration-package/delete-extension.png" alt-text="Screenshot of deleting an extension." lightbox="./media/remove-node-and-configuration-package/delete-extension.png":::
-## Manually delete a configuration file from a node
+# [Manual Deletion](#tab/manual-delete-azureportal)
-If you don't want to use the Azure portal, you can manually delete the .mof configuration files as follows.
+To manually delete the .mof configuration files, follow these steps:
-### Delete a Windows configuration using PowerShell
+**Delete a Windows configuration using PowerShell**
To remove an imported DSC configuration document (.mof), use the [Remove-DscConfigurationDocument](/powershell/module/psdesiredstateconfiguration/remove-dscconfigurationdocument) cmdlet.
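+
+For example, a minimal sketch run locally on the node clears the applied and pending configuration documents:
+
+```azurepowershell
+# Sketch: remove the current and pending configuration documents from the local node.
+Remove-DscConfigurationDocument -Stage Current -Verbose
+Remove-DscConfigurationDocument -Stage Pending -Verbose
+```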
-### Delete a Linux configuration
+**Delete a Linux configuration**
The configuration files are stored in /etc/opt/omi/conf/dsc/configuration/. Remove the .mof files in this directory to delete the node's configuration. +++
+## Re-register a node
+You can re-register a node just as you registered it initially, using any of the methods described in [Enable Azure Automation State Configuration](/azure/automation/automation-dsc-onboarding).
++ ## Remove the DSC package from a Linux node This step is optional. Unregistering a Linux node from State Configuration (DSC) doesn't remove the DSC and OMI packages from the machine. Use the commands below to remove the packages as well as all logs and related data.
azure-arc Conceptual Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/platform/conceptual-custom-locations.md
For example, a cluster operator can create a custom location **Contoso-Michigan-
On Arc-enabled Kubernetes clusters, Custom Locations represents an abstraction of a namespace within the Azure Arc-enabled Kubernetes cluster. Custom Locations creates the granular [RoleBindings and ClusterRoleBindings](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) necessary for other Azure services to access the cluster. These other Azure services require cluster access to manage resources you want to deploy on your clusters.
+> [!IMPORTANT]
+> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
+ ## Architecture for Arc-enabled Kubernetes When an administrator enables the Custom Locations feature on the cluster, a ClusterRoleBinding is created on the cluster, authorizing the Azure AD application used by the Custom Locations Resource Provider (RP). Once authorized, Custom Locations RP can create ClusterRoleBindings or RoleBindings needed by other Azure RPs to create custom resources on this cluster. The cluster extensions installed on the cluster determines the list of RPs to authorize.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/resource-bridge/overview.md
Azure Arc resource bridge (preview) is part of the core Azure Arc platform, and
All management operations are performed from Azure, no local configuration is required on the appliance.
+> [!IMPORTANT]
+> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
+ ## Overview Azure resource bridge (preview) hosts other components such as Custom Locations, cluster extensions, and other Azure Arc agents in order to deliver the level of functionality with the private cloud infrastructures it supports. This complex system is composed of three layers:
azure-arc Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/resource-bridge/security-overview.md
Last updated 11/08/2021
This article describes the security configuration and considerations you should evaluate before deploying Azure Arc resource bridge (preview) in your enterprise.
+> [!IMPORTANT]
+> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
+ ## Using a managed identity By default, an Azure Active Directory system-assigned [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) is created and assigned to the Azure Arc resource bridge (preview). Azure Arc resource bridge (preview) currently supports only a system-assigned identity. The `clusteridentityoperator` identity initiates the first outbound communication and fetches the Managed Service Identity (MSI) certificate used by other agents for communication with Azure.
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
This article provides information on troubleshooting and resolving issues that may occur while attempting to deploy, use, or remove the Azure Arc resource bridge (preview). The resource bridge is a packaged virtual machine, which hosts a *management* Kubernetes cluster. For general information, see [Azure Arc resource bridge (preview) overview](./overview.md).
+> [!IMPORTANT]
+> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
+ ## Logs For any issues encountered with the Azure Arc resource bridge, you can collect logs for further investigation. To collect the logs, use the Azure CLI [Az arcappliance log](placeholder for published ref API) command. This command needs to be run from the client machine where you've deployed the Azure Arc resource bridge.
azure-arc Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/security-overview.md
Last updated 08/30/2021
This article describes the security configuration and considerations you should evaluate before deploying Azure Arc-enabled servers in your enterprise.
+> [!IMPORTANT]
+> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
+ ## Identity and access control Each Azure Arc-enabled server has a managed identity as part of a resource group inside an Azure subscription. That identity represents the server running on-premises or other cloud environment. Access to this resource is controlled by standard [Azure role-based access control](../../role-based-access-control/overview.md). From the [**Access Control (IAM)**](../../role-based-access-control/role-assignments-portal.md) page in the Azure portal, you can verify who has access to your Azure Arc-enabled server.
azure-arc Browse And Enable Vcenter Resources In Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/vmware-vsphere/browse-and-enable-vcenter-resources-in-azure.md
After you've connected your VMware vCenter to Azure, you'll represent it in Azur
You can visit the VMware vCenter blade in Azure arc to view all the connected vCenters. From here, you'll browse your virtual machines (VMs), resource pools, templates, and networks. From the inventory of your vCenter resources, you can select and enable one or more resources in Azure. When you enable a vCenter resource in Azure, it creates an Azure resource that represents your vCenter resource. You can use this Azure resource to assign permissions or conduct management operations.
+> [!IMPORTANT]
+> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
+ ## Create a representation of VMware resources in Azure In this section, you'll enable resource pools, networks, and VM templates in Azure.
azure-arc Manage Access To Arc Vmware Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/vmware-vsphere/manage-access-to-arc-vmware-resources.md
Once your VMware vCenter resources have been enabled for access through Azure, t
This article describes how to use custom roles to manage granular access to VMware resources through Azure.
+> [!IMPORTANT]
+> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
+ ## Arc enabled VMware vSphere custom roles You can select from three custom roles to meet your RBAC needs. You can apply these roles to a whole subscription, resource group, or a single resource.
azure-arc Manage Vmware Vms In Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md
You can do various operations on the VMware VMs that are enabled by Azure Arc, s
For more information, such as benefits and capabilities, see [VM extension management with Azure Arc-enabled servers](../servers/manage-vm-extensions.md).
+> [!IMPORTANT]
+> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
+ ## Supported extensions and management services ### Windows extensions
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/vmware-vsphere/overview.md
Arc-enabled VMware vSphere allows you to:
- Conduct governance and monitoring operations across Azure and VMware VMs by enabling guest management (installing the [Azure Arc-enabled servers Connected Machine agent](../servers/agent-overview.md)).
+> [!IMPORTANT]
+> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
+ ## How does it work? To deliver this experience, you need to deploy the [Azure Arc resource bridge](../resource-bridge/overview.md) (preview), which is a virtual appliance, in your vSphere environment. It connects your vCenter Server to Azure. Azure Arc resource bridge (preview) enables you to represent the VMware resources in Azure and do various operations on them.
azure-arc Quick Start Connect Vcenter To Arc Using Script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md
Before using the Azure Arc-enabled VMware vSphere features, you'll need to conne
First, the script deploys a lightweight Azure Arc appliance, called [Azure Arc resource bridge](../resource-bridge/overview.md) (preview), as a virtual machine running in your vCenter environment. Then, it installs a VMware cluster extension to provide a continuous connection between your vCenter Server and Azure Arc.
+> [!IMPORTANT]
+> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
+ ## Prerequisites ### Azure
azure-arc Quick Start Create A Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/vmware-vsphere/quick-start-create-a-vm.md
Last updated 09/29/2021
Once your administrator has connected a VMware vCenter to Azure, represented VMware vCenter resources in Azure, and provided you permissions on those resources, you'll create a virtual machine.
+> [!IMPORTANT]
+> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
+ ## Prerequisites - An Azure subscription and resource group where you have an Arc VMware VM contributor role.
azure-cache-for-redis Cache Best Practices Development https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-best-practices-development.md
While you can connect from outside of Azure, it is not recommended *especially w
Azure Cache for Redis requires TLS encrypted communications by default. TLS versions 1.0, 1.1 and 1.2 are currently supported. However, TLS 1.0 and 1.1 are on a path to deprecation industry-wide, so use TLS 1.2 if at all possible.
-If your client library or tool doesn't support TLS, then enabling unencrypted connections is possible through the [Azure portal](cache-configure.md#access-ports) or [management APIs](/rest/api/redis/redis/update). In cases where encrypted connections aren't possible, we recommend placing your cache and client application into a virtual network. For more information about which ports are used in the virtual network cache scenario, see this [table](cache-how-to-premium-vnet.md#outbound-port-requirements).
+If your client library or tool doesn't support TLS, then enabling unencrypted connections is possible through the [Azure portal](cache-configure.md#access-ports) or [management APIs](/rest/api/redis/2021-06-01/redis/update). In cases where encrypted connections aren't possible, we recommend placing your cache and client application into a virtual network. For more information about which ports are used in the virtual network cache scenario, see this [table](cache-how-to-premium-vnet.md#outbound-port-requirements).
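+
+For example, the following Azure PowerShell sketch (cache and resource group names are placeholders) enforces TLS 1.2, and, only if your client truly can't use TLS, enables the unencrypted port:
+
+```azurepowershell
+# Sketch: require TLS 1.2 on the cache (recommended).
+Set-AzRedisCache -ResourceGroupName "myResourceGroup" -Name "myCache" -MinimumTlsVersion "1.2"
+
+# Only if your client library or tool can't use TLS: enable the non-TLS port.
+Set-AzRedisCache -ResourceGroupName "myResourceGroup" -Name "myCache" -EnableNonSslPort $true
+```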
## Client library-specific guidance
azure-functions Create First Function Cli Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-python.md
Use the following commands to create these items. Both Azure CLI and PowerShell
# [Azure CLI](#tab/azure-cli) ```azurecli
- az functionapp create --consumption-plan-location westeurope --runtime python --runtime-version 3.8 --functions-version 3 --name <APP_NAME> --os-type linux
+ az functionapp create --consumption-plan-location westeurope --runtime python --runtime-version 3.8 --functions-version 3 --name <APP_NAME> --os-type linux --storage-account <STORAGE_NAME>
``` The [az functionapp create](/cli/azure/functionapp#az_functionapp_create) command creates the function app in Azure. If you are using Python 3.7 or 3.6, change `--runtime-version` to `3.7` or `3.6`, respectively. You must supply `--os-type linux` because Python functions can't run on Windows, which is the default.
azure-functions Functions Bindings Rabbitmq Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-rabbitmq-trigger.md
The default message type is [RabbitMQ Event](https://rabbitmq.github.io/rabbitmq
* `An object serializable as JSON` - The message is delivered as a valid JSON string. * `string` * `byte[]`
-* `POCO` - The message is formatted as a C# object. For a complete example, see C# [example](#example).
+* `POCO` - The message is formatted as a C# object. For complete code, see C# [example](#example).
# [C# Script](#tab/csharp-script)
azure-functions Functions Bindings Timer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-timer.md
The timer trigger is provided in the [Microsoft.Azure.WebJobs.Extensions](https:
# [C#](#tab/csharp)
-The following example shows a [C# function](functions-dotnet-class-library.md) that is executed each time the minutes have a value divisible by five (eg if the function starts at 18:57:00, the next performance will be at 19:00:00). The [`TimerInfo`](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/TimerInfo.cs) object is passed into the function.
+The following example shows a [C# function](functions-dotnet-class-library.md) that is executed each time the minutes have a value divisible by five (eg if the function starts at 18:55:00, the next performance will be at 19:00:00). The [`TimerInfo`](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/TimerInfo.cs) object is passed into the function.
```cs [FunctionName("TimerTriggerCSharp")]
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[CodeLynx, LLC](http://www.codelynx.com/)| |[Columbus US, Inc.](https://www.columbusglobal.com)| |[Competitive Innovations, LLC](https://www.cillc.com)|
-|[Computer Professionals International](http://www.comproinc.com/)|
+|[Computer Professionals International](https://cb20.com/)|
|[Computer Solutions Inc.](http://cs-inc.co/)| |[Computex Technology Solutions](http://www.computex-inc.com/)| |[ConvergeOne](https://www.convergeone.com)|
azure-percept How To Troubleshoot Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-troubleshoot-setup.md
Refer to the table below for workarounds to common issues found during the [Azur
|Issue|Reason|Workaround| |:--|:|:-|
+|The Azure Percept DK Wi-Fi access point passphrase/password doesn't work| We have heard reports that some welcome cards may have an incorrect passphrase/password printed.|To retrieve the Wi-Fi SoftAP password of your Percept dev kit, you must connect and use an Ethernet cable. Once the cable is attached and the device is powered on, you'll need to find the IP address that was assigned to your dev kit. In "home" situations, you may be able to log in to your home router to get this info. Look for an ASUS device named "apdk-xxxxxxx". The article [Connect to Azure Percept DK over Ethernet](./how-to-connect-over-ethernet.md) can guide you if you're not able to get the IP from the router. Once you have the Ethernet IP, start a web browser and manually copy and paste this address (for example, `http://192.168.0.222`) to go to the onboarding experience. <ul><li>Don't go through the full setup just yet.</li><li>Set up Wi-Fi and create your SSH user and pause there (you can leave that window open and complete setup after you get the SoftAP password).</li><li>Open PuTTY or an SSH client and connect to the dev kit using the user/password you just created.</li><li>Run `sudo tpm2_handle2psk 0x81000009`. The output from this command will be your password for the SoftAP. Please write it down on the card.</li></ul>
|When connecting to the Azure account sign-up pages or to the Azure portal, you may automatically sign in with a cached account. If you don't sign in with the correct account, it may result in an experience that is inconsistent with the documentation.|The result of a browser setting to "remember" an account you have previously used.|From the Azure page, select on your account name in the upper right corner and select **sign out**. You can then sign in with the correct account.| |The Azure Percept DK Wi-Fi access point (apd-xxxx) doesn't appear in the list of available Wi-Fi networks.|It's usually a temporary issue that resolves within 15 minutes.|Wait for the network to appear. If it doesn't appear after more than 15 minutes, reboot the device.| |The connection to the Azure Percept DK Wi-Fi access point frequently disconnects.|It's usually because of a poor connection between the device and the host computer. It can also be caused by interference from other Wi-Fi connections on the host computer.|Make sure that the antennas are properly attached to the dev kit. If the dev kit is far away from the host computer, try moving it closer. Turn off any other internet connections such as LTE/5G if they're running on the host computer.|
azure-relay Relay Authentication And Authorization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-relay/relay-authentication-and-authorization.md
Last updated 07/19/2021
# Azure Relay authentication and authorization
-There are two ways to authenticate and authorize access to Azure Relay resources: Azure Activity Directory (Azure AD) and Shared Access Signatures (SAS). This article gives you details on using these two types of security mechanisms.
+There are two ways to authenticate and authorize access to Azure Relay resources: Azure Active Directory (Azure AD) and Shared Access Signatures (SAS). This article gives you details on using these two types of security mechanisms.
## Azure Active Directory (Preview) Azure AD integration for Azure Relay resources provides Azure role-based access control (Azure RBAC) for fine-grained control over a clientΓÇÖs access to resources. You can use Azure RBAC to grant permissions to a security principal, which may be a user, a group, or an application service principal. The security principal is authenticated by Azure AD to return an OAuth 2.0 token. The token can be used to authorize a request to access an Azure Relay resource.
SAS authentication support for Azure Relay is included in the Azure .NET SDK ver
- See the [Azure Relay Hybrid Connections protocol guide](relay-hybrid-connections-protocol.md) for detailed information about the Hybrid Connections capability. - For corresponding information about Service Bus Messaging authentication and authorization, see [Service Bus authentication and authorization](../service-bus-messaging/service-bus-authentication-and-authorization.md).
-[0]: ./media/relay-authentication-and-authorization/hcanon.png
+[0]: ./media/relay-authentication-and-authorization/hcanon.png
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/template-specs.md
Title: Create & deploy template specs in Bicep description: Describes how to create template specs in Bicep and share them with other users in your organization. Previously updated : 01/07/2022 Last updated : 01/12/2022 # Azure Resource Manager template specs in Bicep
The JSON template embedded in the Bicep file needs to make these changes:
* To access the parameters and variables defined in the Bicep file, you can directly use the parameter names and the variable names. To access the parameters and variables defined in `mainTemplate`, you still need to use the ARM JSON template syntax. For example, **'name': '[parameters(&#92;'storageAccountType&#92;')]'**. * Use the Bicep syntax to call Bicep functions. For example, **'location': resourceGroup().location**.
+The size of a template spec is limited to approximately 2 MB. If a template spec size exceeds the limit, you get the **TemplateSpecTooLarge** error code. The error message says:
+
+```error
+The size of the template spec content exceeds the maximum limit. For large template specs with many artifacts, the recommended course of action is to split it into multiple template specs and reference them modularly via TemplateLinks.
+```
+ You can view all template specs in your subscription by using: # [PowerShell](#tab/azure-powershell)
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-specs.md
Title: Create & deploy template specs description: Describes how to create template specs and share them with other users in your organization. Previously updated : 01/07/2022 Last updated : 01/12/2022
You can also create template specs by using ARM templates. The following templat
} ```
+The size of a template spec is limited to approximately 2 MB. If a template spec size exceeds the limit, you get the **TemplateSpecTooLarge** error code. The error message says:
+
+```error
+The size of the template spec content exceeds the maximum limit. For large template specs with many artifacts, the recommended course of action is to split it into multiple template specs and reference them modularly via TemplateLinks.
+```
+ You can view all template specs in your subscription by using: # [PowerShell](#tab/azure-powershell)
azure-resource-manager Common Deployment Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/troubleshooting/common-deployment-errors.md
Title: Troubleshoot common Azure deployment errors
description: Describes common errors for Azure resources deployed with Azure Resource Manager templates (ARM templates) or Bicep files. tags: top-support-issue Previously updated : 01/10/2022 Last updated : 01/13/2022
If your error code isn't listed, submit a GitHub issue. On the right side of the
| PrivateIPAddressInReservedRange | The specified IP address includes an address range required by Azure. Change IP address to avoid reserved range. | [Private IP addresses](../../virtual-network/ip-services/private-ip-addresses.md) | PrivateIPAddressNotInSubnet | The specified IP address is outside of the subnet range. Change IP address to fall within subnet range. | [Private IP addresses](../../virtual-network/ip-services/private-ip-addresses.md) | | PropertyChangeNotAllowed | Some properties can't be changed on a deployed resource. When updating a resource, limit your changes to permitted properties. | [Update resource](/azure/architecture/guide/azure-resource-manager/advanced-templates/update-resource) |
+| RegionDoesNotAllowProvisioning | Select a different region or submit a quota support request for **Region access**. | |
| RequestDisallowedByPolicy | Your subscription includes a resource policy that prevents an action you're trying to do during deployment. Find the policy that blocks the action. If possible, change your deployment to meet the limitations from the policy. | [Resolve policies](error-policy-requestdisallowedbypolicy.md) | | ReservedResourceName | Provide a resource name that doesn't include a reserved name. | [Reserved resource names](error-reserved-resource-name.md) | | ResourceGroupBeingDeleted | Wait for deletion to complete. | |
azure-video-analyzer Pipeline Topologies List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/pipeline-topologies-list.md
+
+ Title: List of pipeline topologies
+description: This article lists validated sample pipeline topologies for Azure Video Analyzer.
+ Last updated : 01/12/2022++
+# List of pipeline topologies
+
+The following tables list validated sample Azure Video Analyzer [live pipeline topologies](terminology.md#pipeline-topology). These topologies can be further customized according to solution needs. The tables also provide
+
+* A short description,
+* Topology's corresponding sample tutorial(s), and
+* The corresponding pipeline topology name of the Visual Studio Code (VSCode) [Video Analyzer extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.azure-video-analyzer).
+
+Selecting a topology name opens the corresponding JSON file in [this GitHub folder](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/), selecting a sample opens the corresponding sample document, and selecting a VSCode name opens a screenshot of the sample topology.
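+
+As a small illustration of how you might pull one of these topologies for local customization, the following PowerShell sketch downloads the continuous video recording topology (the raw URL is derived from the GitHub path above) and inspects it:
+
+```azurepowershell
+# Sketch: download a sample topology JSON so it can be customized locally.
+$uri = "https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/cvr-video-sink/topology.json"
+$topology = Invoke-RestMethod -Uri $uri
+
+# Inspect the topology name and its node types before editing and redeploying it.
+$topology.name
+$topology.properties.sources.'@type'
+$topology.properties.sinks.'@type'
+```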
+
+## Live pipeline topologies
+
+### Continuous video recording
+
+Name | Description | Samples | VSCode Name
+:-- | :- | :- | :
+[cvr-video-sink](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/cvr-video-sink/topology.json) | Perform continuous video recording (CVR). Capture video and continuously record it to an Azure Video Analyzer video. | [Continuous video recording and playback](edge/use-continuous-video-recording.md) | [Record to Video Analyzer video](./visual-studio-code-extension.md#record-to-video-analyzer-video)
+[cvr-with-grpcExtension](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/cvr-with-grpcExtension/topology.json) | Perform CVR. A subset of the video frames is sent to an external AI inference engine using the sharedMemory mode for data transfer via the gRPC extension. The results are then published to the IoT Edge Hub. | | [Record using gRPC Extension](./visual-studio-code-extension.md#record-using-grpc-extension)
+[cvr-with-httpExtension](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/cvr-with-httpExtension/topology.json) | Perform CVR. A subset of the video frames is sent to an external AI inference engine via the HTTP extension. The results are then published to the IoT Edge Hub. | | [Record using HTTP Extension](./visual-studio-code-extension.md#record-using-http-extension)
+[cvr-with-httpExtension-and-objectTracking](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/cvr-with-httpExtension-and-objectTracking/topology.json) | Perform CVR and track objects in a live feed. Inference metadata from an external AI inference engine is published to the IoT Edge Hub, and can be played back with the video. | [Record and stream inference metadata with video](edge/record-stream-inference-data-with-video.md) |
+[cvr-with-motion](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/cvr-with-motion/topology.json) | Perform CVR. When motion is detected from a live video feed, relevant inferencing events are published to the IoT Edge Hub. | | [Record on motion detection](./visual-studio-code-extension.md#record-on-motion-detection)
+[audio-video](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/audio-video/topology.json) | Perform CVR and record audio using the outputSelectors property. | | [Record audio with video](./visual-studio-code-extension.md#record-audio-with-video)
+
+### Event-based video recording
+
+Name | Description | Samples | VSCode Name
+:-- | :- | :- | :
+[evr-grpcExtension-video-sink](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/evr-grpcExtension-video-sink/topology.json) | When an event of interest is detected by the external AI inference engine via the gRPC extension, those events are published to the IoT Edge Hub. The events are used to trigger the signal gate processor node that results in the appending of new clips to the Azure Video Analyzer video, corresponding to when the event of interest was detected. | [Develop and deploy gRPC inference server](edge/develop-deploy-grpc-inference-srv.md) | [Record using gRPC Extension](./visual-studio-code-extension.md#record-using-grpc-extension-1)
+[evr-httpExtension-video-sink](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/evr-httpExtension-video-sink/topology.json) | When an event of interest is detected by the external AI inference engine via the HTTP extension, those events are published to the IoT Edge Hub. The events are used to trigger the signal gate processor node that results in the appending of new clips to the Azure Video Analyzer video, corresponding to when the event of interest was detected. | | [Record using HTTP Extension](./visual-studio-code-extension.md#record-using-http-extension-1)
+[evr-hubMessage-video-sink](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/evr-hubMessage-video-sink/topology.json) | Use an object detection AI model to look for objects in the video, and record video clips only when a certain type of object is detected. The trigger to generate these clips is based on the AI inference events published onto the IoT Hub. | [Event-based video recording and playback](edge/record-event-based-live-video.md)| [Record to Video Analyzer video based on inference events](./visual-studio-code-extension.md#record-to-video-analyzer-video-based-on-inference-events)
+[evr-hubMessage-file-sink](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/evr-hubMessage-file-sink/topology.json) | Record video clips to the local file system of the edge device whenever an external sensor sends a message to the pipeline topology. For example, the sensor can be a door sensor. | | [Record to local files based on inference events](./visual-studio-code-extension.md#record-to-local-files-based-on-inference-events)
+[evr-motion-video-sink-file-sink](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/evr-motion-video-sink-file-sink/topology.json) | Perform event-based recording of video clips to the cloud and to the edge. When motion is detected from a live video feed, events are sent to a signal gate processor node that opens, allowing video to pass through to a file sink node and a video sink node. As a result, new files are created on the local file system of the Edge device, and new video clips are appended to your Video Analyzer video. The recordings contain the frames where motion was detected. | | [Record motion events to Video Analyzer video and local files](./visual-studio-code-extension.md#record-motion-events-to-video-analyzer-video-and-local-files)
+[evr-motion-video-sink](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/evr-motion-video-sink/topology.json) | When motion is detected, those events are published to the IoT Edge Hub. In addition, the motion events are used to trigger the signal gate processor node that will send frames to the video sink node when motion is detected. As a result, new video clips are appended to the Azure Video Analyzer video, corresponding to when motion was detected. | [Detect motion, record video to Video Analyzer](edge/detect-motion-record-video-clips-cloud.md) | [Record motion events to Video Analyzer video](./visual-studio-code-extension.md#record-motion-events-to-video-analyzer-video)
+[evr-motion-file-sink](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/evr-motion-file-sink/topology.json) | When motion is detected from a live video feed, events are sent to a signal gate processor node that opens, sending frames to a file sink node. As a result, new files are created on the local file system of the edge device, containing the frames where motion was detected. | [Detect motion and record video on edge devices](edge/detect-motion-record-video-edge-devices.md) | [Record motion events to local files](./visual-studio-code-extension.md#record-motion-events-to-local-files)
+
+### Motion detection
+
+Name | Description | Samples | VSCode Name
+:-- | :- | :- | :
+[motion-detection](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/motion-detection/topology.json) | Detect motion in a live video feed. When motion is detected, those events are published to the IoT Hub. | [Get started with Azure Video Analyzer](edge/get-started-detect-motion-emit-events.md), [Get started with Video Analyzer in the portal](edge/get-started-detect-motion-emit-events-portal.md), [Detect motion and emit events](edge/detect-motion-emit-events-quickstart.md) | [Publish motion events to IoT Hub](./visual-studio-code-extension.md#publish-motion-events-to-iot-hub)
+[motion-with-grpcExtension](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/motion-with-grpcExtension/topology.json) | Perform event-based recording in the presence of motion. When motion is detected from a live video feed, those events are published to the IoT Edge Hub. In addition, the motion events are used to trigger a signal gate processor node that will send frames to a video sink node only when motion is present. As a result, new video clips are appended to the Azure Video Analyzer video, corresponding to when motion was detected. Additionally, run video analytics only when motion is detected. Upon detecting motion, a subset of the video frames is sent to an external AI inference engine via the gRPC extension. The results are then published to the IoT Edge Hub. | [Analyze live video with your own model - gRPC](edge/analyze-live-video-use-your-model-grpc.md) | [Analyze motion events using gRPC Extension](./visual-studio-code-extension.md#analyze-motion-events-using-grpc-extension)
+[motion-with-httpExtension](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/motion-with-httpExtension/topology.json) | Perform event-based recording in the presence of motion. When motion is detected in a live video feed, those events are published to the IoT Edge Hub. In addition, the motion events are used to trigger a signal gate processor node that will send frames to a video sink node only when motion is present. As a result, new video clips are appended to the Azure Video Analyzer video, corresponding to when motion was detected. Additionally, run video analytics only when motion is detected. Upon detecting motion, a subset of the video frames is sent to an external AI inference engine via the HTTP extension. The results are then published to the IoT Edge Hub. | [Analyze live video with your own model - HTTP](edge/analyze-live-video-use-your-model-http.md#generate-and-deploy-the-iot-edge-deployment-manifest) | [Analyze motion events using HTTP Extension](./visual-studio-code-extension.md#analyze-motion-events-using-http-extension)
+
+### Extensions
+
+Name | Description | Samples | VSCode Name
+:-- | :- | :- | :
+[grpcExtensionOpenVINO](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/grpcExtensionOpenVINO/topology.json) | Run video analytics on a live video feed. The gRPC extension allows you to capture frames at video frame rate from the camera, convert them to images, and send them to the [OpenVINO™ DL Streamer - Edge AI Extension module](https://aka.ms/ava-intel-ovms) provided by Intel. The results are then published to the IoT Edge Hub. | [Analyze live video with Intel OpenVINO™ DL Streamer – Edge AI Extension](edge/use-intel-grpc-video-analytics-serving-tutorial.md) |
+[httpExtension](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/httpExtension/topology.json) | Run video analytics on a live video feed. A subset of the video frames from the camera are converted to images, and sent to an external AI inference engine. The results are then published to the IoT Edge Hub. | [Analyze live video with your own model - HTTP](edge/analyze-live-video-use-your-model-http.md), [Analyze live video with Azure Video Analyzer on IoT Edge and Azure Custom Vision](edge/analyze-live-video-custom-vision.md) | [Analyze video using HTTP Extension](./visual-studio-code-extension.md#analyze-video-using-http-extension)
+[httpExtensionOpenVINO](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/httpExtensionOpenVINO/topology.json) | Run video analytics on a live video feed. A subset of the video frames from the camera are converted to images, and sent to the [OpenVINO™ Model Server – AI Extension module](https://aka.ms/ava-intel-ovms) provided by Intel. The results are then published to the IoT Edge Hub. | [Analyze live video using OpenVINO™ Model Server – AI Extension from Intel](https://aka.ms/ava-intel-ovms-tutorial) | [Analyze video with Intel OpenVINO Model Server](./visual-studio-code-extension.md#analyze-video-with-intel-openvino-model-server)
+
+### Computer vision
+
+Name | Description | Samples | VSCode Name
+:-- | :- | :- | :
+[spatial-analysis/person-count-operation-topology](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/spatial-analysis/person-count-operation-topology.json) | Live video is sent to an external [spatialAnalysis](../../cognitive-services/computer-vision/spatial-analysis-operations.md) module that counts people in a designated zone. When the criteria defined by the AI operation is met, events are sent to a signal gate processor that opens, sending the frames to a video sink node. As a result, a new clip is appended to the Azure Video Analyzer video resource. | | [Person count operation with Computer Vision for Spatial Analysis](./visual-studio-code-extension.md#person-count-operation-with-computer-vision-for-spatial-analysis)
+[spatial-analysis/person-line-crossing-operation-topology](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/spatial-analysis/person-line-crossing-operation-topology.json) | Live video is sent to an external [spatialAnalysis](../../cognitive-services/computer-vision/spatial-analysis-operations.md) module that tracks when a person crosses a designated line. When the criteria defined by the AI operation is met, events are sent to a signal gate processor that opens, sending the frames to a video sink node. As a result, a new clip is appended to the Azure Video Analyzer video resource. | | [Person crossing line operation with Computer Vision for Spatial Analysis](./visual-studio-code-extension.md#person-crossing-line-operation-with-computer-vision-for-spatial-analysis)
+[spatial-analysis/person-zone-crossing-operation-topology](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/spatial-analysis/person-zone-crossing-operation-topology.json) | Live video is sent to an external [spatialAnalysis](../../cognitive-services/computer-vision/spatial-analysis-operations.md) module that emits an event when a person enters or exits a zone. When the criteria defined by the AI operation is met, events are sent to a signal gate processor that opens, sending the frames to a video sink node. As a result, a new clip is appended to the Azure Video Analyzer video resource. | [Live Video with Computer Vision for Spatial Analysis](https://aka.ms/ava-spatial-analysis) | [Person crossing zone operation with Computer Vision for Spatial Analysis](./visual-studio-code-extension.md#person-crossing-zone-operation-with-computer-vision-for-spatial-analysis)
+[spatial-analysis/person-distance-operation-topology](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/spatial-analysis/person-distance-operation-topology.json) | Live video is sent to an external [spatialAnalysis](../../cognitive-services/computer-vision/spatial-analysis-operations.md) module that tracks when people violate a distance rule. When the criteria defined by the AI operation is met, events are sent to a signal gate processor that opens, sending the frames to a video sink node. As a result, a new clip is appended to the Azure Video Analyzer video resource. | | [Person distance operation with Computer Vision for Spatial Analysis](./visual-studio-code-extension.md#person-distance-operation-with-computer-vision-for-spatial-analysis)
+[spatial-analysis/custom-operation-topology](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/spatial-analysis/custom-operation-topology.json) | Live video is sent to an external [spatialAnalysis](../../cognitive-services/computer-vision/spatial-analysis-operations.md) module that carries out a supported AI operation. When the criteria defined by the AI operation is met, events are sent to a signal gate processor that opens, sending the frames to a video sink node. As a result, a new clip is appended to the Azure Video Analyzer video resource. | | [Custom operation with Computer Vision for Spatial Analysis](./visual-studio-code-extension.md#custom-operation-with-computer-vision-for-spatial-analysis)
+
+### AI composition
+
+Name | Description | Samples | VSCode Name
+:-- | :- | :- | :
+[ai-composition](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/ai-composition/topology.json) | Run 2 AI inferencing models of your choice. In this example, classified video frames are sent from an AI inference engine using the [Tiny YOLOv3 model](https://github.com/Azure/video-analyzer/tree/main/edge-modules/extensions/yolo/tinyyolov3/grpc-cpu) to another engine using the [YOLOv3 model](https://github.com/Azure/video-analyzer/tree/main/edge-modules/extensions/yolo/yolov3/grpc-cpu). Having such a topology enables you to trigger a heavy AI module, only when a light AI module indicates a need to do so. | [Analyze live video streams with multiple AI models using AI composition](edge/analyze-ai-composition.md) | [Record to the Video Analyzer service using multiple AI models](./visual-studio-code-extension.md#record-to-the-video-analyzer-service-using-multiple-ai-models)
+
+### Miscellaneous
+
+Name | Description | Samples | VSCode Name
+:-- | :- | :- | :
+[object-tracking](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/object-tracking/topology.json) | Track objects in a live video feed. The object tracker comes in handy when you need to detect objects in every frame, but the edge device does not have the necessary compute power to be able to apply the vision model on every frame. | [Track objects in a live video](edge/track-objects-live-video.md) | [Record video based on the object tracking AI model](./visual-studio-code-extension.md#record-video-based-on-the-object-tracking-ai-model)
+[line-crossing](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/line-crossing/topology.json) | Use a computer vision model to detect objects in a subset of frames when they cross a virtual line in a live video feed. The object tracker node is used to track those objects in the frames and pass them through a line-crossing node. The line-crossing node comes in handy when you want to detect objects that cross the imaginary line and emit events. | [Detect when objects cross a virtual line in a live video](edge/use-line-crossing.md) | [Record video based on the line crossing AI model](./visual-studio-code-extension.md#record-video-based-on-the-line-crossing-ai-model)
+
+## Next steps
+
+[Understand Video Analyzer pipelines](pipeline.md).
azure-video-analyzer Quotas Limitations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/quotas-limitations.md
Video Analyzer only supports RTSP with [interleaved RTP streams](https://datatra
### Support for video AI The HTTP or gRPC extension processors only support sending image or video frame data to an external AI module; running inference on audio data is not supported. As a result, processor nodes in pipeline topologies that have an RTSP source node as one of their `inputs` also use an `outputSelectors` property to ensure that only video is passed into the processor. See this [topology](https://github.com/Azure/video-analyzer/blob/main/pipelines/live/topologies/evr-grpcExtension-video-sink/topology.json) as an example.
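To make the `outputSelectors` usage concrete, here is a minimal, trimmed sketch of an extension processor input that passes only video from an RTSP source. The node names (`rtspSource`, `httpExtension`) are illustrative assumptions; see the linked sample topology for a complete definition.

```json
{
  "@type": "#Microsoft.VideoAnalyzer.HttpExtension",
  "name": "httpExtension",
  "inputs": [
    {
      "nodeName": "rtspSource",
      "outputSelectors": [
        {
          "property": "mediaType",
          "operator": "is",
          "value": "video"
        }
      ]
    }
  ]
}
```

The selector filters the RTSP source output so that only the video stream, not audio or application data, reaches the extension processor.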
-## Quotas and limitations - live and batch pipeline
+## Quotas and limitations - cloud pipelines
This section enumerates the quotas and limitations of Video Analyzer cloud pipelines.
azure-video-analyzer Visual Studio Code Extension https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/visual-studio-code-extension.md
Title: Visual Studio Code extension
description: This reference article explains how to use the various pieces of functionality in the Visual Studio Code extension for Azure Video Analyzer. Previously updated : 11/04/2021 Last updated : 01/13/2022
If you have not set up the extension to connect to your edge device, follow the
## Managing pipelines topology
-To create a topology, along the left under your module right-click on `Pipelines topologies` and select `Create pipeline topology`. This will open up a new blank topology. You can then either load one of the pre-made topologies by selecting from the `Try sample topologies` dropdown at the top, or building one yourself.
+To create a topology, along the left panel under the Video Analyzer Edge module, right-click on `Pipelines topologies` and select `Create pipeline topology`. This will open up a new blank topology. Either load one of the pre-made topologies by selecting from the `Try sample topologies` dropdown at the top, or build one by dragging and dropping the available modules and connecting them.
-After all required areas are complete, you will need to save the topology with the `Save` in the top right. For sample topologies required field should be pre-filled. This will make it available for use with creating live pipelines.
+After all required fields are complete, save the topology with the `Save` button in the top right. For sample topologies, the required fields should be pre-filled. Saving makes the topology available for creating live pipelines.
-To edit an existing topology, on the left under Pipeline topologies right-click on the name of the topology, and select `Edit pipeline topology`.
+To edit an existing topology, on the left panel under `Pipeline topologies` right-click on the name of the topology, and select `Edit pipeline topology`.
-To delete an existing topology, on the left under Pipeline topologies right-click on the name of the topology, and select `Delete pipeline topology`. Live pipelines will need to be removed first.
+To delete an existing topology, on the left under `Pipeline topologies` right-click on the name of the topology, and select `Delete pipeline topology`. Live pipelines will need to be removed first.
-If you want to view the underlying JSON behind an existing topology, on the left under Pipeline topologies right-click on the name of the topology, and select `Show pipeline topology JSON`.
+To view the underlying JSON behind an existing topology, on the left panel under `Pipeline topologies` right-click on the name of the topology, and select `Show pipeline topology JSON`.
## Live pipelines
-To create a live pipeline, along the left under Pipeline topologies right-click on the name of the topology and select `Create live pipeline`. You will then need to fill in a live pipeline name, and any required parameters before continuing. In the top right you can then either click `Save` which will save it in an inactive state, or `Save and activate` which will start the live pipeline immediately.
+To create a live pipeline, along the left panel under `Pipeline topologies`, right-click on the name of the topology and select `Create live pipeline`. Fill in a live pipeline name and any required parameters before continuing. In the top right, either select `Save`, which saves the live pipeline in an inactive state, or `Save and activate`, which starts it immediately.
-To activate an existing live pipeline, along the left under Pipeline topologies right-click on the name of the live pipeline and select `Activate live pipeline`.
+To activate an existing live pipeline, along the left panel under `Pipeline topologies` right-click on the name of the live pipeline and select `Activate live pipeline`.
-To deactivate a running instance, along the left under Pipeline topologies right-click on the live pipeline and select `Deactivate live pipeline`. This will not delete the live pipeline.
+To deactivate a running instance, along the left panel under `Pipeline topologies` right-click on the live pipeline and select `Deactivate live pipeline`. This will not delete the live pipeline.
-To delete an existing live pipeline, along the left under Pipeline topologies right-click on the live pipeline and select `Delete live pipeline`. You cannot delete an active live pipeline.
+To delete an existing live pipeline, along the left panel under `Pipeline topologies` right-click on the live pipeline and select `Delete live pipeline`. Active live pipelines cannot be deleted.
-If you want to view the underlying JSON behind an existing live pipeline, on the left under Pipeline topologies right-click on the live pipeline and select `Show live pipeline JSON`.
+To view the underlying JSON behind an existing live pipeline, on the left panel under `Pipeline topologies` right-click on the live pipeline and select `Show live pipeline JSON`.
+
+## Remote device adapters
+
+To create a [remote device adapter](./cloud/connect-cameras-to-cloud.md#connect-via-a-remote-device-adapter), along the left panel under the Video Analyzer Edge module, right-click on `Remote device adapters` and select `Create remote device adapter`. Three dialog boxes appear, prompting for additional information:
+1. Enter a unique name for the remote device adapter. (There should be no other remote device adapters with this name.)
+2. Select an IoT device. (For instance, select the IoT device that represents the network camera to be connected.)
+3. Enter the hostname or IP address for the remote device adapter. (For instance, enter the IP address of the network camera to be connected.)
+
+After you enter all the necessary information, the remote device adapter is saved and listed under the `Remote device adapters` section.
+
+To delete an existing remote device adapter, along the left panel under `Remote device adapters` right-click on a remote device adapter and select `Delete remote device adapter`.
+
+To view the underlying JSON behind an existing remote device adapter, along the left panel under `Remote device adapters` right-click on a remote device adapter and select `Show remote device adapter JSON`.
## Editing a topology
As topologies are effectively templates that you may make multiple live pipeline
In the parameterization window, you can create a new parameter, which acts as a value you fill in when creating a live pipeline, or select an existing parameter. Creating a new parameter requires a name and a type. You can optionally enter a default value if the parameter only occasionally needs to change. When a live pipeline is created, only parameters without a default value are required. When you are done, click `Set`.
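For reference, here is a rough sketch of how a parameter declared this way appears in the underlying topology JSON. The parameter name `rtspUrl`, the default URL, and the source node are assumptions for illustration; the extension shows the real JSON via `Show pipeline topology JSON`.

```json
{
  "parameters": [
    {
      "name": "rtspUrl",
      "type": "String",
      "description": "RTSP URL of the camera",
      "default": "rtsp://contoso.example/stream"
    }
  ],
  "sources": [
    {
      "@type": "#Microsoft.VideoAnalyzer.RtspSource",
      "name": "rtspSource",
      "endpoint": {
        "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
        "url": "${rtspUrl}"
      }
    }
  ]
}
```

Parameters that have no `default` value are the ones you're prompted for when creating a live pipeline.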
-If you wish to manage your existing parameters, this can be done with the `Manage parameters` option along the top. The pane that comes allows you to add new parameters, and either edit or delete existing ones.
+To manage your existing parameters, use the `Manage parameters` option along the top. The pane that opens allows you to add new parameters, and to edit or delete existing ones.
### System variable
-When creating a series of live pipelines, there are likely cases where you want to use variables to help name files or outputs. For example, you may wish to name a video clip with the live pipeline name and date / time so you know where it came from and at what time. Video Analyzer provides three system variables you can use in your modules to help here.
+When creating a series of live pipelines, there are likely cases where you want to use variables to help name files or outputs. For example, you may wish to name a video clip with the live pipeline name and the date and time, so you know where it came from and when. Video Analyzer provides three system variables you can use in your modules for this purpose.
| System Variable | Description | Example | | : | :-- | :- |
When creating a series of live pipelines, there are likely cases where you want
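As a loose illustration, a sink node can embed these variables in its naming properties. The sketch below assumes variable names such as `System.PipelineName` and `System.Runtime.DateTime`, plus a node named `signalGateProcessor`; confirm the exact variable names in the table above.

```json
{
  "@type": "#Microsoft.VideoAnalyzer.FileSink",
  "name": "fileSink",
  "inputs": [
    {
      "nodeName": "signalGateProcessor"
    }
  ],
  "baseDirectoryPath": "/var/media",
  "fileNamePattern": "clip-${System.PipelineName}-${System.Runtime.DateTime}",
  "maximumSizeMiB": "512"
}
```

Each live pipeline created from the topology then writes files whose names include that pipeline's name and a timestamp.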
### Connections
-When you create a topology, you will need to connect the various modules together. This is done with connections. From the circle on the edge of a module, drag to the circle on the next module you want data to flow to. This will produce a connection.
+When you create a topology, you need to connect the various modules by using connections. Drag from the circle on the edge of one module to the circle on the module you want data to flow to. This produces a connection.
By default, connections send video data from one module to another. If you want to send only audio data or application data, left-click the connection and edit the output types. Selectable types of data include video, audio, and application. If you select none of the output types, all applicable data is sent from the sender node.+
+## Sample pipeline topologies
+
+The following sample pipeline topologies are available on the extension:
+
+### Continuous Video Recording
+
+#### Record to Video Analyzer video
+[ ![Screenshot of C-V-R-To-Video-Sink topology on Visual Studio Code.](./media/visual-studio-code-extension/cvr-to-video-sink.png) ](./media/visual-studio-code-extension/cvr-to-video-sink.png)
+
+#### Record using gRPC Extension
+[ ![Screenshot of C-V-R-With-G-r-p-c-Extension topology on Visual Studio Code.](./media/visual-studio-code-extension/cvr-with-grpc-extension.png) ](./media/visual-studio-code-extension/cvr-with-grpc-extension.png)
+
+#### Record using HTTP Extension
+[ ![Screenshot of C-V-R-With-H-t-t-p-Extension topology on Visual Studio Code.](./media/visual-studio-code-extension/cvr-with-http-extension.png) ](./media/visual-studio-code-extension/cvr-with-http-extension.png)
+
+#### Record on motion detection
+[ ![Screenshot of C-V-R-With-Motion-Detection topology on Visual Studio Code.](./media/visual-studio-code-extension/cvr-with-motion-detection.png) ](./media/visual-studio-code-extension/cvr-with-motion-detection.png)
+
+#### Record audio with video
+[ ![Screenshot of Audio-Video topology on Visual Studio Code.](./media/visual-studio-code-extension/audio-video.png) ](./media/visual-studio-code-extension/audio-video.png)
+
+### Event-based Video Recording
+
+#### Record using gRPC Extension
+[ ![Screenshot of E-V-R-toVideo-Sink-By-G-r-p-c-Extension topology on Visual Studio Code.](./media/visual-studio-code-extension/evr-to-video-sink-by-grpc-extension.png) ](./media/visual-studio-code-extension/evr-to-video-sink-by-grpc-extension.png)
+
+#### Record using HTTP Extension
+[ ![Screenshot of E-V-R-to-Video-Sink-By-H-t-t-p-Extension topology on Visual Studio Code.](./media/visual-studio-code-extension/evr-to-video-sink-by-http-extension.png) ](./media/visual-studio-code-extension/evr-to-video-sink-by-http-extension.png)
+
+#### Record to Video Analyzer video based on inference events
+[ ![Screenshot of E-V-R-to-Video-Sink-On-Obj-Detect topology on Visual Studio Code.](./media/visual-studio-code-extension/evr-to-video-sink-on-obj-detect.png) ](./media/visual-studio-code-extension/evr-to-video-sink-on-obj-detect.png)
+
+#### Record to local files based on inference events
+[ ![Screenshot of E-V-R-to-Files-Based-On-Hub-Messages topology on Visual Studio Code.](./media/visual-studio-code-extension/evr-to-files-based-on-hub-messages.png) ](./media/visual-studio-code-extension/evr-to-files-based-on-hub-messages.png)
+
+#### Record motion events to Video Analyzer video and local files
+[ ![Screenshot of E-V-R-to-Files-And-Video-Sink-On-Motion topology on Visual Studio Code.](./media/visual-studio-code-extension/evr-to-files-and-video-sink-on-motion.png) ](./media/visual-studio-code-extension/evr-to-files-and-video-sink-on-motion.png)
+
+#### Record motion events to Video Analyzer video
+[ ![Screenshot of E-V-R-to-Video-Sink-On-Motion-Detection topology on Visual Studio Code.](./media/visual-studio-code-extension/evr-to-video-sink-on-motion-detection.png) ](./media/visual-studio-code-extension/evr-to-video-sink-on-motion-detection.png)
+
+#### Record motion events to local files
+[ ![Screenshot of E-V-R-to-Files-On-Motion-Detection topology on Visual Studio Code.](./media/visual-studio-code-extension/evr-to-files-on-motion-detection.png) ](./media/visual-studio-code-extension/evr-to-files-on-motion-detection.png)
+
+### Motion Detection
+
+#### Publish motion events to IoT Hub
+[ ![Screenshot of Motion-Detection topology on Visual Studio Code.](./media/visual-studio-code-extension/motion-detection.png) ](./media/visual-studio-code-extension/motion-detection.png)
+
+#### Analyze motion events using gRPC Extension
+[ ![Screenshot of E-V-R-On-Motion-Plus-G-r-p-c-Extension topology on Visual Studio Code.](./media/visual-studio-code-extension/evr-on-motion-plus-grpc-extension.png) ](./media/visual-studio-code-extension/evr-on-motion-plus-grpc-extension.png)
+
+#### Analyze motion events using HTTP Extension
+[ ![Screenshot of E-V-R-On-Motion-Plus-H-t-t-p-Extension topology on Visual Studio Code.](./media/visual-studio-code-extension/evr-on-motion-plus-http-extension.png) ](./media/visual-studio-code-extension/evr-on-motion-plus-http-extension.png)
+
+### Extensions
+
+#### Analyze video using HTTP Extension
+[ ![Screenshot of Inferencing-With-H-t-t-p-Extension topology on Visual Studio Code.](./media/visual-studio-code-extension/inferencing-with-http-extension.png) ](./media/visual-studio-code-extension/inferencing-with-http-extension.png)
+
+#### Analyze video with Intel OpenVINO Model Server
+[ ![Screenshot of Inferencing-With-Open-VINO topology on Visual Studio Code.](./media/visual-studio-code-extension/inferencing-with-openvino.png) ](./media/visual-studio-code-extension/inferencing-with-openvino.png)
+
+### Computer Vision
+
+#### Person count operation with Computer Vision for Spatial Analysis
+[ ![Screenshot of Person-Count-Topology on Visual Studio Code.](./media/visual-studio-code-extension/person-count-topology.png) ](./media/visual-studio-code-extension/person-count-topology.png)
+
+#### Person crossing line operation with Computer Vision for Spatial Analysis
+[ ![Screenshot of Person-Crossing-Line-Topology on Visual Studio Code.](./media/visual-studio-code-extension/person-crossing-line-topology.png) ](./media/visual-studio-code-extension/person-crossing-line-topology.png)
+
+#### Person crossing zone operation with Computer Vision for Spatial Analysis
+[ ![Screenshot of Person-Zone-Crossing-Topology on Visual Studio Code.](./media/visual-studio-code-extension/person-zone-crossing-topology.png) ](./media/visual-studio-code-extension/person-zone-crossing-topology.png)
+
+#### Person distance operation with Computer Vision for Spatial Analysis
+[ ![Screenshot of Person-Distance-Topology on Visual Studio Code.](./media/visual-studio-code-extension/person-distance-topology.png) ](./media/visual-studio-code-extension/person-distance-topology.png)
+
+#### Custom operation with Computer Vision for Spatial Analysis
+[ ![Screenshot of Person-Attributes-Topology on Visual Studio Code.](./media/visual-studio-code-extension/person-attributes-topology.png) ](./media/visual-studio-code-extension/person-attributes-topology.png)
+
+### AI Composition
+
+#### Record to the Video Analyzer service using multiple AI models
+[ ![Screenshot of A-I-Composition topology on Visual Studio Code.](./media/visual-studio-code-extension/ai-composition.png) ](./media/visual-studio-code-extension/ai-composition.png)
+
+### Miscellaneous
+
+#### Record video based on the object tracking AI model
+[ ![Screenshot of Object-Tracking-With-H-t-t-p-Extension topology on Visual Studio Code.](./media/visual-studio-code-extension/object-tracking-with-http-extension.png) ](./media/visual-studio-code-extension/object-tracking-with-http-extension.png)
+
+#### Record video based on the line crossing AI model
+[ ![Screenshot of Line-Crossing-With-H-t-t-p-Extension topology on Visual Studio Code.](./media/visual-studio-code-extension/line-crossing-with-http-extension.png) ](./media/visual-studio-code-extension/line-crossing-with-http-extension.png)
azure-video-analyzer Connect To Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/connect-to-azure.md
When creating an Azure Video Analyzer for Media (formerly Video Indexer) account, you can choose a free trial account (where you get a certain number of free indexing minutes) or a paid option (where you're not limited by the quota). With a free trial, Video Analyzer for Media provides up to 600 minutes of free indexing to users and up to 2400 minutes of free indexing to users that subscribe to the Video Analyzer API on the [developer portal](https://aka.ms/avam-dev-portal). With the paid options, Azure Video Analyzer for Media offers two types of accounts: classic accounts (General Availability) and ARM-based accounts (Public Preview). The main difference between the two is the account management platform. While classic accounts are built on API Management, ARM-based account management is built on Azure, which enables you to apply access control to all services with role-based access control (Azure RBAC) natively. * You can create a Video Analyzer for Media **classic** account through our [API](https://aka.ms/avam-dev-portal).- * You can create a Video Analyzer for Media **ARM-based** account through one of the following: 1. [Video Analyzer for Media portal](https://aka.ms/vi-portal-link)- 2. [Azure portal](https://ms.portal.azure.com/#home)- 3. [QuickStart ARM template](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/ARM-Samples/Create-Account)
-### To read more on how to create a **new ARM-Based** Video Analyzer for Media account, read this [article](create-video-analyzer-for-media-account.md)
+To learn how to create a **new ARM-based** Video Analyzer for Media account, see [this article](create-video-analyzer-for-media-account.md).
## How to create classic accounts This article shows how to create a Video Analyzer for Media classic account. The topic provides steps for connecting to Azure using the automatic (default) flow. It also shows how to connect to Azure manually (advanced).
backup Sap Hana Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/sap-hana-backup-support-matrix.md
Title: SAP HANA Backup support matrix description: In this article, learn about the supported scenarios and limitations when you use Azure Backup to back up SAP HANA databases on Azure VMs. Previously updated : 11/10/2021 Last updated : 01/13/2022
Azure Backup supports the backup of SAP HANA databases to Azure. This article su
| **Topology** | SAP HANA running in Azure Linux VMs only | HANA Large Instances (HLI) | | **Regions** | **GA:**<br> **Americas** – Central US, East US 2, East US, North Central US, South Central US, West US 2, West US 3, West Central US, West US, Canada Central, Canada East, Brazil South <br> **Asia Pacific** – Australia Central, Australia Central 2, Australia East, Australia Southeast, Japan East, Japan West, Korea Central, Korea South, East Asia, Southeast Asia, Central India, South India, West India, China East, China North, China East2, China North 2 <br> **Europe** – West Europe, North Europe, France Central, UK South, UK West, Germany North, Germany West Central, Switzerland North, Switzerland West, Central Switzerland North, Norway East, Norway West <br> **Africa / ME** - South Africa North, South Africa West, UAE North, UAE Central <BR> **Azure Government regions** | France South, Germany Central, Germany Northeast, US Gov IOWA | | **OS versions** | SLES 12 with SP2, SP3, SP4 and SP5; SLES 15 with SP0, SP1, SP2 and SP3 <br><br> RHEL 7.4, 7.6, 7.7, 7.9, 8.1, 8.2 and 8.4 | |
-| **HANA versions** | SDC on HANA 1.x, MDC on HANA 2.x SPS04, SPS05 Rev <= 55 (validated for encryption enabled scenarios as well) | |
+| **HANA versions** | SDC on HANA 1.x, MDC on HANA 2.x SPS04, SPS05 Rev <= 56, SPS 06 (validated for encryption enabled scenarios as well) | |
| **Encryption** | SSLEnforce, HANA data encryption | | | **HANA deployments** | SAP HANA on a single Azure VM - Scale up only. <br><br> For high availability deployments, both the nodes on the two different machines are treated as individual nodes with separate data chains. | Scale-out <br><br> In high availability deployments, backup doesn't fail over to the secondary node automatically. Configuring backup should be done separately for each node. | | **HANA Instances** | A single SAP HANA instance on a single Azure VM – scale up only | Multiple SAP HANA instances on a single VM. You can protect only one of these multiple instances at a time. |
backup Sql Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/sql-support-matrix.md
_*The database size limit depends on the data transfer rate that we support and
* TDE - enabled database backup is supported. To restore a TDE-encrypted database to another SQL Server, you need to first [restore the certificate to the destination server](/sql/relational-databases/security/encryption/move-a-tde-protected-database-to-another-sql-server). The backup compression for TDE-enabled databases for SQL Server 2016 and newer versions is available, but at lower transfer size as explained [here](https://techcommunity.microsoft.com/t5/sql-server/backup-compression-for-tde-enabled-databases-important-fixes-in/ba-p/385593). * The backup and restore operations for mirror databases and database snapshots aren't supported. * SQL Server **Failover Cluster Instance (FCI)** isn't supported.
-* Azure Backup supports only back up of database files with the following extensions - _.ad_, _.cs_, and _.master_. Database files with other extensions, such as _.dll_, aren't backed-up because the IIS server performs the [file extension request filtering](/iis/configuration/system.webserver/security/requestfiltering/fileextensions).
+Backup of databases with extensions in their names isn't supported, because the IIS server performs [file extension request filtering](/iis/configuration/system.webserver/security/requestfiltering/fileextensions). However, the _.ad_, _.cs_, and _.master_ extensions are allowlisted and can be used in database names.
## Backup throughput performance
certification Program Requirements Edge Secured Core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/certification/program-requirements-edge-secured-core.md
Validation|Device to be validated through toolset to ensure the device supports
|Name|SecuredCore.Protection.CodeIntegrity| |:|:|
-|Status|Required[Need confirmation from Deepak and EnS on details of validation and description]|
+|Status|Required|
|Description|The purpose of this test is to validate that code integrity is available on this device.| |Target Availability|2022| |Validation Type|Manual/Tools|
-|Validation|Device to be validated through toolset to ensure that code integrity is enabled. </br> Windows: HVCI </br> Linux: dm-verity and IMA|
+|Validation|Device to be validated through toolset to ensure that code integrity is enabled by validating dm-verity and IMA|
|Resources||
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-msrc-releases.md
The following tables show the Microsoft Security Response Center (MSRC) updates
| Rel 21-12 | [4578955] | .NET Framework 4.5.2 Security and Quality Rollup | [2.117] | Jun 8, 2021 | | Rel 21-12 | [4578953] | .NET Framework 3.5 Security and Quality Rollup | [4.97] | Feb 16, 2021 | | Rel 21-12 | [4578956] | .NET Framework 4.5.2 Security and Quality Rollup | [4.97] | Feb 16, 2021 |
-| Rel 21-12 | [4578950  ] | .NET Framework 3.5 Security and Quality Rollup | [3.104] | Feb 16, 2021 |
-| Rel 21-12 | [4578954 ] | . NET Framework 4.5.2 Security and Quality Rollup | [3.104] | Feb 16, 2021 |
-| Rel 21-12 | [5004335 ] | . NET Framework 3.5 and 4.7.2 Cumulative Update | [6.38] | Aug 10, 2021 |
+| Rel 21-12 | [4578950] | .NET Framework 3.5 Security and Quality Rollup | [3.104] | Feb 16, 2021 |
+| Rel 21-12 | [4578954] | .NET Framework 4.5.2 Security and Quality Rollup | [3.104] | Feb 16, 2021 |
+| Rel 21-12 | [5004335] | .NET Framework 3.5 and 4.7.2 Cumulative Update | [6.38] | Aug 10, 2021 |
| Rel 21-12 | [5008244] | Monthly Rollup | [2.117] | Sep 14, 2021 | | Rel 21-12 | [5008277] | Monthly Rollup | [3.104] | Sep 14, 2021 | | Rel 21-12 | [5008263] | Monthly Rollup | [4.97] | Sep 14, 2021 |
-| Rel 21-12 | [5001401 ] | Servicing Stack update | [3.104] | Apr 13, 2021 |
-| Rel 21-12 | [5001403 ] | Servicing Stack update | [4.97] | Apr 13, 2021 |
+| Rel 21-12 | [5001401] | Servicing Stack update | [3.104] | Apr 13, 2021 |
+| Rel 21-12 | [5001403] | Servicing Stack update | [4.97] | Apr 13, 2021 |
| Rel 21-12 OOB | [4578013] | Standalone Security Update | [4.97] | Aug 19, 2020 | | Rel 21-12 | [5005698] | Servicing Stack update | [5.62] | Sep 14, 2021 | | Rel 21-12 | [5006749] | Servicing Stack update | [2.117] | July 13, 2021 |
-| Rel 21-12 | [5008287 ] | Servicing Stack update | [6.38] | Aug 10, 2021 |
-| Rel 21-12 | [4494175 ] | Microcode | [5.62] | Sep 1, 2020 |
+| Rel 21-12 | [5008287] | Servicing Stack update | [6.38] | Aug 10, 2021 |
+| Rel 21-12 | [4494175] | Microcode | [5.62] | Sep 1, 2020 |
| Rel 21-12 | [4494174] | Microcode | [6.38] | Sep 1, 2020 | [5008218]: https://support.microsoft.com/kb/5008218
The following tables show the Microsoft Security Response Center (MSRC) updates
[4578955]: https://support.microsoft.com/kb/4578955 [4578953]: https://support.microsoft.com/kb/4578953 [4578956]: https://support.microsoft.com/kb/4578956
-[4578950  ]: https://support.microsoft.com/kb/4578950  
-[4578954 ]: https://support.microsoft.com/kb/4578954 
-[5004335 ]: https://support.microsoft.com/kb/5004335 
+[4578950]: https://support.microsoft.com/kb/4578950  
+[4578954]: https://support.microsoft.com/kb/4578954 
+[5004335]: https://support.microsoft.com/kb/5004335 
[5008244]: https://support.microsoft.com/kb/5008244 [5008277]: https://support.microsoft.com/kb/5008277 [5008263]: https://support.microsoft.com/kb/5008263
-[5001401 ]: https://support.microsoft.com/kb/5001401 
-[5001403 ]: https://support.microsoft.com/kb/5001403 
+[5001401]: https://support.microsoft.com/kb/5001401 
+[5001403]: https://support.microsoft.com/kb/5001403 
[4578013]: https://support.microsoft.com/kb/4578013 [5005698]: https://support.microsoft.com/kb/5005698 [5006749]: https://support.microsoft.com/kb/5006749
-[5008287 ]: https://support.microsoft.com/kb/5008287 
+[5008287]: https://support.microsoft.com/kb/5008287 
[4494175]: https://support.microsoft.com/kb/4494175 [4494174]: https://support.microsoft.com/kb/4494174 [2.117]: ./cloud-services-guestos-update-matrix.md#family-2-releases
cognitive-services Export Your Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/export-your-model.md
Custom Vision Service supports the following exports:
* __CoreML__ for __iOS11__. * __ONNX__ for __Windows ML__, **Android**, and **iOS**. * __[Vision AI Developer Kit](https://azure.github.io/Vision-AI-DevKit-Pages/)__.
-* A __Docker container__ for Windows, Linux, or ARM architecture. The container includes a Tensorflow model and service code to use the Custom Vision API.
+* A __Docker container__ for Windows, Linux, or ARM architecture. The container includes a TensorFlow model and service code to use the Custom Vision API.
> [!IMPORTANT] > Custom Vision Service only exports __compact__ domains. The models generated by compact domains are optimized for the constraints of real-time classification on mobile devices. Classifiers built with a compact domain may be slightly less accurate than a standard domain with the same amount of training data.
cognitive-services Get Started Build Detector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/get-started-build-detector.md
Previously updated : 09/27/2021 Last updated : 01/12/2022 keywords: image recognition, image recognition app, custom vision
keywords: image recognition, image recognition app, custom vision
# Quickstart: Build an object detector with the Custom Vision website
-In this quickstart, you'll learn how to use the Custom Vision website to create an object detector model. Once you build a model, you can test it with new images and eventually integrate it into your own image recognition app.
+In this quickstart, you'll learn how to use the Custom Vision website to create an object detector model. Once you build a model, you can test it with new images and integrate it into your own image recognition app.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
In your web browser, navigate to the [Custom Vision web page](https://customvisi
1. Select __Object Detection__ under __Project Types__.
-1. Next, select one of the available domains. Each domain optimizes the detector for specific types of images, as described in the following table. You will be able to change the domain later if you wish.
+1. Next, select one of the available domains. Each domain optimizes the detector for specific types of images, as described in the following table. You can change the domain later if you want to.
|Domain|Purpose| |||
- |__General__| Optimized for a broad range of object detection tasks. If none of the other domains are appropriate, or you are unsure of which domain to choose, select the Generic domain. |
+ |__General__| Optimized for a broad range of object detection tasks. If none of the other domains are appropriate, or if you're unsure about which domain to choose, select the __General__ domain. |
|__Logo__|Optimized for finding brand logos in images.| |__Products on shelves__|Optimized for detecting and classifying products on shelves.| |__Compact domains__| Optimized for the constraints of real-time object detection on mobile devices. The models generated by compact domains can be exported to run locally.|
In your web browser, navigate to the [Custom Vision web page](https://customvisi
## Upload and tag images
-In this section, you will upload and manually tag images to help train the detector.
+In this section, you'll upload and manually tag images to help train the detector.
1. To add images, select __Add images__ and then select __Browse local files__. Select __Open__ to upload the images. ![The add images control is shown in the upper left, and as a button at bottom center.](./media/get-started-build-detector/add-images.png)
-1. You'll see your uploaded images in the **Untagged** section of the UI. The next step is to manually tag the objects that you want the detector to learn to recognize. Click the first image to open the tagging dialog window.
+1. You'll see your uploaded images in the **Untagged** section of the UI. The next step is to manually tag the objects that you want the detector to learn to recognize. Select the first image to open the tagging dialog window.
![Images uploaded, in Untagged section](./media/get-started-build-detector/images-untagged.png)
After training has completed, the model's performance is calculated and displaye
### Overlap threshold
-The **Overlap Threshold** slider deals with how correct an object prediction must be to be considered "correct" in training. It sets the minimum allowed overlap between the predicted object bounding box and the actual user-entered bounding box. If the bounding boxes don't overlap to this degree, the prediction won't be considered correct.
+The **Overlap Threshold** slider deals with how correct an object prediction must be to be considered "correct" in training. It sets the minimum allowed overlap between the predicted object's bounding box and the actual user-entered bounding box. If the bounding boxes don't overlap to this degree, the prediction won't be considered correct.
## Manage training iterations
cognitive-services Test Your Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/test-your-model.md
Title: Test and retrain a model - Custom Vision Service
-description: Learn how to test an image and then use it to re-train your model in the Custom Vision service.
+description: Learn how to test an image and then use it to retrain your model in the Custom Vision service.
# Test and retrain a model with Custom Vision Service
-After you train your model, you can quickly test it using a locally stored image or an online image. The test uses the most recently trained iteration of your model.
+After you train your Custom Vision model, you can quickly test it using a locally stored image or a URL pointing to a remote image. The test uses the most recently trained iteration of your model. Then you can decide whether further training is needed.
## Test your model
-1. From the [Custom Vision web page](https://customvision.ai), select your project. Select **Quick Test** on the right of the top menu bar. This action opens a window labeled **Quick Test**.
+1. From the [Custom Vision web portal](https://customvision.ai), select your project. Select **Quick Test** on the right of the top menu bar. This action opens a window labeled **Quick Test**.
![The Quick Test button is shown in the upper right corner of the window.](./media/test-your-model/quick-test-button.png)
-2. In the **Quick Test** window, click in the **Submit Image** field and enter the URL of the image you want to use for your test. If you want to use a locally stored image instead, click the **Browse local files** button and select a local image file.
+2. In the **Quick Test** window, select the **Submit Image** field and enter the URL of the image you want to use for your test. To use a locally stored image instead, select the **Browse local files** button and select a local image file.
![Image of the submit image page](./media/test-your-model/submit-image.png)
-The image you select appears in the middle of the page. Then the results appear below the image in the form of a table with two columns, labeled **Tags** and **Confidence**. After you view the results, you may close the **Quick Test** window.
-
-You can now add this test image to your model and then retrain your model.
+The image you select appears in the middle of the page. Then the prediction results appear below the image in the form of a table with two columns, labeled **Tags** and **Confidence**. After you view the results, you may close the **Quick Test** window.
## Use the predicted image for training
-To use the image submitted previously for training, use the following steps:
+You can now take the image submitted previously for testing and use it to retrain your model.
1. To view images submitted to the classifier, open the [Custom Vision web page](https://customvision.ai) and select the __Predictions__ tab.
To use the image submitted previously for training, use the following steps:
> [!TIP] > Images are ranked, so that the images that can bring the most gains to the classifier are at the top. To select a different sorting, use the __Sort__ section.
- To add an image to your training data, select the image, select the tag, and then select __Save and close__. The image is removed from __Predictions__ and added to the training images. You can view it by selecting the __Training Images__ tab.
+ To add an image to your training data, select the image, manually select the tag(s), and then select __Save and close__. The image is removed from __Predictions__ and added to the training images. You can view it by selecting the __Training Images__ tab.
![Image of the tagging page](./media/test-your-model/tag-image.png)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
Below table lists out the prebuilt neural voices supported in each language. You
| Arabic (Egypt) | `ar-EG` | Male | `ar-EG-ShakirNeural` | General | | Arabic (Iraq) | `ar-IQ` | Female | `ar-IQ-RanaNeural` <sup>New</sup> | General | | Arabic (Iraq) | `ar-IQ` | Male | `ar-IQ-BasselNeural` <sup>New</sup> | General |
-| Arabic (Jordan) | `ar-JO` | Female | `ar-JO-Sana Neural` <sup>New</sup> | General |
-| Arabic (Jordan) | `ar-JO` | Male | `ar-JO-Taim Neural` <sup>New</sup> | General |
+| Arabic (Jordan) | `ar-JO` | Female | `ar-JO-SanaNeural` <sup>New</sup> | General |
+| Arabic (Jordan) | `ar-JO` | Male | `ar-JO-TaimNeural` <sup>New</sup> | General |
| Arabic (Kuwait) | `ar-KW` | Female | `ar-KW-NouraNeural` <sup>New</sup> | General | | Arabic (Kuwait) | `ar-KW` | Male | `ar-KW-FahedNeural` <sup>New</sup> | General | | Arabic (Libya) | `ar-LY` | Female | `ar-LY-ImanNeural` <sup>New</sup> | General |
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/custom-classification/quickstart.md
Previously updated : 11/02/2021 Last updated : 01/13/2022 zone_pivot_groups: usage-custom-language-features
communication-services Closed Captions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/closed-captions.md
+
+ Title: Azure Communication Services Closed Caption overview
+
+description: Learn about the Azure Communication Services Closed Captions.
+++++ Last updated : 12/16/2021++++
+# Closed Captions overview
++
+Azure Communication Services lets you enable Closed Captions for VoIP calls in private preview.
+Closed Captions are the conversion of a voice or video call's audio track into written words that appear in real time. Closed Captions are never saved and are visible only to the user who has enabled them.
+Here are the main scenarios where Closed Captions are useful:
+
+- **Accessibility**. In the workplace or consumer apps, Closed Captioning for meetings, conference calls, and training videos can make a huge difference.
+- **Accessibility**. Scenarios when audio can't be heard, either because of a noisy environment, such as an airport, or because of an environment that must be kept quiet, such as a hospital.
+- **Inclusivity**. Closed Captioning was developed to aid hearing-impaired people, but it can also be useful for people who are building language proficiency.
+
+## When to use Closed Captions
+
+- Closed Captions help maintain concentration and engagement, which can provide a better experience for viewers with learning disabilities, a language barrier, attention deficit disorder, or hearing impairment.
+- Closed Captions allow participants to be on the call in loud or sound-sensitive environments.
+
+## Feature highlights
+
+- Cross-platform support.
+- Async processing with client subscription to events and callbacks.
+- Multiple languages to choose from for recognition.
+- Support for existing SkypeToken authentication.
++
+## Availability
+The private preview will be available on all platforms.
+- Android
+- iOS
+- Web
+
+## Next steps
+
+- Get started with a Closed Caption Quickstart (TBD)
++++
cosmos-db Emulator Command Line Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/emulator-command-line-parameters.md
Import-Module "$env:ProgramFiles\Azure Cosmos DB Emulator\PSModules\Microsoft.Az
or place the `PSModules` directory on your `PSModulePath` and import it as shown in the following command: ```powershell
-$env:PSModulePath += "$env:ProgramFiles\Azure Cosmos DB Emulator\PSModules"
+$env:PSModulePath += ";$env:ProgramFiles\Azure Cosmos DB Emulator\PSModules"
Import-Module Microsoft.Azure.CosmosDB.Emulator ```
cost-management-billing Account Admin Tasks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/account-admin-tasks.md
This article explains how to perform the following tasks in the Azure portal:
You must be the Account Administrator to perform any of these tasks.
-## Accounts portal is retiring
+## Accounts portal is retired
-Accounts portal will retire and customers will be redirected to the Azure portal by December 31, 2021. The features supported in the Accounts portal will be migrated to the Azure portal. This article explains how to perform some of the most common operations in the Azure portal.
+The Accounts portal was retired on December 31, 2021. The features that the Accounts portal supported were migrated to the Azure portal. This article explains how to perform some of the most common operations in the Azure portal.
## Navigate to your subscription's payment methods
data-factory Concepts Roles Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-roles-permissions.md
Permissions on Azure Repos and GitHub are independent of Data Factory permission
In the publish context, the **Microsoft.DataFactory/factories/write** permission applies to the following modes. - That permission is only required in Live mode when the customer modifies the global parameters.-
+- That permission is always required in Git mode since every time after the customer publishes, the factory object with the last commit ID needs to be updated.
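For illustration only, a minimal Azure custom role granting the permission discussed above might look like the following sketch. The role name, description, and subscription ID are placeholders, and your scenario may need a broader action set.

```json
{
  "Name": "Data Factory publisher (example)",
  "IsCustom": true,
  "Description": "Example role that can update factory objects, such as during a Git-mode publish.",
  "Actions": [
    "Microsoft.DataFactory/factories/read",
    "Microsoft.DataFactory/factories/write"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/00000000-0000-0000-0000-000000000000"
  ]
}
```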
### Custom scenarios and custom roles
data-factory Connector Amazon Rds For Oracle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-amazon-rds-for-oracle.md
More connection properties you can set in connection string per your case:
| Property | Description | Allowed values | |: |: |: |
-| ArraySize |The number of bytes the connector can fetch in a single network round trip. E.g., `ArraySize=‭10485760‬`.<br/><br/>Larger values increase throughput by reducing the number of times to fetch data across the network. Smaller values increase response time, as there is less of a delay waiting for the server to transmit data. | An integer from 1 to 4294967296 (4 GB). Default value is `60000`. The value 1 does not define the number of bytes, but indicates allocating space for exactly one row of data. |
+| ArraySize |The number of bytes the connector can fetch in a single network round trip. For example, `ArraySize=‭10485760‬`.<br/><br/>Larger values increase throughput by reducing the number of times to fetch data across the network. Smaller values increase response time, as there is less of a delay waiting for the server to transmit data. | An integer from 1 to 4294967296 (4 GB). Default value is `60000`. The value 1 does not define the number of bytes, but indicates allocating space for exactly one row of data. |
To enable encryption on Amazon RDS for Oracle connection, you have two options:
You are suggested to enable parallel copy with data partitioning especially when
| Full load from large table, with physical partitions. | **Partition option**: Physical partitions of table. <br><br/>During execution, the service automatically detects the physical partitions, and copies data by partitions. | | Full load from large table, without physical partitions, while with an integer column for data partitioning. | **Partition options**: Dynamic range partition.<br>**Partition column**: Specify the column used to partition data. If not specified, the primary key column is used. | | Load a large amount of data by using a custom query, with physical partitions. | **Partition option**: Physical partitions of table.<br>**Query**: `SELECT * FROM <TABLENAME> PARTITION("?AdfTabularPartitionName") WHERE <your_additional_where_clause>`.<br>**Partition name**: Specify the partition name(s) to copy data from. If not specified, the service automatically detects the physical partitions on the table you specified in the Amazon RDS for Oracle dataset.<br><br>During execution, the service replaces `?AdfTabularPartitionName` with the actual partition name, and sends to Amazon RDS for Oracle. |
-| Load a large amount of data by using a custom query, without physical partitions, while with an integer column for data partitioning. | **Partition options**: Dynamic range partition.<br>**Query**: `SELECT * FROM <TABLENAME> WHERE ?AdfRangePartitionColumnName <= ?AdfRangePartitionUpbound AND ?AdfRangePartitionColumnName >= ?AdfRangePartitionLowbound AND <your_additional_where_clause>`.<br>**Partition column**: Specify the column used to partition data. You can partition against the column with integer data type.<br>**Partition upper bound** and **partition lower bound**: Specify if you want to filter against partition column to retrieve data only between the lower and upper range.<br><br>During execution, the service replaces `?AdfRangePartitionColumnName`, `?AdfRangePartitionUpbound`, and `?AdfRangePartitionLowbound` with the actual column name and value ranges for each partition, and sends to Amazon RDS for Oracle. <br>For example, if your partition column "ID" is set with the lower bound as 1 and the upper bound as 80, with parallel copy set as 4, the service retrieves data by 4 partitions. Their IDs are between [1,20], [21, 40], [41, 60], and [61, 80], respectively. |
+| Load a large amount of data by using a custom query, without physical partitions, while with an integer column for data partitioning. | **Partition options**: Dynamic range partition.<br>**Query**: `SELECT * FROM <TABLENAME> WHERE ?AdfRangePartitionColumnName <= ?AdfRangePartitionUpbound AND ?AdfRangePartitionColumnName >= ?AdfRangePartitionLowbound AND <your_additional_where_clause>`.<br>**Partition column**: Specify the column used to partition data. You can partition against the column with integer data type.<br>**Partition upper bound** and **partition lower bound**: Specify if you want to filter against partition column to retrieve data only between the lower and upper range.<br><br>During execution, the service replaces `?AdfRangePartitionColumnName`, `?AdfRangePartitionUpbound`, and `?AdfRangePartitionLowbound` with the actual column name and value ranges for each partition, and sends to Amazon RDS for Oracle. <br>For example, if your partition column "ID" is set with the lower bound as 1 and the upper bound as 80, with parallel copy set as 4, the service retrieves data by 4 partitions. Their IDs are between [1, 20], [21, 40], [41, 60], and [61, 80], respectively. |
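As a rough sketch of how the dynamic range option from the table can appear in a copy activity's `typeProperties`, consider the fragment below. The column name `ID` and the bounds mirror the example above, and the Parquet sink is a placeholder; verify the exact property names against the copy activity JSON reference for this connector.

```json
{
  "source": {
    "type": "AmazonRdsForOracleSource",
    "partitionOption": "DynamicRange",
    "partitionSettings": {
      "partitionColumnName": "ID",
      "partitionLowerBound": "1",
      "partitionUpperBound": "80"
    }
  },
  "sink": {
    "type": "ParquetSink"
  },
  "parallelCopies": 4
}
```

With these settings, the service issues four parallel queries, each substituted with one of the ID ranges listed in the table.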
> [!TIP] > When copying data from a non-partitioned table, you can use "Dynamic range" partition option to partition against an integer column. If your source data doesn't have such type of column, you can leverage [ORA_HASH]( https://docs.oracle.com/database/121/SQLRF/functions136.htm) function in source query to generate a column and use it as partition column.
data-factory Create Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-self-hosted-integration-runtime.md
Here is a high-level summary of the data-flow steps for copying with a self-host
- Windows Server 2012 R2 - Windows Server 2016 - Windows Server 2019
+ - Windows Server 2022
Installation of the self-hosted integration runtime on a domain controller isn't supported.
data-lake-analytics Migrate Azure Data Lake Analytics To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/migrate-azure-data-lake-analytics-to-synapse.md
The document shows you how to do the migration from Azure Data Lake Analytics to
### Step 4: Cut over from Azure Data Lake Analytics to Azure Synapse Analytics
-After you're confident that your applications and workloads are stable, you can begin using Azure Synapse Analytics to satisfy your business scenarios. Turn off any remaining pipelines that are running on Azure Data Lake Analytics and decommission your Azure Data Lake Analytics accounts.
+After you're confident that your applications and workloads are stable, you can begin using Azure Synapse Analytics to satisfy your business scenarios. Turn off any remaining pipelines that are running on Azure Data Lake Analytics and retire your Azure Data Lake Analytics accounts.
<a name="questionnaire"></a> ## Questionnaire for Migration Assessment
databox Data Box Customer Managed Encryption Key Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-customer-managed-encryption-key-portal.md
Previously updated : 01/10/2022 Last updated : 01/13/2022
If you receive any errors related to your customer-managed key, use the followin
| SsemUserErrorKeyVaultDetailsNotFound| Could not fetch the passkey as the associated key vault for the customer-managed key could not be found. | If you deleted the key vault, you can't recover the customer-managed key. If you migrated the key vault to a different tenant, see [Change a key vault tenant ID after a subscription move](../key-vault/general/move-subscription.md). If you deleted the key vault:<ol><li>Yes, if it is in the purge-protection duration, using the steps at [Recover a key vault](../key-vault/general/key-vault-recovery.md?tabs=azure-powershell#key-vault-powershell).</li><li>No, if it is beyond the purge-protection duration.</li></ol><br>Else if the key vault underwent a tenant migration, yes, it can be recovered using one of the below steps: <ol><li>Revert the key vault back to the old tenant.</li><li>Set `Identity = None` and then set the value back to `Identity = SystemAssigned`. This deletes and recreates the identity once the new identity has been created. Enable `Get`, `WrapKey`, and `UnwrapKey` permissions to the new identity in the key vault's Access policy.</li></ol> | | SsemUserErrorSystemAssignedIdentityAbsent | Could not fetch the passkey as the customer-managed key could not be found.| Yes, check if: <ol><li>Key vault still has the MSI in the access policy.</li><li>Identity is of type System assigned.</li><li>Enable `Get`, `WrapKey`, and `UnwrapKey` permissions to the identity in the key vault's access policy. These permissions must remain for the lifetime of the order. They're used during order creation and at the beginning of the Data Copy phase.</li></ol>| | SsemUserErrorUserAssignedLimitReached | Adding new User Assigned Identity failed as you have reached the limit on the total number of user assigned identities that can be added. | Retry the operation with fewer user identities, or remove some user-assigned identities from the resource before retrying. |
-| SsemUserErrorCrossTenantIdentityAccessForbidden | Managed identity access operation failed. <br> Note: This error can occur when a subscription is moved to different tenant. The customer has to manually move the identity to the new tenant. PFA mail for more details. | Move the identity selected to the new tenant under which the subscription is present. For more information, see how to [Enable the key](#enable-key). |
+| SsemUserErrorCrossTenantIdentityAccessForbidden | Managed identity access operation failed. <br> Note: This error can occur when a subscription is moved to a different tenant. The customer has to manually move the identity to the new tenant. | Try adding a different user-assigned identity to your key vault to enable access to the customer-managed key. Or move the identity to the new tenant under which the subscription is present. For more information, see how to [Enable the key](#enable-key). |
| SsemUserErrorKekUserIdentityNotFound | Applied a customer-managed key but the user assigned identity that has access to the key was not found in the active directory. <br> Note: This error can occur when a user identity is deleted from Azure.| Try adding a different user-assigned identity to your key vault to enable access to the customer-managed key. For more information, see how to [Enable the key](#enable-key). | | SsemUserErrorUserAssignedIdentityAbsent | Could not fetch the passkey as the customer-managed key could not be found. | Could not access the customer-managed key. Either the User Assigned Identity (UAI) associated with the key is deleted or the UAI type has changed. |
-| SsemUserErrorCrossTenantIdentityAccessForbidden | Managed identity access operation failed. <br> Note: This error can occur when a subscription is moved to different tenant. The customer has to manually move the identity to the new tenant. PFA mail for more details. | Try adding a different user-assigned identity to your key vault to enable access to the customer-managed key. For more information, see how to [Enable the key](#enable-key).|
| SsemUserErrorKeyVaultBadRequestException | Applied a customer-managed key, but key access has not been granted or has been revoked, or the key vault couldn't be accessed because a firewall is enabled. | Add the identity selected to your key vault to enable access to the customer-managed key. If the key vault has a firewall enabled, switch to a system-assigned identity and then add a customer-managed key. For more information, see how to [Enable the key](#enable-key). |
-| Generic error | Could not fetch the passkey. | This error is a generic error. Contact Microsoft Support to troubleshoot the error and determine the next steps.|
| SsemUserErrorEncryptionKeyTypeNotSupported | The encryption key type isn't supported for the operation. | Enable a supported encryption type on the key - for example, RSA or RSA-HSM. For more information, see [Key types, algorithms, and operations](/azure/key-vault/keys/about-keys-details). | | SsemUserErrorSoftDeleteAndPurgeProtectionNotEnabled | Key vault does not have soft delete or purge protection enabled. | Ensure that both soft delete and purge protection are enabled on the key vault. | | SsemUserErrorInvalidKeyVaultUrl<br>(Command-line only) | An invalid key vault URI was used. | Get the correct key vault URI. To get the key vault URI, use [Get-AzKeyVault](/powershell/module/az.keyvault/get-azkeyvault?view=azps-7.1.0) in PowerShell. | | SsemUserErrorKeyVaultUrlWithInvalidScheme | Only HTTPS is supported for passing the key vault URI. | Pass the key vault URI over HTTPS. | | SsemUserErrorKeyVaultUrlInvalidHost | The key vault URI host is not an allowed host in the geographical region. | In the public cloud, the key vault URI should end with `vault.azure.net`. In the Azure Government cloud, the key vault URI should end with `vault.usgovcloudapi.net`. | -
+| Generic error | Could not fetch the passkey. | This error is a generic error. Contact Microsoft Support to troubleshoot the error and determine the next steps.|
## Next steps
databox Data Box Deploy Ordered https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-deploy-ordered.md
Previously updated : 01/10/2022 Last updated : 01/11/2022 #Customer intent: As an IT admin, I need to be able to order Data Box to upload on-premises data from my server onto Azure.
Do the following steps using Azure PowerShell to order a device:
After you place the order, you can track the status of the order from Azure portal. Go to your Data Box order and then go to **Overview** to view the status. The portal shows the order in **Ordered** state.
-If the device is not available, you receive a notification. If the device is available, Microsoft identifies the device for shipment and prepares the shipment. During device preparation, following actions occur:
+If the device isn't available, you receive a notification. If the device is available, Microsoft identifies the device for shipment and prepares the shipment. During device preparation, the following actions occur:
* SMB shares are created for each storage account associated with the device.
* For each share, access credentials such as username and password are generated.
databox Data Box Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-limits.md
Previously updated : 01/11/2022 Last updated : 01/13/2022 # Azure Data Box limits
devtest-labs Add Artifact Repository https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/add-artifact-repository.md
Title: Add an artifact repository to your lab
-description: Learn how to specify your own artifact repository for your lab in Azure DevTest Labs to store tools unavailable in the public artifact repository.
+description: Learn how to add a private artifact repository to your lab to store your custom artifacts.
Previously updated : 10/19/2021 Last updated : 01/11/2022
-# Add an artifact repository to your lab in DevTest Labs
-DevTest Labs allows you to specify an artifact to be added to a VM at the time of creating the VM or after the VM is created. This artifact could be a tool or an application that you want to install on the VM. Artifacts are defined in a JSON file loaded from a GitHub or Azure DevOps Git repository.
+# Add an artifact repository to a lab
-The [public artifact repository](https://github.com/Azure/azure-devtestlab/tree/master/Artifacts), maintained by DevTest Labs, provides many common tools for both Windows and Linux. A link to this repository is automatically added to your lab. You can create your own artifact repository with specific tools that aren't available in the public artifact repository. To learn about creating custom artifacts, see [Create custom artifacts](devtest-lab-artifact-author.md).
+This article tells you how to add an *artifact* repository to your lab in Azure DevTest Labs. Artifacts are tools or applications to install on virtual machines (VMs). You define artifacts in a JSON file that you load from a GitHub or Azure Repos Git repository.
-This article provides information on how to add your custom artifact repository by using Azure portal, Azure Resource Management templates, and Azure PowerShell. You can automate adding an artifact repository to a lab by writing PowerShell or CLI scripts.
+The public [DevTest Labs GitHub artifact repository](https://github.com/Azure/azure-devtestlab/tree/master/Artifacts) provides many common artifacts for Windows and Linux. The artifacts in this public repository are available by default in DevTest Labs. For information about adding artifacts to VMs, see [Add artifacts to DevTest Labs VMs](add-artifact-vm.md).
+
+You can also create custom artifacts that aren't available in the public artifact repository. To learn about creating custom artifacts, see [Create custom artifacts](devtest-lab-artifact-author.md). You can add your custom artifacts to your own artifact repository, and add the repository to your lab so all lab users can use the artifacts.
+
+This article shows you how to add an artifact repository to your lab by using the Azure portal, an Azure Resource Management (ARM) template, or Azure PowerShell. You can also use an Azure PowerShell or Azure CLI script to automate adding an artifact repository to a lab.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] ## Prerequisites
-To add a repository to your lab, first, get key information from your repository. The following sections describe how to get the required information for repositories that are hosted on **GitHub** or **Azure DevOps**.
-
-### Get the GitHub repository clone URL and personal access token
-
-1. Go to the home page of the GitHub repository that contains the artifact or Resource Manager template definitions.
-2. Select **Clone or download**.
-3. To copy the URL to the clipboard, select the **HTTPS clone url** button. Save the URL for later use.
-4. In the upper-right corner of GitHub, select the profile image, and then select **Settings**.
-5. In the **Personal settings** menu on the left, select **Developer Settings**.
-6. Select **Personal access tokens** on the left menu.
-7. Select **Generate new token**.
-8. On the **New personal access token** page, under **Token description**, enter a description. Accept the default items under **Select scopes**, and then select **Generate Token**.
-9. Save the generated token. You use the token later.
-10. Close GitHub.
-
-### Get the Azure Repos clone URL and personal access token
-1. Go to the home page of your team collection (for example, `https://contoso-web-team.visualstudio.com`), and then select your project.
-2. On the project home page, select **Code**.
-3. To view the clone URL, on the project **Code** page, select **Clone**.
-4. Save the URL. You use the URL later.
-5. To create a personal access token, in the user account drop-down menu, select **My profile**.
-6. On the profile information page, select **Security**.
-7. On the **Security > Personal access tokens** tab, select **+ New Token**.
-8. On the **Create a new personal access token** page:
- 1. Enter a **Name** for the token.
- 2. In the **Organization** list, select the organization the repo belongs to.
- 3. In the **Expiration (UTC)** list, select **90 days**, or a custom defined expiration period.
- 4. Select the **Custom defined** option for Scopes and select only **Code - Read**.
- 5. Select **Create**.
-9. The new token appears in the **Personal Access Tokens** list. Select **Copy Token**, and then save the token value for later use.
-10. Continue to the Connect your lab to the repository section.
-
-## Use Azure portal
-This section provides steps to add an artifact repository to a lab in the Azure portal.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select **More Services**, and then select **DevTest Labs** from the list of services.
-3. From the list of labs, select your lab.
-4. Select **Configuration and policies** on the left menu.
-5. Select **Repositories** under **External resources** section on the left menu.
-6. Select **+ Add** on the toolbar.
-
- ![The Add repository button](./media/devtest-lab-add-repo/devtestlab-add-repo.png)
-5. On the **Repositories** page, specify the following information:
- 1. **Name**. Enter a name for the repository.
- 2. **Git Clone Url**. Enter the Git HTTPS clone URL that you copied earlier from either GitHub or Azure DevOps Services.
- 3. **Branch**. To get your definitions, enter the branch.
- 4. **Personal Access Token**. Enter the personal access token that you got earlier from either GitHub or Azure DevOps Services.
- 5. **Folder Paths**. Enter at least one folder path relative to the clone URL that contains your artifact or Resource Manager template definitions. When you specify a subdirectory, make sure you include the forward slash in the folder path.
-
- ![Repositories area](./media/devtest-lab-add-repo/devtestlab-repo-blade.png)
-6. Select **Save**.
-
-## Use Azure Resource Manager template
-Azure Resource Management (Azure Resource Manager) templates are JSON files that describe resources in Azure that you want to create. For more information about these templates, see [Authoring Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md).
-
-This section provides steps to add an artifact repository to a lab by using an Azure Resource Manager template. The template creates the lab if it doesn't already exist.
-
-### Template
-The sample template used in this article gathers the following information via parameters. Most of the parameters do have smart defaults, but there are a few values that must be specified. Specify the lab name, URI for the artifact repository, and the security token for the repository.
+To add an artifact repository to a lab, you need to know the Git HTTPS clone URL and the personal access token for the GitHub or Azure Repos repository that has the artifact files.
+
+### Get the clone URL and personal access token for GitHub
+
+1. On the home page of the GitHub repository that has your artifacts, select **Code**, and under **Clone**, copy the HTTPS URL.
+1. Select your profile image in the upper-right corner of GitHub, and then select **Settings**.
+1. On your profile page, in the left menu, select **Developer Settings**, and then select **Personal access tokens**.
+1. Select **Generate new token**.
+1. On the **New personal access token** page, under **Note**, enter an optional description for the token. Accept all the defaults, and then select **Generate token**.
+1. Save the generated token.
+
+### Get the clone URL and personal access token for Azure Repos
+
+1. On the main page of the repository that has your artifacts, select **Clone**. On the **Clone Repository** page, copy the clone URL.
+1. In the upper-right corner of the Azure DevOps page, select **User settings** > **Personal access tokens**.
+1. On the **Personal Access Tokens** page, select **New Token**.
+1. Fill out the information for the token, selecting **Read** for the scopes, and then select **Create**.
+1. On the **Success** page, be sure to copy the token, because Azure Repos doesn't store the token or show it again.
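Before adding the repository to the lab, you can optionally confirm that the clone URL and token work together by running `git ls-remote` from a command prompt. This check isn't part of the DevTest Labs setup, and the URL and token below are placeholders; both GitHub and Azure Repos accept the personal access token as the password, either when prompted or embedded in the URL.

```powershell
# Optional check: list remote branches by using the clone URL and personal access token (PAT).
# Replace the placeholders with your own values. If the command lists refs, the URL and PAT work.
git ls-remote https://<personal-access-token>@github.com/<owner>/<repository>.git
```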
+
+## Add an artifact repository to a lab in the Azure portal
+
+1. On the lab's **Overview** page, select **Configuration and policies** from the left navigation.
+
+1. On the **Configuration and policies** page, select **Repositories** under **External resources** in the left navigation.
+
+ On the **Repositories** page, the **Public Artifact Repo** is automatically present and connects to the [DevTest Labs public GitHub repository](https://github.com/Azure/azure-devtestlab). If this repo isn't enabled for your lab, you can enable it by selecting the checkbox next to **Public Artifact Repo**, and then selecting **Enable** on the top menu bar.
+
+1. To add your artifact repository to the lab, select **Add** in the top menu bar.
+
+ ![Screenshot that shows the Repositories configuration screen.](media/devtest-lab-add-repo/devtestlab-add-repo.png)
+
+1. In the **Repository** pane, enter the following information:
+
+ - **Name**: A repository name to use in the lab.
+ - **Git clone URL**: The Git HTTPS clone URL from GitHub or Azure Repos.
+ - **Branch** (optional): The branch that has your artifact definitions.
+ - **Personal access token**: The personal access token from GitHub or Azure Repos.
+ - **Folder paths**: The folder that contains your artifact or ARM template definitions, relative to the Git clone URL. Be sure to include the initial forward slash in the folder path.
+
+1. Select **Save**.
+
+ ![Screenshot that shows adding a new artifact repository to a lab.](media/devtest-lab-add-repo/devtestlab-repo-blade.png)
+
+The repository now appears in the **Repositories** list for the lab.
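If you prefer scripting, you can also confirm that the repository was added by querying the lab's artifact sources. The following sketch uses the same `Get-AzResource` call pattern as the script later in this article; the lab and resource group names are placeholders.

```powershell
# List the artifact sources (repositories) registered for a lab. Names are placeholders.
Get-AzResource -ResourceGroupName 'mydtlrg' `
    -ResourceType 'Microsoft.DevTestLab/labs/artifactsources' `
    -ResourceName 'mydevtestlab' -ApiVersion 2016-05-15 |
    ForEach-Object { $_.Properties } |
    Select-Object displayName, uri, sourceType, status
```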
+
+## Add an artifact repository by using an ARM template
+
+ARM templates are JSON files that describe Azure resources to create. For more information about ARM templates, see [Understand the structure and syntax of ARM templates](../azure-resource-manager/templates/syntax.md).
+
+The following ARM template adds an artifact repository to a lab. The template creates the lab if it doesn't already exist.
+
+### Review the ARM template
+
+The sample template gathers the following information in parameters. Some of the parameters have defaults, but the deployment command must specify the lab name, artifact repository URI, repository type, and repository personal access token.
- Lab name.-- Display name for the artifact repository in the DevTest Labs user interface (UI). The default value is: `Team Repository`.-- URI to the repository (Example: `https://github.com/<myteam>/<nameofrepo>.git` or `"https://MyProject1.visualstudio.com/DefaultCollection/_git/TeamArtifacts"`.-- Branch in the repository that contains artifacts. The default value is: `master`.-- Name of the folder that contains artifacts. The default value is: `/Artifacts`.-- Type of the repository. Allowed values are `VsoGit` or `GitHub`.-- Access token for the repository.-
- ```json
- {
-
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "labName": {
- "type": "string"
- },
- "artifactRepositoryDisplayName": {
- "type": "string",
- "defaultValue": "Team Repository"
- },
- "artifactRepoUri": {
- "type": "string"
- },
- "artifactRepoBranch": {
- "type": "string",
- "defaultValue": "master"
- },
- "artifactRepoFolder": {
- "type": "string",
- "defaultValue": "/Artifacts"
- },
- "artifactRepoType": {
- "type": "string",
- "allowedValues": ["VsoGit", "GitHub"]
- },
- "artifactRepoSecurityToken": {
- "type": "securestring"
- }
+- Display name for the artifact repository in DevTest Labs. The default value is `Team Repository`.
+- URI of the artifact repository, which you copied earlier.
+- Repository branch that contains the artifacts. The default value is `main`.
+- Name of the folder that contains the artifacts. The default value is `/Artifacts`.
+- Repository type. The allowed values are `VsoGit`, for Azure Repos, or `GitHub`.
+- Personal access token for the repository, which you copied earlier.
+
+```json
+{
+
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "labName": {
+ "type": "string"
+ },
+ "artifactRepositoryDisplayName": {
+ "type": "string",
+ "defaultValue": "Team Repository"
+ },
+ "artifactRepoUri": {
+ "type": "string"
},
- "variables": {
- "artifactRepositoryName": "[concat('Repo-', uniqueString(subscription().subscriptionId))]"
+ "artifactRepoBranch": {
+ "type": "string",
+ "defaultValue": "main"
},
- "resources": [{
- "apiVersion": "2016-05-15",
- "type": "Microsoft.DevTestLab/labs",
- "name": "[parameters('labName')]",
- "location": "[resourceGroup().location]",
- "resources": [
- {
- "apiVersion": "2016-05-15",
- "name": "[variables('artifactRepositoryName')]",
- "type": "artifactSources",
- "dependsOn": [
- "[resourceId('Microsoft.DevTestLab/labs', parameters('labName'))]"
- ],
- "properties": {
- "uri": "[parameters('artifactRepoUri')]",
- "folderPath": "[parameters('artifactRepoFolder')]",
- "branchRef": "[parameters('artifactRepoBranch')]",
- "displayName": "[parameters('artifactRepositoryDisplayName')]",
- "securityToken": "[parameters('artifactRepoSecurityToken')]",
- "sourceType": "[parameters('artifactRepoType')]",
- "status": "Enabled"
- }
+ "artifactRepoFolder": {
+ "type": "string",
+ "defaultValue": "/Artifacts"
+ },
+ "artifactRepoType": {
+ "type": "string",
+ "allowedValues": ["VsoGit", "GitHub"]
+ },
+ "artifactRepoSecurityToken": {
+ "type": "securestring"
+ }
+ },
+ "variables": {
+ "artifactRepositoryName": "[concat('Repo-', uniqueString(subscription().subscriptionId))]"
+ },
+ "resources": [{
+ "apiVersion": "2016-05-15",
+ "type": "Microsoft.DevTestLab/labs",
+ "name": "[parameters('labName')]",
+ "location": "[resourceGroup().location]",
+ "resources": [
+ {
+ "apiVersion": "2016-05-15",
+ "name": "[variables('artifactRepositoryName')]",
+ "type": "artifactSources",
+ "dependsOn": [
+ "[resourceId('Microsoft.DevTestLab/labs', parameters('labName'))]"
+ ],
+ "properties": {
+ "uri": "[parameters('artifactRepoUri')]",
+ "folderPath": "[parameters('artifactRepoFolder')]",
+ "branchRef": "[parameters('artifactRepoBranch')]",
+ "displayName": "[parameters('artifactRepositoryDisplayName')]",
+ "securityToken": "[parameters('artifactRepoSecurityToken')]",
+ "sourceType": "[parameters('artifactRepoType')]",
+ "status": "Enabled"
}
- ]
- }
- ]
- }
- ```
-
+ }
+ ]
+ }
+ ]
+}
+```
### Deploy the template
-There are a few ways to deploy the template to Azure and have the resource created, if it doesn't exist, or updated, if it does exist. For details, see the following articles:
-- [Deploy resources with Resource Manager templates and Azure PowerShell](../azure-resource-manager/templates/deploy-powershell.md)-- [Deploy resources with Resource Manager templates and Azure CLI](../azure-resource-manager/templates/deploy-cli.md)-- [Deploy resources with Resource Manager templates and Azure portal](../azure-resource-manager/templates/deploy-portal.md)-- [Deploy resources with Resource Manager templates and Resource Manager REST API](../azure-resource-manager/templates/deploy-rest.md)
+There are several ways to deploy ARM templates to create or update Azure resources. For information and instructions, see the following articles:
-Let's go ahead and see how to deploy the template in PowerShell. Cmdlets used to deploy the template are context-specific, so current tenant and current subscription are used. Use [Set-AzContext](/powershell/module/az.accounts/set-azcontext) before deploying the template, if needed, to change context.
+- [Deploy resources with ARM templates and Azure PowerShell](../azure-resource-manager/templates/deploy-powershell.md)
+- [Deploy resources with ARM templates and Azure CLI](../azure-resource-manager/templates/deploy-cli.md)
+- [Deploy resources with ARM templates in the Azure portal](../azure-resource-manager/templates/deploy-portal.md)
+- [Deploy resources with ARM templates and Resource Manager REST API](../azure-resource-manager/templates/deploy-rest.md)
-First, create a resource group using [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). If the resource group you want to use already exists, skip this step.
+For this example, deploy the template by using Azure PowerShell.
-```powershell
-New-AzResourceGroup -Name MyLabResourceGroup1 -Location westus
-```
+> [!NOTE]
+> The cmdlets that deploy the template are context-specific, so they use the current tenant and subscription. If you need to change the context, use [Set-AzContext](/powershell/module/az.accounts/set-azcontext) before you deploy the template.
-Next, create a deployment to the resource group using [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment). This cmdlet applies the resource changes to Azure. Several resource deployments can be made to any given resource group. If you're deploying several times to the same resource group, make sure the name of each deployment is unique.
+1. Create a resource group by using [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). If the resource group you want to use already exists, skip this step.
-```powershell
-New-AzResourceGroupDeployment `
- -Name MyLabResourceGroup-Deployment1 `
- -ResourceGroupName MyLabResourceGroup1 `
- -TemplateFile azuredeploy.json `
- -TemplateParameterFile azuredeploy.parameters.json
-```
+ ```powershell
+ New-AzResourceGroup -Name MyLabResourceGroup1 -Location westus
+ ```
-After New-AzResourceGroupDeployment run successfully, the command outputs important information like the provisioning state (should be succeeded) and any outputs for the template.
+1. Create a deployment to the resource group by using [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment). You can make several resource deployments to the same resource group. If you're deploying several times to the same resource group, make sure each deployment name is unique.
-## Use Azure PowerShell
-This section provides you a sample PowerShell script that can be used to add an artifact repository to a lab. If you don't have Azure PowerShell, see [How to install and configure Azure PowerShell](/powershell/azure/) for detailed instructions to install it.
+ ```powershell
+ New-AzResourceGroupDeployment `
+ -Name MyLabResourceGroup-Deployment1 `
+ -ResourceGroupName MyLabResourceGroup1 `
+ -TemplateFile azuredeploy.json `
+ -TemplateParameterFile azuredeploy.parameters.json
+ ```
+
+After `New-AzResourceGroupDeployment` runs successfully, the output shows important information like the provisioning state, which should be `succeeded`, and any outputs for the template.
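The deployment command above references an *azuredeploy.parameters.json* file. If you're creating that file yourself, a minimal sketch might look like the following; all values are placeholders, and `artifactRepoSecurityToken` is the personal access token you copied earlier.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "labName": { "value": "mydevtestlab" },
    "artifactRepoUri": { "value": "https://github.com/myteam/myteamrepository.git" },
    "artifactRepoType": { "value": "GitHub" },
    "artifactRepoSecurityToken": { "value": "<personal-access-token>" }
  }
}
```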
-### Full script
-Here's the full script, including some verbose messages and comments:
+## Add an artifact repository by using Azure PowerShell
-**New-DevTestLabArtifactRepository.ps1**:
+The following sample PowerShell script, *New-DevTestLabArtifactRepository.ps1*, adds an artifact repository to a lab. The full script includes some verbose messages and comments.
```powershell
The name of the lab.
The name of the resource group that contains the lab. .PARAMETER ArtifactRepositoryName
-Name for the new artifact repository.
-Script creates a random name for the repository if it is not specified.
+Name for the new artifact repository. The script creates a random name for the repository if not specified.
.PARAMETER ArtifactRepositoryDisplayName Display name for the artifact repository.
-This is the name that shows up in the Azure portal (https://portal.azure.com) when viewing all the artifact repositories for a lab.
+This name appears in the list of artifact repositories for a lab.
.PARAMETER RepositoryUri
-Uri to the repository.
+Uri to the artifact repository.
.PARAMETER RepositoryBranch
-Branch in which artifact files can be found. Defaults to 'master'.
+Branch that contains the artifact files. Defaults to 'main'.
.PARAMETER FolderPath
-Folder under which artifacts can be found. Defaults to '/Artifacts'
+Folder that contains the artifact files. Defaults to '/Artifacts'
.PARAMETER PersonalAccessToken
-Security token for access to GitHub or VSOGit repository.
-See https://azure.microsoft.com/documentation/articles/devtest-lab-add-artifact-repo/ for instructions to get personal access token
+Personal access token for the GitHub or Azure Repos repository.
.PARAMETER SourceType
-Whether artifact is VSOGit or GitHub repository.
+Whether the artifact repository is a VSOGit (Azure Repos) or GitHub repository.
.EXAMPLE Set-AzContext -SubscriptionId 11111111-1111-1111-1111-111111111111 .\New-DevTestLabArtifactRepository.ps1 -LabName "mydevtestlab" -LabResourceGroupName "mydtlrg" -ArtifactRepositoryName "MyTeam Repository" -RepositoryUri "https://github.com/<myteam>/<nameofrepo>.git" -PersonalAccessToken "1111...." -SourceType "GitHub" .NOTES
-Script uses the current Az context. To set the context, use the Set-AzContext cmdlet
+The script uses the current Azure context. To set the context, use Set-AzContext.
#>
Param(
[Parameter(Mandatory=$true)] $RepositoryUri,
- $RepositoryBranch = 'master',
+ $RepositoryBranch = 'main',
$FolderPath = '/Artifacts', [Parameter(Mandatory=$true)]
Param(
$SourceType ) -
-#Set artifact repository internal name,
-# if not set by user.
+# Set artifact repository internal name if not specified.
if ($ArtifactRepositoryName -eq $null){ $ArtifactRepositoryName = "PrivateRepo" + (Get-Random -Maximum 999) }
-# Sign in to Azure
+# Sign in to Azure.
Connect-AzAccount
-#Get Lab Resource
+#Get Lab Resource.
$LabResource = Get-AzResource -ResourceType 'Microsoft.DevTestLab/labs' -ResourceName $LabName -ResourceGroupName $LabResourceGroupName Write-Verbose "Lab Name: $($LabResource.Name)"
Write-Verbose "Lab Resource Location: $($LabResource.Location)"
Write-Verbose "Artifact Repository Internal Name: $ArtifactRepositoryName"
-#Prepare properties object for call to New-AzResource
+#Prepare properties object for the call to New-AzResource.
$propertiesObject = @{ uri = $RepositoryUri; folderPath = $FolderPath;
$propertiesObject = @{
Write-Verbose "Properties to be passed to New-AzResource:$($propertiesObject | Out-String)"
-#Resource will be added to current subscription.
+#Add resource to the current subscription.
$resourcetype = 'Microsoft.DevTestLab/labs/artifactSources' $resourceName = $LabName + '/' + $ArtifactRepositoryName Write-Verbose "Az ResourceType: $resourcetype"
Write-Verbose "Az ResourceName: $resourceName"
Write-Verbose "Creating artifact repository '$ArtifactRepositoryDisplayName'..." $result = New-AzResource -Location $LabResource.Location -ResourceGroupName $LabResource.ResourceGroupName -properties $propertiesObject -ResourceType $resourcetype -ResourceName $resourceName -ApiVersion 2016-05-15 -Force - #Alternate implementation: # Use resourceId rather than resourcetype and resourcename parameters.
-# Using resourceId allows you to specify the $SubscriptionId rather than using the
+# Using resourceId lets you specify the $SubscriptionId rather than using the
# subscription id of Get-AzContext. #$resourceId = "/subscriptions/$SubscriptionId/resourceGroups/$($LabResource.ResourceGroupName)/providers/Microsoft.DevTestLab/labs/$LabName/artifactSources/$ArtifactRepositoryName" #$result = New-AzResource -properties $propertiesObject -ResourceId $resourceId -ApiVersion 2016-05-15 -Force
-# Check the result
+# Check the result.
if ($result.Properties.ProvisioningState -eq "Succeeded") { Write-Verbose ("Successfully added artifact repository source '$ArtifactRepositoryDisplayName'") }
else {
Write-Error ("Error adding artifact repository source '$ArtifactRepositoryDisplayName'") }
-#Return the newly created resource so it may be used in subsequent scripts
+#Return the newly created resource to use in later scripts.
return $result ```
-### Run the PowerShell script
-The following example shows you how to run the script:
-
-```powershell
-Set-AzContext -SubscriptionId <Your Azure subscription ID>
-
-.\New-DevTestLabArtifactRepository.ps1 -LabName "mydevtestlab" -LabResourceGroupName "mydtlrg" -ArtifactRepositoryName "MyTeam Repository" -RepositoryUri "https://github.com/<myteam>/<nameofrepo>.git" -PersonalAccessToken "1111...." -SourceType "GitHub"
-```
-- ### Parameters
-The sample PowerShell script in this article takes the following parameters:
+
+The PowerShell script takes the following parameters:
| Parameter | Description | | | -- |
-| LabName | The name of the lab. |
-| ArtifactRepositoryName | Name for the new artifact repository. The script creates a random name for the repository if it isn't specified. |
-| ArtifactRepositoryDisplayName | Display name for the artifact repository. This is the name that shows up in the Azure portal (https://portal.azure.com) when viewing all the artifact repositories for a lab. |
-| RepositoryUri | Uri to the repository. Examples: `https://github.com/<myteam>/<nameofrepo>.git` or `"https://MyProject1.visualstudio.com/DefaultCollection/_git/TeamArtifacts"`.|
-| RepositoryBranch | Branch in which artifact files can be found. Defaults to `master`. |
-| FolderPath | Folder under which artifacts can be found. Defaults to '/Artifacts' |
-| PersonalAccessToken | Security token for accessing the GitHub or VSOGit repository. See the prerequisites section for instructions to get personal access token |
-| SourceType | Whether artifact is VSOGit or GitHub repository |
+| `LabName` | The name of the lab. |
+| `ArtifactRepositoryName` | Name for the new artifact repository. The script creates a random name for the repository if it isn't specified. |
+| `ArtifactRepositoryDisplayName` | Display name that appears in the lab's artifact repository list. |
+| `RepositoryUri` | URI of the artifact repository, which you copied earlier.
+| `RepositoryBranch` | Repository branch that contains the artifacts. The default value is `main`.|
| `FolderPath` | Folder that contains the artifacts. The default value is `/Artifacts`.|
+| `PersonalAccessToken` | Security token for accessing the repository, which you copied earlier.|
+| `SourceType` | Whether the artifact repository is a VSOGit (Azure Repos) or GitHub repository.|
-The repository itself need an internal name for identification, which is different than the display name that is seen in the Azure portal. You don't see the internal name using the Azure portal, but you see it when using Azure REST APIs or Azure PowerShell. The script provides a name, if one is not specified by the user of our script.
+
+The repository needs an internal name for identification, which is different than the display name in the Azure portal. You don't see the internal name when using the Azure portal, but you see it when using Azure REST APIs or Azure PowerShell. The script creates a random name if the deployment command doesn't specify one.
```powershell #Set artifact repository name, if not set by user
if ($ArtifactRepositoryName -eq $null){
} ```
-### PowerShell commands used in the script
+### PowerShell commands
-| PowerShell command | Notes |
-| | -- |
-| [Get-AzResource](/powershell/module/az.resources/get-azresource) | This command is used to get details about the lab such as its location. |
-| [New-AzResource](/powershell/module/az.resources/new-azresource) | There's no specific command for adding artifact repositories. The generic [New-AzResource](/powershell/module/az.resources/new-azresource) cmdlet does the job. This cmdlet needs either the **ResourceId** or the **ResourceName** and **ResourceType** pair to know the type of resource to create. This sample script uses the resource name and resource type pair. <br/><br/>Notice that you're creating the artifact repository source in the same location and under the same resource group as the lab.|
+The script uses the following PowerShell commands:
-The script adds a new resource to the current subscription. Use [Get-AzContext](/powershell/module/az.accounts/get-azcontext) to see this information. Use [Set-AzContext](/powershell/module/az.accounts/set-azcontext) to set the current tenant and subscription.
+| Command | Notes |
+| | -- |
+| [Get-AzResource](/powershell/module/az.resources/get-azresource) | Gets details about the lab, such as its location. You create the artifact repository source in the same location and under the same resource group as the lab.|
+| [New-AzResource](/powershell/module/az.resources/new-azresource) | Adds the Azure resource. There's no specific command for adding artifact repositories. This cmdlet needs either the `ResourceId` or the `ResourceName` and `ResourceType` pair to know the type of resource to create. The current script uses the `ResourceName` and `ResourceType` pair. |
-The best way to discover the resource name and resource type information is to use the [Test Drive Azure REST APIs](https://azure.github.io/projects/apis/) website. Check out the [DevTest Labs – 2016-05-15](https://aka.ms/dtlrestapis) provider to see the available REST APIs for the DevTest Labs provider. The script uses the following resource ID.
+A good way to discover resource name and resource type information is to use the [Azure REST API Browser](https://azure.github.io/projects/apis/) website. DevTest Labs [Artifact Sources](/rest/api/dtl/artifact-sources) shows REST APIs for creating and managing DevTest Labs artifact sources. The current script uses the following resource ID:
```powershell
-"/subscriptions/$SubscriptionId/resourceGroups/$($LabResource.ResourceGroupName)/providers/Microsoft.DevTestLab/labs/$LabName/artifactSources/$ArtifactRepositoryName"
+https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DevTestLab/labs/{labName}/artifactsources/{name}
```
-The resource type is everything listed after 'providers' in the URI, except for items listed in the curly brackets. The resource name is everything seen in the curly brackets. If more than one item is expected for the resource name, separate each item with a slash as we've done.
+The resource type is everything listed after `providers` in the URI, except for items in curly brackets. The resource name is everything in the curly brackets. If you use more than one item for the resource name, separate each item with a slash:
```powershell $resourcetype = 'Microsoft.DevTestLab/labs/artifactSources' $resourceName = $LabName + '/' + $ArtifactRepositoryName ```
+### Run the PowerShell script
+
+Run the PowerShell script, substituting your own values for the example values in `LabName`, `LabResourceGroupName`, `ArtifactRepositoryName`, `RepositoryUri`, `PersonalAccessToken`, and `SourceType`:
+```powershell
+Set-AzContext -SubscriptionId <Your Azure subscription ID>
+
+.\New-DevTestLabArtifactRepository.ps1 -LabName "mydevtestlab" -LabResourceGroupName "mydtlrg" -ArtifactRepositoryName "myteamrepository" -RepositoryUri "https://github.com/myteam/myteamrepository.git" -PersonalAccessToken "1111...." -SourceType "GitHub"
+```
## Next steps-- [Specify mandatory artifacts for your lab in Azure DevTest Labs](devtest-lab-mandatory-artifacts.md)-- [Create custom artifacts for your DevTest Labs virtual machine](devtest-lab-artifact-author.md)
+- [Specify mandatory artifacts for DevTest Labs VMs](devtest-lab-mandatory-artifacts.md)
- [Diagnose artifact failures in the lab](devtest-lab-troubleshoot-artifact-failure.md)
devtest-labs Add Artifact Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/add-artifact-vm.md
Title: Add an artifact to a VM
-description: Learn how to add an artifact to a virtual machine in a lab in Azure DevTest Labs
+description: Learn how to add an artifact to a virtual machine in a lab in Azure DevTest Labs.
Previously updated : 06/26/2020 Last updated : 01/11/2022
-# Add an artifact to a VM
-While creating a VM, you can add existing artifacts to it. These artifacts can be from either the [public DevTest Labs Git repository](https://github.com/Azure/azure-devtestlab/tree/master/Artifacts) or from your own Git repository. This article shows you how to add artifacts in the Azure portal, and by using Azure PowerShell.
+# Add artifacts to DevTest Labs VMs
-Azure DevTest Labs *artifacts* let you specify *actions* that are performed when the VM is provisioned, such as running Windows PowerShell scripts, running Bash commands, and installing software. Artifact *parameters* let you customize the artifact for your particular scenario.
+This article describes how to add *artifacts* to Azure DevTest Labs virtual machines (VMs). Artifacts specify actions to take to provision a VM, such as running Windows PowerShell scripts, running Bash commands, or installing software. You can use parameters to customize the artifacts for your own needs.
-To learn about how to create custom artifacts, see the article: [Create custom artifacts](devtest-lab-artifact-author.md).
+DevTest Labs artifacts can come from the [public DevTest Labs Git repository](https://github.com/Azure/azure-devtestlab/tree/master/Artifacts) or from private Git repositories. To create your own custom artifacts and store them in a repository, see [Create custom artifacts](devtest-lab-artifact-author.md). To add your artifact repository to a lab so lab users can access the custom artifacts, see [Add an artifact repository to your lab](add-artifact-repository.md).
+
+DevTest Labs lab owners can specify mandatory artifacts to be installed on all lab VMs at creation. For more information, see [Specify mandatory artifacts for DevTest Labs VMs](devtest-lab-mandatory-artifacts.md).
+
+You can't change or remove mandatory artifacts at VM creation time, but you can add any available individual artifacts. This article describes how to add available artifacts to VMs by using the Azure portal or Azure PowerShell.
+
+## Add artifacts to VMs from the Azure portal
+
+You can add artifacts during VM creation, or add artifacts to existing lab VMs.
+
+To add artifacts during VM creation:
+
+1. On the lab's home page, select **Add**.
+1. On the **Choose a base** page, select the type of VM you want.
+1. On the **Create lab resource** screen, select **Add or Remove Artifacts**.
+1. On the **Add artifacts** page, select the arrow next to each artifact you want to add to the VM.
+1. On each **Add artifact** pane, enter any required and optional parameter values, and then select **OK**. The artifact appears under **Selected artifacts**, and the number of configured artifacts updates.
+
+ ![Screenshot that shows adding artifacts on the Add artifacts screen.](media/add-artifact-vm/devtestlab-add-artifacts-blade-selected-artifacts.png)
+
+1. You can change the artifacts after adding them.
+
+ - By default, artifacts install in the order you add them. To rearrange the order, select the ellipsis **...** next to the artifact in the **Selected artifacts** list, and select **Move up**, **Move down**, **Move to top**, or **Move to bottom**.
+ - To edit the artifact's parameters, select **Edit** to reopen the **Add artifact** pane.
+ - To delete the artifact from the **Selected artifacts** list, select **Delete**.
+
+1. When you're done adding and arranging artifacts, select **OK**.
+1. The **Create lab resource** screen shows the number of artifacts added. To add, edit, rearrange, or delete the artifacts before you create the VM, select **Add or Remove Artifacts** again.
+
+After you create the VM, the installed artifacts appear on the VM's **Artifacts** page. To see details about each artifact's installation, select the artifact name.
+
+To install artifacts on an existing VM:
+
+1. From the lab's home page, select the VM from the **My virtual machines** list.
+1. On the VM page, select **Artifacts** in the top menu bar or left navigation.
+1. On the **Artifacts** page, select **Apply artifacts**.
+
+ ![Screenshot that shows the Artifacts screen for an existing V M.](media/add-artifact-vm/artifacts.png)
+
+1. On the **Add artifacts** page, select and configure artifacts the same as for a new VM.
+1. When you're done adding artifacts, select **Install**. The artifacts install on the VM immediately.
+
+## Add artifacts to VMs by using Azure PowerShell
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-## Use Azure portal
-1. Sign in to the [Azure portal](https://go.microsoft.com/fwlink/p/?LinkID=525040).
-1. Select **All Services**, and then select **DevTest Labs** from the list.
-1. From the list of labs, select the lab containing the VM with which you want to work.
-1. Select **My virtual machines**.
-1. Select the desired VM.
-1. Select **Manage artifacts**.
-1. Select **Apply artifacts**.
-1. On the **Apply artifacts** pane, select the artifact you wish to add to the VM.
-1. On the **Add artifact** pane, enter the required parameter values and any optional parameters that you need.
-1. Select **Add** to add the artifact and return to the **Apply artifacts** pane.
-1. Continue adding artifacts as needed for your VM.
-1. Once you've added your artifacts, you can [change the order in which the artifacts are run](#change-the-order-in-which-artifacts-are-run). You can also go back to [view or modify an artifact](#view-or-modify-an-artifact).
-1. When you're done adding artifacts, select **Apply**
-
-### Change the order in which artifacts are run
-By default, the actions of the artifacts are executed in the order in which they are added to the VM.
-The following steps illustrate how to change the order in which the artifacts are run.
-
-1. At the top of the **Apply artifacts** pane, select the link indicating the number of artifacts that have been added to the VM.
-
- ![Number of artifacts added to VM](./media/devtest-lab-add-vm-with-artifacts/devtestlab-add-artifacts-blade-selected-artifacts.png)
-1. On the **Selected artifacts** pane, drag and drop the artifacts into the desired order. If you have trouble dragging the artifact, make sure that you are dragging from the left side of the artifact.
-1. Select **OK** when done.
-
-### View or modify an artifact
-The following steps illustrate how to view or modify the parameters of an artifact:
-
-1. At the top of the **Apply artifacts** pane, select the link indicating the number of artifacts that have been added to the VM.
-
- ![Number of artifacts added to VM](./media/devtest-lab-add-vm-with-artifacts/devtestlab-add-artifacts-blade-selected-artifacts.png)
-1. On the **Selected artifacts** pane, select the artifact that you want to view or edit.
-1. On the **Add artifact** pane, make any needed changes, and select **OK** to close the **Add artifact** pane.
-1. Select **OK** to close the **Selected artifacts** pane.
-
-## Use PowerShell
-The following script applies the specified artifact to the specified VM. The [Invoke-AzResourceAction](/powershell/module/az.resources/invoke-azresourceaction) command is the one that performs the operation.
+The following PowerShell script applies an artifact to a VM by using the [Invoke-AzResourceAction](/powershell/module/az.resources/invoke-azresourceaction) cmdlet.
```powershell #Requires -Module Az.Resources
param
( [Parameter(Mandatory=$true, HelpMessage="The ID of the subscription that contains the lab")] [string] $SubscriptionId,
-[Parameter(Mandatory=$true, HelpMessage="The name of the lab containing the virtual machine")]
+[Parameter(Mandatory=$true, HelpMessage="The name of the lab that has the VM")]
[string] $DevTestLabName,
-[Parameter(Mandatory=$true, HelpMessage="The name of the virtual machine")]
+[Parameter(Mandatory=$true, HelpMessage="The name of the VM")]
[string] $VirtualMachineName, [Parameter(Mandatory=$true, HelpMessage="The repository where the artifact is stored")] [string] $RepositoryName,
-[Parameter(Mandatory=$true, HelpMessage="The artifact to apply to the virtual machine")]
+[Parameter(Mandatory=$true, HelpMessage="The artifact to apply to the VM")]
[string] $ArtifactName, [Parameter(ValueFromRemainingArguments=$true)] $Params
Set-AzContext -SubscriptionId $SubscriptionId | Out-Null
$resourceGroupName = (Get-AzResource -ResourceType 'Microsoft.DevTestLab/labs' | Where-Object { $_.Name -eq $DevTestLabName}).ResourceGroupName if ($resourceGroupName -eq $null) { throw "Unable to find lab $DevTestLabName in subscription $SubscriptionId." }
-# Get the internal repo name
+# Get the internal repository name
$repository = Get-AzResource -ResourceGroupName $resourceGroupName ` -ResourceType 'Microsoft.DevTestLab/labs/artifactsources' ` -ResourceName $DevTestLabName `
$template = Get-AzResource -ResourceGroupName $resourceGroupName `
if ($template -eq $null) { throw "Unable to find template $ArtifactName in lab $DevTestLabName." }
-# Find the virtual machine in Azure
+# Find the VM in Azure
$FullVMId = "/subscriptions/$SubscriptionId/resourceGroups/$resourceGroupName` /providers/Microsoft.DevTestLab/labs/$DevTestLabName/virtualmachines/$virtualMachineName"
$FullArtifactId = "/subscriptions/$SubscriptionId/resourceGroups/$resourceGroupN
/providers/Microsoft.DevTestLab/labs/$DevTestLabName/artifactSources/$($repository.Name)` /artifacts/$($template.Name)"
-# Handle the inputted parameters to pass through
+# Handle the input parameters to pass through
$artifactParameters = @()
-# Fill artifact parameter with the additional -param_ data and strip off the -param_
+# Fill the artifact parameter with the additional -param_ data and strip off the -param_
$Params | ForEach-Object { if ($_ -match '^-param_(.*)') { $name = $_.TrimStart('^-param_')
$Params | ForEach-Object {
} }
-# Create structure for the artifact data to be passed to the action
+# Create a structure to pass the artifact data to the action
$prop = @{ artifacts = @(
artifacts = @(
) }
-# Check the VM
+# Apply the artifact
if ($virtualMachine -ne $null) { # Apply the artifact by name to the virtual machine $status = Invoke-AzResourceAction -Parameters $prop -ResourceId $virtualMachine.ResourceId -Action "applyArtifacts" -ApiVersion 2016-05-15 -Force
if ($virtualMachine -ne $null) {
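# Example invocation of this script. All values are placeholders, and the script file name
# (Apply-DevTestLabArtifact.ps1) is only an assumption - use whatever name you saved it under:
#
#   .\Apply-DevTestLabArtifact.ps1 -SubscriptionId "11111111-1111-1111-1111-111111111111" `
#       -DevTestLabName "mydevtestlab" -VirtualMachineName "myvm" `
#       -RepositoryName "Public Artifact Repo" -ArtifactName "windows-7zip"
#
# Artifact-specific parameters pass through with the -param_ prefix, for example:
#   -param_<parameterName> "<value>"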
``` ## Next steps
-See the following articles on artifacts:
-- [Specify mandatory artifacts for your lab](devtest-lab-mandatory-artifacts.md)
+- [Specify mandatory artifacts](devtest-lab-mandatory-artifacts.md)
- [Create custom artifacts](devtest-lab-artifact-author.md) - [Add an artifact repository to a lab](devtest-lab-artifact-author.md) - [Diagnose artifact failures](devtest-lab-troubleshoot-artifact-failure.md)
devtest-labs Devtest Lab Add Artifact Repo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-add-artifact-repo.md
- Title: Add a Git repository to a lab
-description: Learn how to add a GitHub or Azure DevOps Services Git repository for your custom artifacts source in Azure DevTest Labs.
- Previously updated : 06/26/2020--
-# Add a Git repository to store custom artifacts and Resource Manager templates
-
-You can [create custom artifacts](devtest-lab-artifact-author.md) for the VMs in your lab, or [use Azure Resource Manager templates to create a custom test environment](devtest-lab-create-environment-from-arm.md). You must add a private Git repository for the artifacts or Resource Manager templates that your team creates. The repository can be hosted on [GitHub](https://github.com) or on [Azure DevOps Services](https://visualstudio.com).
-
-We offer a [GitHub repository of artifacts](https://github.com/Azure/azure-devtestlab/tree/master/Artifacts) that you can deploy as-is, or you can customize them for your labs. When you customize or create an artifact, you can't store the artifact in the public repository. You must create your own private repo for custom artifacts and for artifacts that you create.
-
-When you create a VM, you can save the Resource Manager template, customize it if you want, and then use it later to create more VMs. You must create your own private repository to store your custom Resource Manager templates.
-
-* To learn how to create a GitHub repository, see [GitHub Bootcamp](https://help.github.com/categories/bootcamp/).
-* To learn how to create an Azure DevOps Services project that has a Git repository, see [Connect to Azure DevOps Services](https://azure.microsoft.com/services/devops/).
-
-The following figure is an example of how a repository that has artifacts might look in GitHub:
-
-![Sample GitHub artifacts repo](./media/devtest-lab-add-repo/devtestlab-github-artifact-repo-home.png)
-
-## Get the repository information and credentials
-To add a repository to your lab, first, get key information from your repository. The following sections describe how to get required information for repositories that are hosted on GitHub or Azure DevOps Services.
-
-### Get the GitHub repository clone URL and personal access token
-
-1. Go to the home page of the GitHub repository that contains the artifact or Resource Manager template definitions.
-2. Select **Clone or download**.
-3. To copy the URL to the clipboard, select the **HTTPS clone url** button. Save the URL for later use.
-4. In the upper-right corner of GitHub, select the profile image, and then select **Settings**.
-5. In the **Personal settings** menu on the left, select **Personal access tokens**.
-6. Select **Generate new token**.
-7. On the **New personal access token** page, under **Token description**, enter a description. Accept the default items under **Select scopes**, and then select **Generate Token**.
-8. Save the generated token. You use the token later.
-9. Close GitHub.
-10. Continue to the [Connect your lab to the repository](#connect-your-lab-to-the-repository) section.
-
-### Get the Azure Repos clone URL and personal access token
-
-1. Go to the home page of your team collection (for example, `https://contoso-web-team.visualstudio.com`), and then select your project.
-2. On the project home page, select **Code**.
-3. To view the clone URL, on the project **Code** page, select **Clone**.
-4. Save the URL. You use the URL later.
-5. To create a personal access token, in the user account drop-down menu, select **My profile**.
-6. On the profile information page, select **Security**.
-7. On the **Security** tab, select **Add**.
-8. On the **Create a personal access token** page:
- 1. Enter a **Description** for the token.
- 2. In the **Expires In** list, select **180 days**.
- 3. In the **Accounts** list, select **All accessible accounts**.
- 4. Select the **Read Only** option.
- 5. Select **Create Token**.
-9. The new token appears in the **Personal Access Tokens** list. Select **Copy Token**, and then save the token value for later use.
-10. Continue to the [Connect your lab to the repository](#connect-your-lab-to-the-repository) section.
-
-## Connect your lab to the repository
-1. Sign in to the [Azure portal](https://go.microsoft.com/fwlink/p/?LinkID=525040).
-2. Select **More Services**, and then select **DevTest Labs** from the list of services.
-3. From the list of labs, select your lab.
-4. Select **Configuration and policies** > **Repositories** > **+ Add**.
-
- ![The Add repository button](./media/devtest-lab-add-repo/devtestlab-add-repo.png)
-5. On the second **Repositories** page, specify the following information:
- 1. **Name**. Enter a name for the repository.
- 2. **Git Clone Url**. Enter the Git HTTPS clone URL that you copied earlier from either GitHub or Azure DevOps Services.
- 3. **Branch**. To get your definitions, enter the branch.
- 4. **Personal Access Token**. Enter the personal access token that you got earlier from either GitHub or Azure DevOps Services.
- 5. **Folder Paths**. Enter at least one folder path relative to the clone URL that contains your artifact or Resource Manager template definitions. When you specify a subdirectory, make sure you include the forward slash in the folder path.
-
- ![Repositories area](./media/devtest-lab-add-repo/devtestlab-repo-blade.png)
-6. Select **Save**.
-
-### Related blog posts
-* [Troubleshoot failing artifacts in DevTest Labs](devtest-lab-troubleshoot-artifact-failure.md)
-* [Join a VM to an existing Active Directory domain by using a Resource Manager template in DevTest Labs](https://www.visualstudiogeeks.com/blog/DevOps/Join-a-VM-to-existing-AD-domain-using-ARM-template-AzureDevTestLabs)
--
-## Next steps
-After you have created your private Git repository, you can do one or both of the following, depending on your needs:
-* Store your [custom artifacts](devtest-lab-artifact-author.md). You can use them later to create new VMs.
-* [Create multi-VM environments and PaaS resources by using Resource Manager templates](devtest-lab-create-environment-from-arm.md). Then, you can store the templates in your private repo.
-
-When you create a VM, you can verify that the artifacts or templates are added to your Git repository. They are immediately available in the list of artifacts or templates. The name of your private repo is shown in the column that specifies the source.
devtest-labs Devtest Lab Artifact Author https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-artifact-author.md
Title: Create custom artifacts for your Azure DevTest Labs virtual machine
-description: Learn how to create and use artifacts to deploy and set up applications after you provision a virtual machine.
+ Title: Create custom artifacts for virtual machines
+description: Learn how to create and use artifacts to deploy and set up applications on DevTest Labs virtual machines.
Previously updated : 06/26/2020 Last updated : 01/11/2022
-# Create custom artifacts for your DevTest Labs virtual machine
+# Create custom artifacts for DevTest Labs
-Watch the following video for an overview of the steps described in this article:
+This article describes how to create custom artifact files for Azure DevTest Labs virtual machines (VMs). DevTest Labs artifacts specify actions to take to provision a VM. An artifact consists of an artifact definition file and other script files that you store in a folder in a Git repository.
-> [!VIDEO https://channel9.msdn.com/Blogs/Azure/how-to-author-custom-artifacts/player]
->
->
+- For information about adding your artifact repositories to labs, see [Add an artifact repository to your lab](add-artifact-repository.md).
+- For information about adding the artifacts you create to VMs, see [Add artifacts to DevTest Labs VMs](add-artifact-vm.md).
+- For information about specifying mandatory artifacts to be added to all lab VMs, see [Specify mandatory artifacts for DevTest Labs VMs](devtest-lab-mandatory-artifacts.md).
-## Overview
-You can use *artifacts* to deploy and set up your application after you provision a VM. An artifact consists of an artifact definition file and other script files that are stored in a folder in a Git repository. Artifact definition files consist of JSON expressions that specify what you want to install on a VM. For example, you can define the name of an artifact, a command to run, and available parameters for the command. You can refer to other script files within the artifact definition file by name.
+## Artifact definition files
-## Artifact definition file format
-The following example shows the sections that make up the basic structure of a definition file:
+Artifact definition files are JSON expressions that specify what you want to install on a VM. The files define the name of an artifact, a command to run, and available parameters for the command. You can refer to other script files by name in the artifact definition file.
+
+The following example shows the sections that make up the basic structure of an *artifactfile.json* artifact definition file:
```json {
The following example shows the sections that make up the basic structure of a d
} ```
-| Element name | Required? | Description |
-| | | |
-| `$schema` |No |Location of the JSON schema file. The JSON schema file can help you test the validity of the definition file. |
-| `title` |Yes |Name of the artifact displayed in the lab. |
-| `description` |Yes |Description of the artifact displayed in the lab. |
-| `iconUri` |No |URI of the icon displayed in the lab. |
-| `targetOsType` |Yes |Operating system of the VM where the artifact is installed. Supported options are Windows and Linux. |
-| `parameters` |No |Values that are provided when the artifact install command is run on a machine. Parameters help you customize your artifact. |
-| `runCommand` |Yes |Artifact install command that is executed on a VM. |
+| Element name | Description |
+| | |
+| `$schema` |Location of the JSON schema file. The JSON schema file can help you test the validity of the definition file.|
+| `title` |Name of the artifact to display in the lab. **Required.**|
+| `description` |Description of the artifact to display in the lab. **Required.**|
+| `iconUri` |URI of the artifact icon to display in the lab.|
+| `targetOsType` |Operating system of the VM to install the artifact on. Supported values: `Windows`, `Linux`. **Required.**|
+| `parameters` |Values to customize the artifact when installing on the VM.|
+| `runCommand` |The artifact install command to execute on the VM. **Required.**|
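To make the table concrete, here's a minimal sketch of a definition file that runs an install script with one parameter. The schema URI, title, script name, and parameter name are illustrative; the public artifact repository linked earlier contains complete, working definition files you can model yours on.

```json
{
  "$schema": "https://raw.githubusercontent.com/Azure/azure-devtestlab/master/schemas/2016-11-28/dtlArtifacts.json",
  "title": "Install My Tool",
  "description": "Installs My Tool by running install.ps1.",
  "targetOsType": "Windows",
  "parameters": {
    "installPath": {
      "type": "string",
      "displayName": "Install path",
      "description": "Folder to install the tool in."
    }
  },
  "runCommand": {
    "commandToExecute": "[concat('powershell.exe -ExecutionPolicy Bypass -File install.ps1 -installPath ', parameters('installPath'))]"
  }
}
```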
### Artifact parameters
-In the parameters section of the definition file, specify which values a user can input when they install an artifact. You can refer to these values in the artifact install command.
+
+In the parameters section of the definition file, specify the values a user can input when installing an artifact. You can refer to these values in the artifact install command.
To define parameters, use the following structure:
To define parameters, use the following structure:
} ```
-| Element name | Required? | Description |
-| | | |
-| `type` |Yes |Type of parameter value. See the following list for the allowed types. |
-| `displayName` |Yes |Name of the parameter that is displayed to a user in the lab. |
-| `description` |Yes |Description of the parameter that is displayed in the lab. |
+| Element name | Description |
+| | |
+| `type` |Type of parameter value. **Required.**|
+| `displayName` |Name of the parameter to display to the lab user. **Required.**|
+| `description` |Description of the parameter to display to the lab user. **Required.**|
-Allowed types are:
+The allowed parameter value types are:
-* `string` (any valid JSON string)
-* `int` (any valid JSON integer)
-* `bool` (any valid JSON Boolean)
-* `array` (any valid JSON array)
+| Type | Description |
+| | |
+|`string`|Any valid JSON string|
+|`int`|Any valid JSON integer|
+|`bool`|Any valid JSON boolean|
+|`array`|Any valid JSON array|
-## Secrets as secure strings
-Declare secrets as secure strings. Here's the syntax for declaring a secure string parameter within the `parameters` section of the **artifactfile.json** file:
+### Secrets as secure strings
+
+To declare secrets as secure string parameters with masked characters in the UI, use the following syntax in the `parameters` section of the *artifactfile.json* file:
```json
Declare secrets as secure strings. Here's the syntax for declaring a secure stri
}, ```
-For the artifact install command, run the PowerShell script that takes the secure string created by using the ConvertTo-SecureString command.
+The artifact install command runs the PowerShell script, passing in the secure string created by using the `ConvertTo-SecureString` command.
```json "runCommand": {
For the artifact install command, run the PowerShell script that takes the secur
} ```
-For the complete example artifactfile.json and the artifact.ps1 (PowerShell script), see [this sample on GitHub](https://github.com/Azure/azure-devtestlab/tree/master/Artifacts/windows-test-paramtypes).
+Don't log secrets to the console, because the script output is captured for user debugging.
-Another important point to note is to not log secrets to the console, as output is captured for user debugging.
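As a rough sketch of how these pieces fit together in the definition file, a secure string parameter declaration and an install command that wraps it with `ConvertTo-SecureString` might look like the following. The parameter name, script name, `securestring` type token, and exact quoting are illustrative assumptions, not copied from the sample:

```json
"parameters": {
  "mySecret": {
    "type": "securestring",
    "displayName": "My secret",
    "description": "A secret value that's masked in the lab UI."
  }
},
"runCommand": {
  "commandToExecute": "[concat('powershell.exe -ExecutionPolicy bypass -File artifact.ps1 -MySecret (ConvertTo-SecureString ''', parameters('mySecret'), ''' -AsPlainText -Force)')]"
}
```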
+### Artifact expressions and functions
-## Artifact expressions and functions
-You can use expressions and functions to construct the artifact install command. Expressions evaluate when the artifact installs. Expressions can appear anywhere in a JSON string value, and always return another JSON value. Enclose expressions with brackets, [ and ]. If you need to use a literal string that starts with a [ bracket, use two brackets [[.
+You can use expressions and functions to construct the artifact install command. Expressions evaluate when the artifact installs. Expressions can appear anywhere in a JSON string value, and always return another JSON value. Enclose expressions with brackets, \[ \]. If you need to use a literal string that starts with a bracket, use two brackets \[\[.
-Typically, you use expressions with functions to construct a value. Just like in JavaScript, function calls are formatted as **functionName(arg1, arg2, arg3)**.
+You usually use expressions with functions to construct a value. Function calls are formatted as `functionName(arg1, arg2, arg3)`.
-The following list shows common functions:
+Common functions include:
-* **parameters(parameterName)**: Returns a parameter value that is provided when the artifact command is run.
-* **concat(arg1, arg2, arg3,….. )**: Combines multiple string values. This function can take various arguments.
+| Function | Description |
+| | |
+|`parameters(parameterName)`|Returns the parameter value that's provided when the artifact install command runs.|
+|`concat(arg1, arg2, arg3, ...)`|Combines multiple string values. This function can take a variable number of arguments.|
-The following example shows how to use expressions and functions to construct a value:
+The following example uses expressions and functions to construct a value:
```json runCommand": {
The following example shows how to use expressions and functions to construct a
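As a compact illustration of the pattern, `concat` and `parameters` are typically combined inside the install command as shown in the following sketch. The script name and parameter are hypothetical:

```json
"runCommand": {
  "commandToExecute": "[concat('powershell.exe -ExecutionPolicy bypass -File install.ps1 -PackageName ', parameters('packageName'))]"
}
```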
## Create a custom artifact
-1. Install a JSON editor. You need a JSON editor to work with artifact definition files. We recommend using [Visual Studio Code](https://code.visualstudio.com/), which is available for Windows, Linux, and OS X.
-2. Get a sample artifactfile.json definition file. Check out the artifacts created by the DevTest Labs team in our [GitHub repository](https://github.com/Azure/azure-devtestlab). We've created a rich library of artifacts that can help you create your own artifacts. Download an artifact definition file and make changes to it to create your own artifacts.
-3. Make use of IntelliSense. Use IntelliSense to see valid elements that you can use to construct an artifact definition file. You also can see the different options for values of an element. For example, when you edit the **targetOsType** element, IntelliSense shows you two choices, for Windows or Linux.
-4. Store the artifact in the [public Git repository for DevTest Labs](https://github.com/Azure/azure-devtestlab/tree/master/Artifacts) or [your own Git repository](devtest-lab-add-artifact-repo.md). In the public repository, you can view artifacts shared by others that you can use directly or customize them to suit your needs.
-
- 1. Create a separate directory for each artifact. The directory name should be the same as the artifact name.
- 2. Store the artifact definition file (artifactfile.json) in the directory that you created.
- 3. Store the scripts that are referenced from the artifact install command.
-
- Here's an example of how an artifact folder might look:
+To create a custom artifact:
+
+- Install a JSON editor to work with artifact definition files. [Visual Studio Code](https://code.visualstudio.com/) is available for Windows, Linux, and macOS.
+
+- Start with a sample *artifactfile.json* definition file.
+
+ The public [DevTest Labs artifact repository](https://github.com/Azure/azure-devtestlab/tree/master/Artifacts) has a rich library of artifacts you can use. You can download an artifact definition file and customize it to create your own artifacts.
+
+ This article uses the *artifactfile.json* definition file and *artifact.ps1* PowerShell script at [https://github.com/Azure/azure-devtestlab/tree/master/Artifacts/windows-test-paramtypes](https://github.com/Azure/azure-devtestlab/tree/master/Artifacts/windows-test-paramtypes).
+
+- Use IntelliSense to see valid elements and value options that you can use to construct an artifact definition file. For example, when you edit the `targetOsType` element, IntelliSense shows you `Windows` or `Linux` options.
+
+- Store your artifacts in public or private Git artifact repositories.
+
+ - Store each *artifactfile.json* artifact definition file in a separate directory named the same as the artifact name.
+ - Store the scripts that the install command references in the same directory as the artifact definition file.
- ![Screenshot that shows an artifact folder example.](./media/devtest-lab-artifact-author/git-repo.png)
-5. If you're using your own repository to store artifacts, add the repository to the lab by following instructions in the article: [Add a Git repository for artifacts and templates](devtest-lab-add-artifact-repo.md).
+ The following screenshot shows an example artifact folder:
+
+ ![Screenshot that shows an example artifact folder.](./media/devtest-lab-artifact-author/git-repo.png)
-## Related articles
-* [How to diagnose artifact failures in DevTest Labs](devtest-lab-troubleshoot-artifact-failure.md)
-* [Join a VM to an existing Active Directory domain by using a Resource Manager template in DevTest Labs](https://www.visualstudiogeeks.com/blog/DevOps/Join-a-VM-to-existing-AD-domain-using-ARM-template-AzureDevTestLabs)
+- To store your custom artifacts in the public [DevTest Labs artifact repository](https://github.com/Azure/azure-devtestlab/tree/master/Artifacts), open a pull request against the repo.
+- To add your private artifact repository to a lab, see [Add an artifact repository to your lab in DevTest Labs](add-artifact-repository.md).
## Next steps
-* Learn how to [add a Git artifact repository to a lab](devtest-lab-add-artifact-repo.md).
+
+- [Add artifacts to DevTest Labs VMs](add-artifact-vm.md)
+- [Diagnose artifact failures in the lab](devtest-lab-troubleshoot-artifact-failure.md)
+- [Troubleshoot issues when applying artifacts](devtest-lab-troubleshoot-apply-artifacts.md)
devtest-labs Devtest Lab Create Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-create-template.md
Title: Create an Azure DevTest Labs custom image from a VHD file
-description: Learn how to create a custom image in Azure DevTest Labs from a VHD file using the Azure portal
+ Title: Create an Azure DevTest Labs virtual machine custom image from a VHD file
+description: Learn how to use a VHD file to create an Azure DevTest Labs virtual machine custom image in the Azure portal.
Previously updated : 06/26/2020 Last updated : 01/04/2022 # Create a custom image from a VHD file [!INCLUDE [devtest-lab-create-custom-image-from-vhd-selector](../../includes/devtest-lab-create-custom-image-from-vhd-selector.md)] --
-## Step-by-step instructions
-
-The following steps walk you through creating a custom image from a VHD file using the Azure portal:
-
-1. Sign in to the [Azure portal](https://go.microsoft.com/fwlink/p/?LinkID=525040).
+You can create a virtual machine (VM) custom image for Azure DevTest Labs by using a virtual hard disk (VHD) file.
-1. Select **All services**, and then select **DevTest Labs** from the list.
-
-1. From the list of labs, select the desired lab.
-
-1. On the lab's main pane, select **Configuration and policies**.
-1. On the **Configuration and policies** pane, select **Custom images**.
+This article describes how to create a custom image in the Azure portal. You can also [use PowerShell](devtest-lab-create-custom-image-from-vhd-using-powershell.md) to create a custom image.
-1. On the **Custom images** pane, select **+Add**.
- ![Add Custom image](./media/devtest-lab-create-template/add-custom-image.png)
+## Azure portal instructions
-1. Enter the name of the custom image. This name is displayed in the list of base images when creating a VM.
+To create a custom image from a VHD file in DevTest Labs in the Azure portal, follow these steps:
-1. Enter the description of the custom image. This description is displayed in the list of base images when creating a VM.
+1. In the [Azure portal](https://go.microsoft.com/fwlink/p/?LinkID=525040), go to the **Overview** page for the lab that has the uploaded VHD file.
-1. For **OS type**, select either **Windows** or **Linux**.
+1. Select **Configuration and policies** in the left navigation.
- - If you select **Windows**, specify via the checkbox whether *sysprep* has been run on the machine.
- - If you select **Linux**, specify via the checkbox whether *deprovision* has been run on the machine.
+1. On the **Configuration and policies** pane, select **Custom images** under **Virtual machine bases** in the left navigation.
-1. Select a **VHD** from the drop-down menu. This is the VHD that will be used to create the new custom image. If necessary, select to **Upload a VHD using PowerShell**.
+1. On the **Custom images** page, select **Add**.
-1. You can also enter a plan name, plan offer, and plan publisher if the image used to create the custom image is not a licensed image (published by Microsoft).
+ ![Screenshot that shows the Custom image page with the Add button.](media/devtest-lab-create-template/add-custom-image.png)
- - **Plan name:** Enter the name of the Marketplace image (SKU) from which this custom image is created
- - **Plan offer:** Enter the product (offer) of the Marketplace image from which this custom image is created
- - **Plan publisher:** Enter the publisher of the Marketplace image from which this custom image is created
+1. On the **Add custom image** page:
- > [!NOTE]
- > If the image you are using to create a custom image is **not** a licensed image, then these fields are empty and can be filled in if you choose. If the image **is** a licensed image, then the fields are auto populated with the plan information. If you try to change them in this case, a warning message is displayed.
- >
- >
+ - Enter a name for the custom image to display in the list of base images for creating a VM.
+ - Enter an optional description to display in the base image list.
+ - Under **OS type**, select whether the OS for the VHD and custom image is **Windows** or **Linux**.
+ - If you choose **Windows**, select the checkbox if you ran *sysprep* on the machine before creating the VHD file.
+ - If you choose **Linux**, select the checkbox if you ran *deprovision* on the machine before creating the VHD file.
-1. Select **OK** to create the custom image.
+1. Under **VHD**, select the uploaded VHD file for the custom image from the drop-down menu.
-After a few minutes, the custom image is created and is stored inside the lab's storage account. When a lab user wants to create a new VM, the image is available in the list of base images.
+1. Optionally, enter a plan name, plan offer, and plan publisher if the VHD image isn't a licensed image published by Microsoft. If the image is a licensed image, these fields are pre-populated with the plan information.
-![Custom image available in list of base images](./media/devtest-lab-create-template/custom-image-available-as-base.png)
+ - **Plan name:** Name of the non-Microsoft Marketplace image or SKU used to create the VHD image.
+ - **Plan offer:** Product or offer name for the Marketplace image.
+ - **Plan publisher:** Publisher of the Marketplace image.
+1. Select **OK**.
+ ![Screenshot that shows the Add custom image page.](media/devtest-lab-create-template/create-custom-image.png)
-## Related blog posts
+After creation, the custom image is stored in the lab's storage account. The custom image appears in the list of VM base images for the lab. Lab users can create new VMs based on the custom image.
-- [Custom images or formulas?](./devtest-lab-faq.yml#blog-post)-- [Copying Custom Images between Azure DevTest Labs](https://www.visualstudiogeeks.com/blog/DevOps/How-To-Move-CustomImages-VHD-Between-AzureDevTestLabs#copying-custom-images-between-azure-devtest-labs)
+![Screenshot that shows the Custom images available in the list of base images.](media/devtest-lab-create-template/custom-image-available-as-base.png)
## Next steps -- [Add a VM to your lab](./devtest-lab-add-vm.md)
+- [Add a VM to your lab](./devtest-lab-add-vm.md)
+- [Compare custom images and formulas in DevTest Labs](devtest-lab-comparing-vm-base-image-types.md)
+- [Copying Custom Images between Azure DevTest Labs](https://www.visualstudiogeeks.com/blog/DevOps/How-To-Move-CustomImages-VHD-Between-AzureDevTestLabs#copying-custom-images-between-azure-devtest-labs)
devtest-labs Devtest Lab Mandatory Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-mandatory-artifacts.md
Title: Specify mandatory artifacts in Azure DevTest Labs
-description: Learn how to specify mandatory artifacts that need to be installed before installing any user-selected artifacts on virtual machines (VMs) in the lab.
+ Title: Specify mandatory artifacts for lab virtual machines
+description: Learn how to specify mandatory artifacts to install at creation of every lab virtual machine (VM) in Azure DevTest Labs.
Previously updated : 10/19/2021 Last updated : 01/12/2022
-# Specify mandatory artifacts for your lab in Azure DevTest Labs
+# Specify mandatory artifacts for DevTest Labs VMs
-As an owner of a lab, you can specify mandatory artifacts that are applied to every machine created in the lab. Imagine a scenario where you want each machine in your lab to have Visual Studio Code installed. In this case, each lab user would have to add a Visual Studio Code artifact during virtual machine creation to make sure their machine received Visual Studio Code. In other words, lab users would essentially have to re-create a machine in case they forget to apply mandatory artifacts on their machine. As a lab owner, you make the Visual Studio Code artifact as a mandatory artifact in your lab. This step makes sure that each machine has Visual Studio Code and saves the time and effort for your lab users.
-
-Other mandatory artifacts could include a common tool that your team uses, or a platform-related security pack that each machine needs to have by default, and so on. In short, any common software that every machine in your lab must have becomes a mandatory artifact. If you create a custom image from a machine that has mandatory artifacts applied to it and then create a fresh machine from that image, the mandatory artifacts are reapplied on the machine during creation. This behavior also means that even though the custom image is old, every time you create a machine from it the most updated version of mandatory artifacts are applied to it during the creation flow.
+This article describes how to specify mandatory *artifacts* in Azure DevTest Labs to install on every lab virtual machine (VM). Artifacts are tools and applications to add to VMs. Installing mandatory artifacts ensures all lab VMs have standardized, up-to-date artifacts. Lab users don't have to spend time and effort to add needed artifacts individually.
+
+Mandatory artifacts can include any software that every VM in your lab must have. If you create a custom image from a VM that has mandatory artifacts applied to it, and create new VMs from that image, those VMs also have the mandatory artifacts. Even if the custom image is old, VM creation applies the most updated versions of the mandatory artifacts.
-Only artifacts that have no parameters are supported as mandatory ones. Your lab user doesn't need to enter other parameters during lab creation making the process of VM creation simple.
+Only artifacts that have no parameters can be mandatory artifacts. Lab users don't have to enter extra parameter values, making the VM creation process simple.
+
+During VM creation, mandatory artifacts install before any artifacts the user chooses to install on the machine.
## Specify mandatory artifacts
-You can select mandatory artifacts for Windows and Linux machines separately. You can also reorder these artifacts depending on the order in which you would like them to applied.
-
-1. On the home page of your lab, select **Configuration and policies** under **SETTINGS**.
-3. Select **Mandatory artifacts** under **EXTERNAL RESOURCES**.
-4. Select **Edit** in the **Windows** section or the **Linux** section. This example uses the **Windows** option.
-
- ![Mandatory artifacts page - Edit button](media/devtest-lab-mandatory-artifacts/mandatory-artifacts-edit-button.png)
-4. Select an artifact. This example uses **7-Zip** option.
-5. On the **Add artifact** page, select **Add**.
-
- ![Mandatory artifacts page - Add 7-zip](media/devtest-lab-mandatory-artifacts/add-seven-zip.png)
-6. To add another artifact, select the article, and select **Add**. This example adds **Chrome** as the second mandatory artifact.
-
- ![Mandatory artifacts page - Add Chrome](media/devtest-lab-mandatory-artifacts/add-chrome.png)
-7. On the **Mandatory artifacts** page, you see a message that specifies the number of artifacts selected. If you select the message, you see the artifacts that you selected. Select **Save** to save.
-
- ![Mandatory artifacts page - Save artifacts](media/devtest-lab-mandatory-artifacts/save-artifacts.png)
-8. Repeat the steps to specify mandatory artifacts for Linux VMs.
-
- ![Mandatory artifacts page - Windows and Linux artifacts](media/devtest-lab-mandatory-artifacts/windows-linux-artifacts.png)
-9. To **delete** an artifact from the list, select **...(ellipsis)** at the end of the row, and select **Delete**.
-10. To **reorder** artifacts in the list, hover mouse over the artifact, select **...(ellipsis)** that shows up at the beginning of the row, and drag the item to the new position.
-11. To save mandatory artifacts in the lab, select **Save**.
-
- ![Mandatory artifacts page - Save artifacts in lab](media/devtest-lab-mandatory-artifacts/save-to-lab.png)
-12. Close the **Configuration and policies** page (select **X** in the upper-right corner) to get back to the home page for your lab.
-
-## Delete a mandatory artifact
-To delete a mandatory artifact from a lab, do the following actions:
-
-1. Select **Configuration and policies** under **SETTINGS**.
-2. Select **Mandatory artifacts** under **EXTERNAL RESOURCES**.
-3. Select **Edit** in the **Windows** section or the **Linux** section. This example uses the **Windows** option.
-4. Select the message with the number of mandatory artifacts at the top.
-
- ![Mandatory artifacts page - Select the message](media/devtest-lab-mandatory-artifacts/select-message-artifacts.png)
-5. On the **Selected artifacts** page, select **...(ellipsis)** for the artifact to be deleted, and select **Remove**.
-
- ![Mandatory artifacts page - Remove artifact](media/devtest-lab-mandatory-artifacts/remove-artifact.png)
-6. Select **OK** to close the **Selected artifacts** page.
-7. Select **Save** on the **Mandatory artifacts** page.
-8. Repeat steps for **Linux** images if needed.
-9. Select **Save** to save all the changes to the lab.
-
-## View mandatory artifacts when creating a VM
-Now, as a lab user you can view the list of mandatory artifacts while creating a VM in the lab. You can't edit or delete mandatory artifacts set in the lab by your lab owner.
-
-1. On the home page for your lab, select **Overview** from the menu.
-2. To add a VM to the lab, select **+ Add**.
-3. Select a **base image**. This example uses **Windows Server, version 1709**.
-4. Notice that you see a message for **Artifacts** with the number of mandatory artifacts selected.
-5. Select **Artifacts**.
-6. Confirm that you see the **mandatory artifacts** you specified in the lab's configuration and policies.
-
- ![Create a VM - mandatory artifacts](media/devtest-lab-mandatory-artifacts/create-vm-artifacts.png)
+
+You can select mandatory artifacts for Windows and Linux lab machines separately.
+
+1. On your lab's home page, under **Settings** in the left navigation, select **Configuration and policies**.
+1. On the **Configuration and policies** screen, under **External resources** in the left navigation, select **Mandatory artifacts**.
+1. For Windows VMs, select **Windows**, and then select **Edit Windows artifacts**. For Linux VMs, select **Linux**, and then select **Edit Linux artifacts**.
+
+ ![Screenshot that shows the Edit Windows artifacts button.](media/devtest-lab-mandatory-artifacts/mandatory-artifacts-edit-button.png)
+
+1. On the **Mandatory artifacts** page, select the arrow next to each artifact you want to add to the VM.
+1. On each **Add artifact** pane, select **OK**. The artifact appears under **Selected artifacts**, and the number of configured artifacts updates.
+
+ ![Screenshot that shows adding mandatory artifacts on the Mandatory artifacts screen.](media/devtest-lab-mandatory-artifacts/save-artifacts.png)
+
+1. By default, artifacts install in the order you add them. To rearrange the order, select the ellipsis **...** next to the artifact in the **Selected artifacts** list, and select **Move up**, **Move down**, **Move to top**, or **Move to bottom**. To delete the artifact from the list, select **Delete**.
+
+1. When you're done adding and arranging artifacts, select **Save**.
+
+## Delete or rearrange mandatory artifacts
+
+After you add mandatory artifacts, the lists of selected artifacts appear on the **Configuration and policies | Mandatory artifacts** screen under **Windows** and **Linux**. You can rearrange or delete the specified mandatory artifacts.
+
+To delete a mandatory artifact from the list, select the checkbox next to the artifact, and then select **Delete**.
+
+![Screenshot that shows the Delete button to remove a mandatory artifact.](media/devtest-lab-mandatory-artifacts/remove-artifact.png)
+
+To rearrange the order of the mandatory artifacts:
+
+1. Select **Edit Windows artifacts** or **Edit Linux artifacts**.
+1. On the **Mandatory artifacts** page, select the ellipsis **...** next to the artifact in the **Selected artifacts** list.
+1. Select **Move up**, **Move down**, **Move to top**, or **Move to bottom**.
+1. Select **Save**.
+
+## See mandatory artifacts for a VM
+
+Once you specify mandatory artifacts for a lab, all lab VMs for that operating system (Windows or Linux) have those artifacts installed at creation. Lab users can see the mandatory artifacts to be installed on their VMs.
+
+For example, to see the mandatory artifacts specified for lab Windows VMs in the earlier procedure:
+
+1. On your lab's home page, select **Add**.
+1. On the **Choose a base** page, select a Windows image, such as **Windows 11 Pro**.
+1. On the **Create lab resource** page, under **Artifacts**, note the number of mandatory artifacts. To see what the mandatory artifacts are, select **Add or Remove Artifacts**.
+
+ ![Screenshot that shows the Create lab resource screen with number of mandatory artifacts and Add or Remove Artifacts link.](media/devtest-lab-mandatory-artifacts/select-message-artifacts.png)
+
+1. On the **Add artifacts** screen, an informational message lists the mandatory artifacts to be installed, in order.
+
+ ![Screenshot that shows the Add artifacts screen with the list of mandatory artifacts to install.](media/devtest-lab-mandatory-artifacts/save-to-lab.png)
+
+You can't remove, rearrange, or change mandatory artifacts when you create an individual VM. However, you can add other available artifacts to the VM. For more information and instructions, see [Add artifacts to DevTest Labs VMs](add-artifact-vm.md).
+
+You can also create your own artifacts for VMs. For more information, see [Create custom artifacts for DevTest Labs VMs](devtest-lab-artifact-author.md).
## Next steps
-* Learn how to [add a Git artifact repository to a lab](devtest-lab-add-artifact-repo.md).
+
+- Learn how to [add a Git artifact repository to a lab](add-artifact-repository.md).
digital-twins Concepts Data Explorer Plugin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-data-explorer-plugin.md
There are various ways to ingest IoT data into Azure Data Explorer. Here are two
If you're ingesting time series data directly into Azure Data Explorer, you'll likely need to convert this raw time series data into a schema suitable for joint Azure Digital Twins/Azure Data Explorer queries.
-An [update policy](/azure/data-explorer/kusto/management/updatepolicy) in Azure Data Explorer allows you to automatically transform and append data to a target table whenever new data is inserted into a source table.
+An [update policy](/azure/data-explorer/kusto/management/updatepolicy) in Azure Data Explorer allows you to automatically transform and append data to a target table whenever new data is inserted into a source table.
You can use an update policy to enrich your raw time series data with the corresponding **twin ID** from Azure Digital Twins, and persist it to a target table. Using the twin ID, the target table can then be joined against the digital twins selected by the Azure Digital Twins plugin. For example, say you created the following table to hold the raw time series data flowing into your Azure Data Explorer instance. ```kusto
-.create-merge table rawData (Timestamp:datetime, someId:string, Value:string, ValueType:string) 
+.create-merge table rawData (Timestamp:datetime, someId:string, Value:string, ValueType:string)
``` You could create a mapping table to relate time series IDs with twin IDs, and other optional fields. ```kusto
-.create-merge table mappingTable (someId:string, twinId:string, otherMetadata:string)
+.create-merge table mappingTable (someId:string, twinId:string, otherMetadata:string)
``` Then, create a target table to hold the enriched time series data. ```kusto
-.create-merge table timeseriesSilver (twinId:string, Timestamp:datetime, someId:string, otherMetadata:string, ValueNumeric:real, ValueString:string) 
+.create-merge table timeseriesSilver (twinId:string, Timestamp:datetime, someId:string, otherMetadata:string, ValueNumeric:real, ValueString:string)
``` Next, create a function `Update_rawData` to enrich the raw data by joining it with the mapping table. Doing so will add the twin ID to the resulting target table. ```kusto
-.create-or-alter function with (folder = "Update", skipvalidation = "true") Update_rawData() {
+.create-or-alter function with (folder = "Update", skipvalidation = "true") Update_rawData() {
rawData
-| join kind=leftouter mappingTable on someId
-| project
-    Timestamp, ValueNumeric = toreal(Value), ValueString = Value, ...
+| join kind=leftouter mappingTable on someId
+| project
+ Timestamp, ValueNumeric = toreal(Value), ValueString = Value, ...
} ```
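To connect the pieces, you'd typically attach the function to the target table as an update policy, with `rawData` as the source. The following command is a sketch of that step; adjust the table and function names to your environment:

```kusto
.alter table timeseriesSilver policy update
@'[{"IsEnabled": true, "Source": "rawData", "Query": "Update_rawData()", "IsTransactional": false, "PropagateIngestionProperties": false}]'
```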
Once the target table is created, you can use the Azure Digital Twins plugin to
Here's an example of a schema that might be used to represent shared data.
-| timestamp | twinIdΓÇ»| modelIdΓÇ»| nameΓÇ»| valueΓÇ»| relationshipTarget | relationshipID |
+| timestamp | twinId | modelId | name | value | relationshipTarget | relationshipID |
| | | | | | | |
-| 2021-02-01 17:24 | ConfRoomTempSensor | dtmi:com:example:TemperatureSensor;1 | temperature | 301.0 | | |
+| 2021-02-01 17:24 | ConfRoomTempSensor | dtmi:com:example:TemperatureSensor;1 | temperature | 301.0 | | |
Digital twin properties are stored as key-value pairs (`name, value`). `name` and `value` are stored as dynamic data types.
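For example, a joint query might call the Azure Digital Twins query plugin and join the returned twins against the enriched table on the twin ID. The following is a sketch only; the endpoint, model ID, and output column names are assumptions you'd replace with your own values:

```kusto
let adtEndpoint = "https://<your-instance>.api.<region>.digitaltwins.azure.net";
let adtQuery = "SELECT T FROM DIGITALTWINS T WHERE IS_OF_MODEL(T, 'dtmi:com:example:TemperatureSensor;1')";
evaluate azure_digital_twins_query_request(adtEndpoint, adtQuery)
| extend tid = tostring(T["$dtId"])                         // pull the twin ID out of the returned twin object
| join kind=inner timeseriesSilver on $left.tid == $right.twinId
| where Timestamp > ago(1h)
| summarize avg(ValueNumeric) by tid
```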
digital-twins Concepts Ontologies Adopt https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-ontologies-adopt.md
Each ontology is focused on an initial set of models. The ontology authors welco
*Get the ontology from the following repository:* [Digital Twins Definition Language-based RealEstateCore ontology for smart buildings](https://github.com/Azure/opendigitaltwins-building).
-Microsoft has partnered with [RealEstateCore](https://www.realestatecore.io/) to deliver this open-source DTDL ontology for the real estate industry. [RealEstateCore](https://www.realestatecore.io/) is a Swedish consortium of real estate owners, software vendors, and research institutions.
+Microsoft has partnered with [RealEstateCore](https://www.realestatecore.io/) to deliver this open-source DTDL ontology for the real estate industry. [RealEstateCore](https://www.realestatecore.io/) is a Swedish consortium of real estate owners, software vendors, and research institutions.
-This smart buildings ontology provides common ground for modeling smart buildings, using industry standards (likeΓÇ»[BRICK Schema](https://brickschema.org/ontology/) orΓÇ»[W3C Building Topology Ontology](https://w3c-lbd-cg.github.io/bot/https://docsupdatetracker.net/index.html)) to avoid reinvention. The ontology also comes with best practices for how to consume and properly extend it.
+This smart buildings ontology provides common ground for modeling smart buildings, using industry standards (like [BRICK Schema](https://brickschema.org/ontology/) or [W3C Building Topology Ontology](https://w3c-lbd-cg.github.io/bot/https://docsupdatetracker.net/index.html)) to avoid reinvention. The ontology also comes with best practices for how to consume and properly extend it.
To learn more about the ontology's structure and modeling conventions, how to use it, how to extend it, and how to contribute, visit the ontology's repository on GitHub: [Azure/opendigitaltwins-building](https://github.com/Azure/opendigitaltwins-building).
digital-twins Concepts Ontologies Convert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-ontologies-convert.md
The following C# code snippet shows how an RDF model file is loaded into a graph
### RDF converter application
-There's a sample application available that converts an RDF-based model file to [DTDL (version 2)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) for use by the Azure Digital Twins service. It has been validated for the [Brick](https://brickschema.org/ontology/) schema, and can be extended for other schemas in the building industry (such as [Building Topology Ontology (BOT)](https://w3c-lbd-cg.github.io/bot/), [Semantic Sensor Network](https://www.w3.org/TR/vocab-ssn/), or [buildingSmart Industry Foundation Classes (IFC)](https://technical.buildingsmart.org/standards/ifc/ifc-schema-specifications/)).
+There's a sample application available that converts an RDF-based model file to [DTDL (version 2)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) for use by the Azure Digital Twins service. It has been validated for the [Brick](https://brickschema.org/ontology/) schema, and can be extended for other schemas in the building industry (such as [Building Topology Ontology (BOT)](https://w3c-lbd-cg.github.io/bot/), [Semantic Sensor Network](https://www.w3.org/TR/vocab-ssn/), or [buildingSmart Industry Foundation Classes (IFC)](https://technical.buildingsmart.org/standards/ifc/ifc-schema-specifications/)).
The sample is a [.NET Core command-line application called RdfToDtdlConverter](/samples/azure-samples/rdftodtdlconverter/digital-twins-model-conversion-samples/).
digital-twins Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-security.md
If a user attempts to perform an action not allowed by their role, they may rece
## Managed identity for accessing other resources
-Setting up an [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) **managed identity** for an Azure Digital Twins instance can allow the instance to easily access other Azure AD-protected resources, such as [Azure Key Vault](../key-vault/general/overview.md). The identity is managed by the Azure platform, and doesn't require you to provision or rotate any secrets. For more about managed identities in Azure AD, seeΓÇ»[Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+Setting up an [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) **managed identity** for an Azure Digital Twins instance can allow the instance to easily access other Azure AD-protected resources, such as [Azure Key Vault](../key-vault/general/overview.md). The identity is managed by the Azure platform, and doesn't require you to provision or rotate any secrets. For more about managed identities in Azure AD, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
Azure supports two types of managed identities: system-assigned and user-assigned. Currently, Azure Digital Twins supports only **system-assigned identities**.
-You can use a system-assigned managed identity for your Azure Digital Instance to authenticate to a [custom-defined endpoint](concepts-route-events.md#create-an-endpoint). Azure Digital Twins supports system-assigned identity-based authentication to endpoints for [Event Hubs](../event-hubs/event-hubs-about.md) and [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) destinations, and to an [Azure Storage Container](../storage/blobs/storage-blobs-introduction.md) endpoint for [dead-letter events](concepts-route-events.md#dead-letter-events). [Event Grid](../event-grid/overview.md) endpoints are currently not supported for managed identities.
+You can use a system-assigned managed identity for your Azure Digital Instance to authenticate to a [custom-defined endpoint](concepts-route-events.md#create-an-endpoint). Azure Digital Twins supports system-assigned identity-based authentication to endpoints for [Event Hubs](../event-hubs/event-hubs-about.md) and [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) destinations, and to an [Azure Storage Container](../storage/blobs/storage-blobs-introduction.md) endpoint for [dead-letter events](concepts-route-events.md#dead-letter-events). [Event Grid](../event-grid/overview.md) endpoints are currently not supported for managed identities.
For instructions on how to enable a system-managed identity for Azure Digital Twins and use it to route events, see [Route events with a managed identity](how-to-route-with-managed-identity.md).
For instructions on how to set up Private Link for Azure Digital Twins, see [Ena
### Design considerations When working with Private Link for Azure Digital Twins, here are some factors you may want to consider:
-* **Pricing**: For pricing details, seeΓÇ»[Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link).
+* **Pricing**: For pricing details, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link).
* **Regional availability**: For Azure Digital Twins, this feature is available in all the Azure regions where Azure Digital Twins is available. * **Maximum number of private endpoints per Azure Digital Twins instance**: 10
-For information on the limits of Private Link, seeΓÇ»[Azure Private Link documentation: Limitations](../private-link/private-link-service-overview.md#limitations).
+For information on the limits of Private Link, see [Azure Private Link documentation: Limitations](../private-link/private-link-service-overview.md#limitations).
## Service tags
-A **service tag** represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, minimizing the complexity of frequent updates to network security rules. For more information about service tags, seeΓÇ»[Virtual network tags](../virtual-network/service-tags-overview.md).
+A **service tag** represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, minimizing the complexity of frequent updates to network security rules. For more information about service tags, see [Virtual network tags](../virtual-network/service-tags-overview.md).
-You can use service tags to define network access controls on [network security groups](../virtual-network/network-security-groups-overview.md#security-rules) or [Azure Firewall](../firewall/service-tags.md), by using service tags in place of specific IP addresses when you create security rules. By specifying the service tag name (in this case, **AzureDigitalTwins**) in the appropriate *source* or *destination* field of a rule, you can allow or deny the traffic for the corresponding service.
+You can use service tags to define network access controls on [network security groups](../virtual-network/network-security-groups-overview.md#security-rules) or [Azure Firewall](../firewall/service-tags.md), by using service tags in place of specific IP addresses when you create security rules. By specifying the service tag name (in this case, **AzureDigitalTwins**) in the appropriate *source* or *destination* field of a rule, you can allow or deny the traffic for the corresponding service.
Below are the details of the **AzureDigitalTwins** service tag.
digital-twins How To Route With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-route-with-managed-identity.md
# Enable a managed identity for routing Azure Digital Twins events
-This article describes how to enable a [system-assigned identity for an Azure Digital Twins instance](concepts-security.md#managed-identity-for-accessing-other-resources), and use the identity when forwarding events to supported routing destinations. Setting up a managed identity isn't required for routing, but it can help the instance to easily access other Azure AD-protected resources, such as [Event Hubs](../event-hubs/event-hubs-about.md), [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) destinations, and [Azure Storage Container](../storage/blobs/storage-blobs-introduction.md).
+This article describes how to enable a [system-assigned identity for an Azure Digital Twins instance](concepts-security.md#managed-identity-for-accessing-other-resources), and use the identity when forwarding events to supported routing destinations. Setting up a managed identity isn't required for routing, but it can help the instance to easily access other Azure AD-protected resources, such as [Event Hubs](../event-hubs/event-hubs-about.md), [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) destinations, and [Azure Storage Container](../storage/blobs/storage-blobs-introduction.md).
Here are the steps that are covered in this article:
To continue using an endpoint that was set up with a managed identity that's now
## Next steps
-Learn more about managed identities in Azure AD:ΓÇ»
+Learn more about managed identities in Azure AD:
* [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md)
digital-twins Reference Query Clause Match https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/reference-query-clause-match.md
This clause is optional while querying.
## Core syntax: MATCH
-`MATCH` supports any query that finds a path between twins with an unpredictable number of hops, based on certain relationship conditions. 
+`MATCH` supports any query that finds a path between twins with an unpredictable number of hops, based on certain relationship conditions.
The **relationship condition** can include one or more of the following details: * [Relationship direction](#specify-relationship-direction) (left-to-right, right-to-left, or non-directional)
The **relationship condition** can include one or more of the following details:
* [Number of "hops"](#specify-number-of-hops) from one twin to another (exact number or range) * [A query variable assignment](#assign-query-variable-to-relationship-and-specify-relationship-properties) to represent the relationship within the query text. This will also allow you to filter on relationship properties.
-A query with a `MATCH` clause must also use the [WHERE clause](reference-query-clause-where.md) to specify the `$dtId` for at least one of the twins it references.
+A query with a `MATCH` clause must also use the [WHERE clause](reference-query-clause-where.md) to specify the `$dtId` for at least one of the twins it references.
>[!NOTE] >`MATCH` is a superset of all `JOIN` queries that can be performed in the query store.
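As a hedged illustration of these rules, the following query finds twins one to three hops away from a known twin and filters on that twin's `$dtId`. The twin ID, variable names, and the `contains` relationship are made up:

```sql
SELECT building, sensor
FROM DIGITALTWINS
MATCH (building)-[:contains*1..3]->(sensor)
WHERE building.$dtId = 'Building21'
```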
education-hub Find Ids https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/education-hub/find-ids.md
+
+ Title: Finding IDs for Azure Education Hub APIs
+description: Learn how to find all the IDs needed to call the education hub APIs
++++ Last updated : 12/21/2021+++
+# Tutorial: Find all IDs needed to call Azure Education Hub APIs
+
+This article helps you gather the IDs needed to call the Azure Education Hub APIs. If you go through the Education Hub UI, these IDs are gathered for you. To call the APIs directly, however, you must provide a billing account ID, billing profile ID, and invoice section ID.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Find your billing account ID
+> * Find your billing profile ID
+> * Find your invoice section ID
+
+## Prerequisites
+
+You must have an Azure account linked to the Azure Education Hub.
+
+## Sign in to Azure
+
+* Sign in to the Azure portal at https://portal.azure.com.
+
+## Navigate to Cost Management + Billing
+
+In the Azure portal, search for **Cost Management + Billing**, and then select the service from the dropdown results.
++
+## Get Billing account ID
+
+This section will show you how to get your Billing Account ID.
+
+1. Select the **Properties** tab under **Settings**.
+2. The string in the **ID** box is your billing account ID.
+3. Copy the ID and save it for later.
++
+## Get Billing profile ID
+
+This section will show you how to get your Billing Profile ID.
+
+1. Select the **Billing Profiles** tab under the **Billing** section.
+2. Select the billing profile you want.
+3. Select the **Properties** tab under the **Settings** section.
+4. The billing profile ID appears at the top of the page.
+5. Copy the ID and save it for later. You can also see your billing account ID at the bottom of the page.
++
+## Get Invoice section ID
+
+This section will show you how to get your Invoice Section ID.
+
+1. Select the **Invoice sections** tab under the **Billing** section. You must be in a billing profile to see invoice sections.
+2. Select the invoice section you want.
+3. Select the **Properties** tab under the **Settings** section.
+4. The invoice section ID appears at the top of the page.
+5. Copy the ID and save it for later. You can also see your billing account ID at the bottom of the page.
++
+## Next steps
+
+- [Manage your Academic Grant using the Overview page](hub-overview-page.md)
+
+- [Support options](educator-service-desk.md)
event-grid Delivery And Retry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/delivery-and-retry.md
Title: Azure Event Grid delivery and retry description: Describes how Azure Event Grid delivers events and how it handles undelivered messages. Previously updated : 07/27/2021 Last updated : 01/12/2022 # Event Grid message delivery and retry
-Event Grid provides durable delivery. It tries to deliver each message **at least once** for each matching subscription immediately. If a subscriber's endpoint doesn't acknowledge receipt of an event or if there is a failure, Event Grid retries delivery based on a fixed [retry schedule](#retry-schedule) and [retry policy](#retry-policy). By default, the Event Grid module delivers one event at a time to the subscriber. The payload is however an array with a single event.
+Event Grid provides durable delivery. It tries to deliver each message **at least once** for each matching subscription immediately. If a subscriber's endpoint doesn't acknowledge receipt of an event or if there's a failure, Event Grid retries delivery based on a fixed [retry schedule](#retry-schedule) and [retry policy](#retry-policy). By default, the Event Grid module delivers one event at a time to the subscriber. The payload is however an array with a single event.
> [!NOTE] > Event Grid doesn't guarantee order for event delivery, so subscribers may receive them out of order. ## Retry schedule
-When EventGrid receives an error for an event delivery attempt, EventGrid decides whether it should retry the delivery, dead-letter the event, or drop the event based on the type of the error.
+When Event Grid receives an error for an event delivery attempt, Event Grid decides whether it should retry the delivery, dead-letter the event, or drop the event based on the type of the error.
-If the error returned by the subscribed endpoint is a configuration-related error that can't be fixed with retries (for example, if the endpoint is deleted), EventGrid will either perform dead-lettering on the event or drop the event if dead-letter isn't configured.
+If the error returned by the subscribed endpoint is a configuration-related error that can't be fixed with retries (for example, if the endpoint is deleted), Event Grid will either perform dead-lettering on the event or drop the event if dead-letter isn't configured.
The following table describes the types of endpoints and errors for which retry doesn't happen:
The following table describes the types of endpoints and errors for which retry
> [!NOTE] > If Dead-Letter isn't configured for an endpoint, events will be dropped when the above errors happen. Consider configuring Dead-Letter if you don't want these kinds of events to be dropped.
-If the error returned by the subscribed endpoint isn't among the above list, EventGrid performs the retry using policies described below:
+If the error returned by the subscribed endpoint isn't among the above list, Event Grid performs the retry using policies described below:
Event Grid waits 30 seconds for a response after delivering a message. After 30 seconds, if the endpoint hasn't responded, the message is queued for retry. Event Grid uses an exponential backoff retry policy for event delivery. Event Grid retries delivery on the following schedule on a best effort basis:
Event Grid sends an event to the dead-letter location when it has tried all of i
The time-to-live expiration is checked ONLY at the next scheduled delivery attempt. So, even if time-to-live expires before the next scheduled delivery attempt, event expiry is checked only at the time of the next delivery and then subsequently dead-lettered.
-There is a five-minute delay between the last attempt to deliver an event and when it is delivered to the dead-letter location. This delay is intended to reduce the number of Blob storage operations. If the dead-letter location is unavailable for four hours, the event is dropped.
+There's a five-minute delay between the last attempt to deliver an event and when it's delivered to the dead-letter location. This delay is intended to reduce the number of Blob storage operations. If the dead-letter location is unavailable for four hours, the event is dropped.
Before setting the dead-letter location, you must have a storage account with a container. You provide the endpoint for this container when creating the event subscription. The endpoint is in the format of: `/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage-name>/blobServices/default/containers/<container-name>`
This section gives you examples of events and dead-lettered events in different
} ```
+Here are the possible values of `lastDeliveryOutcome` and their descriptions.
+
+| LastDeliveryOutcome | Description |
+| - | -- |
+| NotFound | Destination resource wasn't found. |
+| Disabled | Destination has disabled receiving events. Applicable for Azure Service Bus and Azure Event Hubs. |
+| Full | Exceeded maximum number of allowed operations on the destination. Applicable for Azure Service Bus and Azure Event Hubs. |
+| Unauthorized | Destination returned unauthorized response code. |
+| BadRequest | Destination returned bad request response code. |
+| TimedOut | Delivery operation timed out. |
+| Busy | Destination server is busy. |
+| PayloadTooLarge | Size of the message exceeded the maximum allowed size by the destination. Applicable for Azure Service Bus and Azure Event Hubs. |
+| Probation | Destination is put in probation by Event Grid. Delivery isn't attempted during probation. |
+| Canceled | Delivery operation canceled. |
+| Aborted | Delivery was aborted by Event Grid after a time interval. |
+| SocketError | Network communication error occurred during delivery. |
+| ResolutionError | DNS resolution of destination endpoint failed. |
+| Delivering | Delivering events to the destination. |
+| SessionQueueNotSupported | Event delivery without a session ID was attempted on an entity that has session support enabled. Applicable for Azure Service Bus entity destinations. |
+| Forbidden | Delivery is forbidden by the destination endpoint, for example because of IP firewalls or other restrictions. |
+| InvalidAzureFunctionDestination | Destination Azure function isn't valid, possibly because it doesn't have the EventGridTrigger type. |
+
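For context, a dead-lettered event written to the storage container typically carries delivery metadata, including `lastDeliveryOutcome`, alongside the original event. The following is a sketch in the Event Grid event schema; the exact property set and values are illustrative:

```json
[
  {
    "id": "4e2a1c5e-2296-4b8f-a64a-0d9f5c5c9c01",
    "eventTime": "2022-01-12T18:41:00.95Z",
    "eventType": "Contoso.Items.ItemReceived",
    "topic": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventGrid/topics/<topic-name>",
    "subject": "items/item1",
    "dataVersion": "1.0",
    "metadataVersion": "1",
    "data": { "itemSku": "Item-1" },
    "deadLetterReason": "MaxDeliveryAttemptsExceeded",
    "deliveryAttempts": 30,
    "lastDeliveryOutcome": "NotFound",
    "publishTime": "2022-01-12T18:41:01.0Z",
    "lastDeliveryAttemptTime": "2022-01-13T18:52:21.0Z"
  }
]
```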
+**LastDeliveryOutcome: Probation**
+
+Event Grid puts an event subscription into probation for a period of time if event deliveries to that destination start failing. The probation duration differs based on the error the destination endpoint returns. While an event subscription is in probation, Event Grid may dead-letter or drop events without attempting delivery, depending on the error code that caused the probation.
+
+| Error | Probation Duration |
+| -- | |
+| Busy | 10 seconds |
+| NotFound | 5 minutes |
+| SocketError | 30 seconds |
+| ResolutionError | 5 minutes |
+| Disabled | 5 minutes |
+| Full | 5 minutes |
+| TimedOut | 10 seconds |
+| Unauthorized | 5 minutes |
+| Forbidden | 5 minutes |
+| InvalidAzureFunctionDestination | 10 minutes |
+
+> [!NOTE]
+> Event Grid uses probation duration for better delivery management and the duration might change in the future.
+ ### CloudEvents 1.0 schema #### Event
event-grid Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/whats-new.md
Title: What's new? Azure Event Grid description: Learn what is new with Azure Event Grid, such as the latest release notes, known issues, bug fixes, deprecated functionality, and upcoming changes. Previously updated : 04/27/2021 Last updated : 01/13/2022 # What's new in Azure Event Grid?
Last updated 04/27/2021
Azure Event Grid receives improvements on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about the features that are added or updated in a release. +
+## .NET 6.2.0-preview (2021-06)
+This release corresponds to api-version 2021-06-01-preview, which includes the following new features:
+
+- [Azure Active Directory authentication for topics and domains, and partner namespaces](authenticate-with-active-directory.md)
+- Private link support for partner namespaces. Azure portal doesn't support it yet.
+- IP Filtering for partner namespaces. Azure portal doesn't support it yet.
+- System Identity for partner topics. Azure portal doesn't support it yet.
+- [User Identity for system topics, custom topics and domains](enable-identity-custom-topics-domains.md)
+ ## 6.1.0-preview (2020-10)+ - [Managed identities for system topics](enable-identity-system-topics.md) - [Custom delivery properties](delivery-properties.md) - [Storage queue - message time-to-live (TTL)](delivery-properties.md#configure-time-to-live-on-outgoing-events-to-azure-storage-queues)
firewall Forced Tunneling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/forced-tunneling.md
Previously updated : 08/13/2021 Last updated : 01/13/2022
When you configure a new Azure Firewall, you can route all Internet-bound traffic to a designated next hop instead of going directly to the Internet. For example, you may have a default route advertised via BGP or using User Defined Route (UDR) to force traffic to an on-premises edge firewall or other network virtual appliance (NVA) to process network traffic before it's passed to the Internet. To support this configuration, you must create Azure Firewall with Forced Tunnel configuration enabled. This is a mandatory requirement to avoid service disruption. If this is a pre-existing firewall, you must recreate the firewall in Forced Tunnel mode to support this configuration. For more information, see the [Azure Firewall FAQ](firewall-faq.yml#how-can-i-stop-and-start-azure-firewall) about stopping and restarting a firewall in Forced Tunnel mode.
+Azure Firewall provides automatic SNAT for all outbound traffic to public IP addresses. Azure Firewall doesn't SNAT when the destination IP address is a private IP address range per IANA RFC 1918. This logic works well when you egress directly to the Internet. However, with forced tunneling enabled, Internet-bound traffic is SNATed to one of the firewall private IP addresses in the AzureFirewallSubnet, which hides the source address from your on-premises firewall. You can configure Azure Firewall to not SNAT regardless of the destination IP address by adding *0.0.0.0/0* as your private IP address range. With this configuration, Azure Firewall can never egress directly to the Internet. For more information, see [Azure Firewall SNAT private IP address ranges](snat-private-range.md).
+ ## Forced tunneling configuration
-You can configure Forced Tunneling during Firewall creation by enabling Forced Tunnel mode as shown below. To support forced tunneling, Service Management traffic is separated from customer traffic. An additional dedicated subnet named **AzureFirewallManagementSubnet** (minimum subnet size /26) is required with its own associated public IP address.
+You can configure Forced Tunneling during Firewall creation by enabling Forced Tunnel mode as shown below. To support forced tunneling, Service Management traffic is separated from customer traffic. An additional dedicated subnet named **AzureFirewallManagementSubnet** (minimum subnet size /26) is required with its own associated public IP address. This public IP address is for management traffic. It is used exclusively by the Azure platform and can't be used for any other purpose.
In Forced Tunneling mode, the Azure Firewall service incorporates the Management subnet (AzureFirewallManagementSubnet) for its *operational* purposes. By default, the service associates a system-provided route table to the Management subnet. The only route allowed on this subnet is a default route to the Internet and *Propagate gateway* routes must be disabled. Avoid associating customer route tables to the Management subnet when you create the firewall.
firewall Rule Processing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/rule-processing.md
If there's no network rule match, and if the protocol is HTTP, HTTPS, or MSSQL,
For HTTP, Azure Firewall looks for an application rule match according to the Host header. For HTTPS, Azure Firewall looks for an application rule match according to SNI only.
-In both HTTP and TLS inspected HTTPS cases, the firewall ignores packet the destination IP address and uses the DNS resolved IP address from the Host header. The firewall expects to get port number in the Host header, otherwise it assumes the standard port 80. If there's a port mismatch between the actual TCP port and the port in the host header, the traffic is dropped. DNS resolution is done by Azure DNS or by a custom DNS if configured on the firewall. 
+In both HTTP and TLS inspected HTTPS cases, the firewall ignores the packet's destination IP address and uses the DNS resolved IP address from the Host header. The firewall expects to get port number in the Host header, otherwise it assumes the standard port 80. If there's a port mismatch between the actual TCP port and the port in the host header, the traffic is dropped. DNS resolution is done by Azure DNS or by a custom DNS if configured on the firewall. 
> [!NOTE] > Both HTTP and HTTPS protocols (with TLS inspection) are always filled by Azure Firewall with XFF (X-Forwarded-For) header equal to the original source IP address. 
frontdoor Concept End To End Tls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/concept-end-to-end-tls.md
For HTTPS connections, Azure Front Door expects that your backend presents a cer
> [!NOTE] > The certificate must have a complete certificate chain with leaf and intermediate certificates. The root CA must be part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If a certificate without a complete chain is presented, the requests that involve that certificate are not guaranteed to work as expected.
-From a security standpoint, Microsoft doesn't recommend disabling certificate subject name check. In certain use cases such as for testing, for example, your origin must use a self-signed certificate. As a work-around to resolve failing HTTPS connection, can you disable certificate subject name check for your Azure Front Door. The option to disable is present under the Azure Front Door settings in the Azure portal and on the BackendPoolsSettings in the Azure Front Door API.
+From a security standpoint, Microsoft doesn't recommend disabling the certificate subject name check. In certain use cases, such as testing, you can disable the certificate subject name check for your Azure Front Door as a workaround for a failing HTTPS connection. Note that the origin still needs to present a certificate with a valid trusted chain, but the certificate's subject name doesn't have to match the origin host name. The option to disable is present under the Azure Front Door settings in the Azure portal and on the BackendPoolsSettings in the Azure Front Door API.
## Frontend TLS connection (Client to Front Door)
iot-central Quick Deploy Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/quick-deploy-iot-central.md
Title: Quickstart - Create and use an Azure IoT Central application | Microsoft Docs
-description: Quickstart - Create a new Azure IoT Central application and connect your first device. This quickstart uses a smartphone app from either the Google Play or Apple app store as an IoT device.
+ Title: Quickstart - Connect a device to an Azure IoT Central application | Microsoft Docs
+description: Quickstart - Connect your first device to a new IoT Central application. This quickstart uses a smartphone app from either the Google Play or Apple app store as an IoT device.
Previously updated : 12/27/2021 Last updated : 01/13/2022
-# Quickstart - Create an Azure IoT Central application and use your smartphone to send telemetry
+# Quickstart - Use your smartphone as a device to send telemetry to an IoT Central application
This quickstart shows you how to create an Azure IoT Central application and connect your first device. To get you started quickly, you install an app on your smartphone to act as the device. The app sends telemetry, reports properties, and responds to commands:
iot-hub Iot Hub Ha Dr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-ha-dr.md
IoT Hub supports [Availability Zones](../availability-zones/az-overview.md). An
- Canada Central - Central US - France Central-- West Us 2
+- Germany West Central
- Japan East - North Europe - Southeast Asia - UK South
+- West US 2
## Cross region DR
load-balancer Load Balancer Outbound Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-outbound-connections.md
A port is reused for an unlimited number of connections. The port is only reused
* [Troubleshoot outbound connection failures because of SNAT exhaustion](./troubleshoot-outbound-connection.md) * [Review SNAT metrics](./load-balancer-standard-diagnostics.md#how-do-i-check-my-snat-port-usage-and-allocation) and familiarize yourself with the correct way to filter, split, and view them.
+* Learn how to [migrate your existing outbound connectivity method to NAT gateway](../virtual-network/nat-gateway/tutorial-migrate-outbound-nat.md).
load-testing How To Appservice Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/how-to-appservice-insights.md
In this section, you use [App Service diagnostics](/azure/app-service/overview-d
## Next steps -- To learn how to parameterize a load test by using secrets, see [Parameterize a load test](./how-to-parameterize-load-tests.md).
+- Learn how to [parameterize a load test](./how-to-parameterize-load-tests.md) with secrets.
-- To learn about performance test automation, see [Configure automated performance testing](./tutorial-cicd-azure-pipelines.md).--- To learn more about App Service diagnostics, see [Azure App Service diagnostics overview](/azure/app-service/overview-diagnostics/).
+- Learn how to [configure automated performance testing](./tutorial-cicd-azure-pipelines.md).
machine-learning How To Deploy Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-managed-online-endpoints.md
The reference for the endpoint YAML format is described in the following table.
| | | | `$schema` | (Optional) The YAML schema. To see all available options in the YAML file, you can view the schema in the preceding example in a browser.| | `name` | The name of the endpoint. It must be unique in the Azure region.|
-| `traffic` | The percentage of traffic from the endpoint to divert to each deployment. The sum of traffic values must be 100. |
| `auth_mode` | Use `key` for key-based authentication. Use `aml_token` for Azure Machine Learning token-based authentication. `key` doesn't expire, but `aml_token` does expire. (Get the most recent token by using the `az ml online-endpoint get-credentials` command.) | The example contains all the files needed to deploy a model on an online endpoint. To deploy a model, you must have:
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-network-security-overview.md
Previously updated : 10/29/2021 Last updated : 12/07/2021
The next sections show you how to secure the network scenario described above. T
If you want to access the workspace over the public internet while keeping all the associated resources secured in a virtual network, use the following steps:
-1. Create an [Azure Virtual Networks](../virtual-network/virtual-networks-overview.md) that will contain the resources used by the workspace.
+1. Create an [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) that will contain the resources used by the workspace.
1. Use __one__ of the following options to create a publicly accessible workspace: * Create an Azure Machine Learning workspace that __does not__ use the virtual network. For more information, see [Manage Azure Machine Learning workspaces](how-to-manage-workspace.md).
If you want to access the workspace over the public internet while keeping all t
Use the following steps to secure your workspace and associated resources. These steps allow your services to communicate in the virtual network.
-1. Create an [Azure Virtual Networks](../virtual-network/virtual-networks-overview.md) that will contain the workspace and other resources.
-1. Create a [Private Link-enabled workspace](how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint) to enable communication between your VNet and workspace.
+1. Create an [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) that will contain the workspace and other resources. Then create a [Private Link-enabled workspace](how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint) to enable communication between your VNet and the workspace.
1. Add the following services to the virtual network by using _either_ a __service endpoint__ or a __private endpoint__. Also allow trusted Microsoft services to access these | Service | Endpoint information | Allow trusted information |
Use the following steps to secure your workspace and associated resources. These
| __Azure Container Registry__ | [Private endpoint](../container-registry/container-registry-private-link.md) | [Allow trusted services](../container-registry/allow-access-trusted-services.md) |
-![Architecture diagram showing how the workspace and associated resources communicate to each other over service endpoints or private endpoints inside of a VNet](./media/how-to-network-security-overview/secure-workspace-resources.png)
For detailed instructions on how to complete these steps, see [Secure an Azure Machine Learning workspace](how-to-secure-workspace-vnet.md).
In this section, you learn how to secure the training environment in Azure Machi
To secure the training environment, use the following steps: 1. Create an Azure Machine Learning [compute instance and compute cluster in the virtual network](how-to-secure-training-vnet.md#compute-cluster) to run the training job.
-1. [Allow inbound communication](how-to-secure-training-vnet.md#required-public-internet-access) so that management services can submit jobs to your compute resources.
+1. If your compute cluster or compute instance does not use a public IP address, you must [Allow inbound communication](how-to-secure-training-vnet.md#required-public-internet-access) so that management services can submit jobs to your compute resources.
-![Architecture diagram showing how to secure managed compute clusters and instances](./media/how-to-network-security-overview/secure-training-environment.png)
+ > [!TIP]
+ > Compute clusters and compute instances can be created with or without a public IP address. If created with a public IP address, they communicate with the Azure Batch service over the public IP. If created without a public IP, they communicate with the Azure Batch service over the private IP. When using a private IP, you need to allow inbound communication from Azure Batch.
+ For detailed instructions on how to complete these steps, see [Secure a training environment](how-to-secure-training-vnet.md).
In this section, you learn how Azure Machine Learning securely communicates betw
1. The Azure Batch service receives the job from the workspace. It then submits the training job to the compute environment through the public load balancer for the compute resource.
-1. The compute resource receive the job and begins training. The compute resources accesses secure storage accounts to download training files and upload output.
+1. The compute resource receives the job and begins training. The compute resource accesses secure storage accounts to download training files and upload output.
### Limitations
For detailed instructions on how to add default and private clusters, see [Secur
The following network diagram shows a secured Azure Machine Learning workspace with a private AKS cluster attached to the virtual network.
-![Architecture diagram showing how to attach a private AKS cluster to the virtual network. The AKS control plane is placed outside of the customer VNet](./media/how-to-network-security-overview/secure-inferencing-environment.png)
### Limitations
machine-learning Tutorial Designer Automobile Price Train Score https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-designer-automobile-price-train-score.md
There are several sample datasets included in the designer for you to experiment
You can visualize the data to understand the dataset that you'll use.
-1. Right-click the **Automobile price data (Raw)** and select **Visualize** > **Dataset output**.
+1. Right-click the **Automobile price data (Raw)** and select **Preview Data**.
1. Select the different columns in the data window to view information about each one.
Now that your pipeline is all setup, you can submit a pipeline run to train your
After the run completes, you can view the results of the pipeline run. First, look at the predictions generated by the regression model.
-1. Right-click the **Score Model** component, and select **Visualize** > **Scored dataset** to view its output.
+1. Right-click the **Score Model** component, and select **Preview data** > **Scored dataset** to view its output.
Here you can see the predicted prices and the actual prices from the testing data.
After the run completes, you can view the results of the pipeline run. First, lo
Use the **Evaluate Model** to see how well the trained model performed on the test dataset.
-1. Right-click the **Evaluate Model** component and select **Visualize** > **Evaluation results** to view its output.
+1. Right-click the **Evaluate Model** component and select **Preview data** > **Evaluation results** to view its output.
The following statistics are shown for your model:
mysql Concepts Query Performance Insight https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-query-performance-insight.md
Previously updated : 5/12/2020 Last updated : 01/12/2022 # Query Performance Insight in Azure Database for MySQL
In the portal page of your Azure Database for MySQL server, select **Query Perfo
### Long running queries
-The **Long running queries** tab shows the top 5 queries by average duration per execution, aggregated in 15-minute intervals. You can view more queries by selecting from the **Number of Queries** drop down. The chart colors may change for a specific Query ID when you do this.
+The **Long running queries** tab shows the top 5 Query IDs by average duration per execution, aggregated in 15-minute intervals. You can view more Query IDs by selecting from the **Number of Queries** drop down. The chart colors may change for a specific Query ID when you do this.
+
+> [!Note]
+> Displaying the query text is no longer supported, and the query text will show as empty. The query text is removed to avoid unauthorized access to the query text or underlying schema, which can pose a security risk.
+
+The recommended steps to view the query text are shared below:
+ 1. Identify the query_id of the top queries from the Query Performance Insight blade in the Azure portal.
+1. Log in to your Azure Database for MySQL server from MySQL Workbench, the mysql.exe client, or your preferred query tool, and execute the following queries.
+
+```sql
+ SELECT * FROM mysql.query_store WHERE query_id = '<insert query id from the Query Performance Insight blade in the Azure portal>'; -- for queries in Query Store
+ SELECT * FROM mysql.query_store_wait_stats WHERE query_id = '<insert query id from the Query Performance Insight blade in the Azure portal>'; -- for wait statistics
+```
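If you prefer to run these lookups from a script instead of MySQL Workbench or the mysql client, the following is a minimal sketch using the `mysql-connector-python` package. The package choice, connection values, and query ID are assumptions for illustration; substitute your own server name, credentials, and the query ID from the Query Performance Insight blade.

```python
import mysql.connector  # pip install mysql-connector-python

# Placeholder connection values; Azure Database for MySQL typically requires SSL.
conn = mysql.connector.connect(
    host="yourserver.mysql.database.azure.com",
    user="your_admin_user",
    password="your_password",
    database="mysql",
    ssl_disabled=False,
)

query_id = "12345"  # query ID copied from the Query Performance Insight blade
cursor = conn.cursor(dictionary=True)
cursor.execute("SELECT * FROM mysql.query_store WHERE query_id = %s", (query_id,))
for row in cursor.fetchall():
    print(row)  # each row holds the stored query details, including the query text

cursor.close()
conn.close()
```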
You can click and drag in the chart to narrow down to a specific time window. Alternatively, use the zoom in and out icons to view a smaller or larger time period respectively.
Select the **Wait Statistics** tab to view the corresponding visualizations on w
Queries displayed in the wait statistics view are grouped by the queries that exhibit the largest waits during the specified time interval.
+> [!Note]
+> Displaying the query text is no longer supported, and the query text will show as empty. The query text is removed to avoid unauthorized access to the query text or underlying schema, which can pose a security risk.
+
+The recommended steps to view the query text are shared below:
+ 1. Identify the query_id of the top queries from the Query Performance Insight blade in the Azure portal.
+1. Log in to your Azure Database for MySQL server from MySQL Workbench, the mysql.exe client, or your preferred query tool, and execute the following queries.
+
+```sql
+ SELECT * FROM mysql.query_store WHERE query_id = '<insert query id from the Query Performance Insight blade in the Azure portal>'; -- for queries in Query Store
+ SELECT * FROM mysql.query_store_wait_stats WHERE query_id = '<insert query id from the Query Performance Insight blade in the Azure portal>'; -- for wait statistics
+```
+ :::image type="content" source="./media/concepts-query-performance-insight/query-performance-insight-wait-statistics.png" alt-text="Query Performance Insight waits statistics"::: ## Next steps
purview Catalog Private Link End To End https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-private-link-end-to-end.md
Previously updated : 09/27/2021 Last updated : 01/12/2022 # Customer intent: As an Azure Purview admin, I want to set up private endpoints for my Azure Purview account to access purview account and scan data sources from restricted network.
Using one of the deployment options explained further in this guide, you can dep
- After completing the steps in this guide, add required DNS A records in your existing DNS servers manually. 3. Deploy a [new Purview account](#option-1deploy-a-new-azure-purview-account-with-account-portal-and-ingestion-private-endpoints) with account, portal and ingestion private endpoints, or deploy private endpoints for an [existing Purview account](#option-2enable-account-portal-and-ingestion-private-endpoint-on-existing-azure-purview-accounts). 4. [Enable access to Azure Active Directory](#enable-access-to-azure-active-directory) if your private network has network security group rules set to deny for all public internet traffic.
-5. Deploy and register [Self-hosted integration runtime](#deploy-self-hosted-integration-runtime-ir-and-scan-your-data-sources) inside the same VNet where Azure Purview ingestion private endpoints are deployed.
+5. Deploy and register a [self-hosted integration runtime](#deploy-self-hosted-integration-runtime-ir-and-scan-your-data-sources) inside the same VNet or a peered VNet where the Azure Purview account and ingestion private endpoints are deployed.
6. After completing this guide, adjust DNS configurations if needed. 7. Validate your network and name resolution between management machine, self-hosted IR VM and data sources to Azure Purview.
Once you deploy ingestion private endpoints for your Azure Purview, you need to
- All on-premises source types like Microsoft SQL Server, Oracle, SAP, and others are currently supported only via self-hosted IR-based scans. The self-hosted IR must run within your private network and then be peered with your virtual network in Azure. -- For all Azure source types like Azure Blob Storage and Azure SQL Database, you must explicitly choose to run the scan by using a self-hosted integration runtime that is deployed in the same VNet as Azure Purview account and ingestion private endpoints.
+- For all Azure source types like Azure Blob Storage and Azure SQL Database, you must explicitly choose to run the scan by using a self-hosted integration runtime that is deployed in the same VNet or a peered VNet where the Azure Purview account and ingestion private endpoints are deployed.
Follow the steps in [Create and manage a self-hosted integration runtime](manage-integration-runtimes.md) to set up a self-hosted IR. Then set up your scan on the Azure source by choosing that self-hosted IR in the **Connect via integration runtime** dropdown list to ensure network isolation.
purview Catalog Private Link Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-private-link-troubleshoot.md
Previously updated : 01/10/2022 Last updated : 01/12/2022 # Customer intent: As a Purview admin, I want to set up private endpoints for my Purview account, for secure access.
This guide summarizes known limitations related to using private endpoints for A
- Using Azure integration runtime to scan data sources behind private endpoint is not supported. - Using Azure portal, the ingestion private endpoints can be created via the Azure Purview portal experience described in the preceding steps. They can't be created from the Private Link Center. - Creating DNS A records for ingestion private endpoints inside existing Azure DNS Zones, while the Azure Private DNS Zones are located in a different subscription than the private endpoints is not supported via the Azure Purview portal experience. A records can be added manually in the destination DNS Zones in the other subscription. -- Self-hosted integration runtime machine must be deployed in the same VNet where Azure Purview ingestion private endpoint is deployed.
+- The self-hosted integration runtime machine must be deployed in the same VNet or a peered VNet where the Azure Purview account and ingestion private endpoints are deployed.
- We currently do not support scanning a Power BI tenant that has a private endpoint configured with public access blocked. - For limitations related to the Private Link service, see [Azure Private Link limits](../azure-resource-manager/management/azure-subscription-service-limits.md#private-link-limits).
purview Concept Best Practices Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-best-practices-accounts.md
Title: Purview accounts architecture and best practices
+ Title: Azure Purview accounts architecture and best practices
description: This article provides examples of Azure Purview accounts architectures and describes best practices.
Last updated 10/12/2021
# Azure Purview accounts architectures and best practices
-Azure Purview is a unified data governance solution. You deploy an Azure Purview account to centrally manage data governance across your data estate, spanning both cloud and on-prem environments. To use Azure Purview as your centralized data governance solution, you need to deploy one or more Purview accounts inside your Azure subscription. We recommend keeping the number of Purview instances as minimum, however, in some cases more Purview instances are needed to fulfill business security and compliance requirements.
+Azure Purview is a unified data governance solution. You deploy an Azure Purview account to centrally manage data governance across your data estate, spanning both cloud and on-prem environments. To use Azure Purview as your centralized data governance solution, you need to deploy one or more Azure Purview accounts inside your Azure subscription. We recommend keeping the number of Azure Purview instances to a minimum; however, in some cases more Azure Purview instances are needed to fulfill business security and compliance requirements.
-## Single Purview account
+## Single Azure Purview account
-Consider deploying minimum number of Purview accounts for the entire organization. This approach takes maximum advantage of the "network effects" where the value of the platform increases exponentially as a function of the data that resides inside the platform.
+Consider deploying the minimum number of Azure Purview accounts for the entire organization. This approach takes maximum advantage of the "network effects" where the value of the platform increases exponentially as a function of the data that resides inside the platform.
-Use [Azure Purview collections hierarchy](./concept-best-practices-collections.md) to lay out your organization's data management structure inside a single Purview account. In this scenario, one Purview account is deployed in an Azure subscription. Data sources from one or more Azure subscriptions can be registered and scanned inside the Azure Purview. You can also register and scan data sources from your on-premises or multi-cloud environments.
+Use the [Azure Purview collections hierarchy](./concept-best-practices-collections.md) to lay out your organization's data management structure inside a single Azure Purview account. In this scenario, one Azure Purview account is deployed in an Azure subscription. Data sources from one or more Azure subscriptions can be registered and scanned inside Azure Purview. You can also register and scan data sources from your on-premises or multi-cloud environments.
:::image type="content" source="media/concept-best-practices/accounts-single-account.png" alt-text="Screenshot that shows the single Azure Purview account."lightbox="media/concept-best-practices/accounts-single-account.png":::
-## Multiple Purview accounts
+## Multiple Azure Purview accounts
Some organizations may require setting up multiple Azure Purview accounts. Review the following scenarios as a few examples when defining your Azure Purview accounts architecture: ### Testing new features
-It is recommended to create a new instance of Purview account when testing scan configurations or classifications in isolated environments. For some scenarios, there is a "versioning" feature in some areas of the platform such as glossary, however, it would be easier to have a "disposable" instance of Purview to freely test expected functionality and then plan to roll out the feature into the production instance.
+It is recommended to create a new Azure Purview account when testing scan configurations or classifications in isolated environments. For some scenarios, there is a "versioning" feature in some areas of the platform such as glossary; however, it would be easier to have a "disposable" instance of Azure Purview to freely test expected functionality and then plan to roll out the feature into the production instance.
-Additionally, consider using a test Purview account when you cannot perform a rollback. For example, currently you cannot remove a glossary term attribute from a Purview instance once it is added to your Purview account. In this case, it is recommended using a test Purview account first.
+Additionally, consider using a test Azure Purview account when you cannot perform a rollback. For example, currently you cannot remove a glossary term attribute from an Azure Purview instance once it is added to your Azure Purview account. In this case, it is recommended to use a test Azure Purview account first.
### Isolating Production and non-production environments
-Consider deploying separate instances of Purview accounts for development, testing and production environments, specially when you have separate instances of data for each environment.
+Consider deploying separate Azure Purview accounts for development, testing, and production environments, especially when you have separate instances of data for each environment.
-In this scenario, production and non-production data sources can be registered and scanned inside their corresponding Purview instances.
+In this scenario, production and non-production data sources can be registered and scanned inside their corresponding Azure Purview instances.
-Optionally, you can register a data source in more than one Purview instance, if needed.
+Optionally, you can register a data source in more than one Azure Purview instance, if needed.
:::image type="content" source="media/concept-best-practices/accounts-multiple-accounts.png" alt-text="Screenshot that shows multiple Azure Purview accounts based on environments."lightbox="media/concept-best-practices/accounts-multiple-accounts.png"::: ### Fulfilling compliance requirements
-When you scan data sources in Azure Purview, information related to your metadata is ingested and stored inside your Azure Purview Data Map in the Azure region where your Purview account is deployed. Consider deploying separate instances of Azure Purview if you have specific regulatory and compliance requirements that include even having metadata in a specific geographical location.
+When you scan data sources in Azure Purview, information related to your metadata is ingested and stored inside your Azure Purview Data Map in the Azure region where your Azure Purview account is deployed. Consider deploying separate instances of Azure Purview if you have specific regulatory and compliance requirements that include even having metadata in a specific geographical location.
-If your organization has data in multiple geographies and you must keep metadata in the same region as the actual data, you have to deploy multiple Purview instances, one for each geography. In this case, data sources from each regions should be registered and scanned in the Purview account that corresponds to the data source region or geography.
+If your organization has data in multiple geographies and you must keep metadata in the same region as the actual data, you have to deploy multiple Azure Purview instances, one for each geography. In this case, data sources from each region should be registered and scanned in the Azure Purview account that corresponds to the data source region or geography.
:::image type="content" source="media/concept-best-practices/accounts-multiple-regions.png" alt-text="Screenshot that shows multiple Azure Purview accounts based on compliance requirements."lightbox="media/concept-best-practices/accounts-multiple-regions.png"::: ### Having Data sources distributed across multiple tenants
-Currently, Purview doesn't support multi-tenancy. If you have Azure data sources distributed across multiple Azure subscriptions under different Azure Active Directory tenants, it is recommended deploying separate Azure Purview accounts under each tenant.
+Currently, Azure Purview doesn't support multi-tenancy. If you have Azure data sources distributed across multiple Azure subscriptions under different Azure Active Directory tenants, it is recommended to deploy separate Azure Purview accounts under each tenant.
-An exception applies to VM-based data sources and Power BI tenants.For more information about how to scan and register a cross tenant Power BI in a single Purview account, see, [Register and scan a cross-tenant Power BI](./register-scan-power-bi-tenant.md).
+An exception applies to VM-based data sources and Power BI tenants. For more information about how to scan and register a cross-tenant Power BI in a single Azure Purview account, see [Register and scan a cross-tenant Power BI](./register-scan-power-bi-tenant.md).
:::image type="content" source="media/concept-best-practices/accounts-multiple-tenants.png" alt-text="Screenshot that shows multiple Azure Purview accounts based on multi-tenancy requirements."lightbox="media/concept-best-practices/accounts-multiple-tenants.png"::: ### Billing model
-Review [Azure Purview Pricing model](https://azure.microsoft.com/pricing/details/azure-purview) when defining budgeting model and designing Azure Purview architecture for your organization. One billing is generated for a single Purview account in the subscription where Purview account is deployed. This model also applies to other Purview costs such as scanning and classifying metadata inside Purview Data Map.
+Review the [Azure Purview pricing model](https://azure.microsoft.com/pricing/details/azure-purview) when defining a budgeting model and designing the Azure Purview architecture for your organization. One bill is generated for a single Azure Purview account in the subscription where the Azure Purview account is deployed. This model also applies to other Azure Purview costs such as scanning and classifying metadata inside the Azure Purview Data Map.
-Some organizations often have many business units (BUs) that operate separately, and, in some cases, they don't even share billing with each other. In those cases, the organization will end up creating a Purview instance for each BU. This model is not ideal, however, may be necessary, especially because Business Units are often not willing to share Azure billing.
+Some organizations often have many business units (BUs) that operate separately, and, in some cases, they don't even share billing with each other. In those cases, the organization will end up creating an Azure Purview instance for each BU. This model is not ideal; however, it may be necessary, especially because business units are often not willing to share Azure billing.
For more information about cloud computing cost model in chargeback and showback models, see, [What is cloud accounting?](/azure/cloud-adoption-framework/strategy/cloud-accounting). ## Additional considerations and recommendations -- Keep the number of Purview accounts low for simplified administrative overhead. If you plan building multiple Purview accounts, you may require creating and managing additional scans, access control model, credentials, and runtimes across your Purview accounts. Additionally, you may need to manage classifications and glossary terms for each Purview account.
+- Keep the number of Azure Purview accounts low for simplified administrative overhead. If you plan to build multiple Azure Purview accounts, you may need to create and manage additional scans, access control models, credentials, and runtimes across your Azure Purview accounts. Additionally, you may need to manage classifications and glossary terms for each Azure Purview account.
-- Review your budgeting and financial requirements. If possible, use chargeback or showback model when using Azure services and divide the cost of Azure Purview across the organization to keep the number of Purview accounts minimum.
+- Review your budgeting and financial requirements. If possible, use a chargeback or showback model when using Azure services and divide the cost of Azure Purview across the organization to keep the number of Azure Purview accounts to a minimum.
- Use [Azure Purview collections](concept-best-practices-collections.md) to define metadata access control inside Azure Purview Data Map for your organization's business users, data management and governance teams. For more information, see [Access control in Azure Purview](./catalog-permissions.md). -- Review [Azure Purview limits](./how-to-manage-quotas.md#azure-purview-limits) before deploying any new Purview accounts. Currently, the default limit of Purview accounts per region, per tenant (all subscriptions combined) is 3. You may need to contact Microsoft support to increase this limit in your subscription or tenant before deploying extra instances of Azure Purview.ΓÇ»
+- Review [Azure Purview limits](./how-to-manage-quotas.md#azure-purview-limits) before deploying any new Azure Purview accounts. Currently, the default limit of Azure Purview accounts per region, per tenant (all subscriptions combined) is 3. You may need to contact Microsoft support to increase this limit in your subscription or tenant before deploying extra instances of Azure Purview.
-- Review [Azure Purview prerequisites](./create-catalog-portal.md#prerequisites) before deploying any new Purview accounts in your environment.
+- Review [Azure Purview prerequisites](./create-catalog-portal.md#prerequisites) before deploying any new Azure Purview accounts in your environment.
## Next steps - [Create a Purview account](./create-catalog-portal.md)
purview Concept Best Practices Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-best-practices-automation.md
Last updated 11/23/2021
# Azure Purview automation best practices
-While Azure Purview provides an out of the box user experience with Purview Studio, not all tasks are suited to the point-and-click nature of the graphical user experience.
+While Azure Purview provides an out-of-the-box user experience with Azure Purview Studio, not all tasks are suited to the point-and-click nature of the graphical user experience.
For example: * Triggering a scan to run as part of an automated process.
When to use?
* [Docs](/python/api/azure-mgmt-purview/?view=azure-python&preserve-view=true) | [PyPi](https://pypi.org/project/azure-mgmt-purview/) azure-mgmt-purview ## Next steps
-* [Azure Purview REST API](/rest/api/purview)
+* [Azure Purview REST API](/rest/api/purview)
purview Concept Best Practices Classification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-best-practices-classification.md
Here are some considerations to bear in mind as you're defining classifications:
* The sampling rules apply to resource sets as well. For more information, see the "Resource set file sampling" section in [Supported data sources and file types in Azure Purview](./sources-and-scans.md#resource-set-file-sampling). * Custom classifications can't be applied on document type assets using custom classification rules. Classifications for such types can be applied manually only. * Custom classifications aren't included in any default scan rules. Therefore, if automatic assignment of custom classifications is expected, you must deploy and use a custom scan rule that includes the custom classification to run the scan.
-* If you apply classifications manually from Purview Studio, such classifications are retained in subsequent scans.
+* If you apply classifications manually from Azure Purview Studio, such classifications are retained in subsequent scans.
* Subsequent scans won't remove any classifications from assets, if they were detected previously, even if the classification rules are inapplicable. * For *encrypted source* data assets, Azure Purview picks only file names, fully qualified names, schema details for structured file types, and database tables. For classification to work, decrypt the encrypted data before you run scans.
purview Concept Best Practices Collections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-best-practices-collections.md
Title: Purview collections architecture and best practices
+ Title: Azure Purview collections architecture and best practices
description: This article provides examples of Azure Purview collections architectures and describes best practices.
Consider deploying collections in Azure Purview to fulfill the following require
### Design recommendations -- Review the [Azure Purview account best practices](./deployment-best-practices.md#determine-the-number-of-purview-instances) and define the adequate number of Purview accounts required in your organization before you plan the collection structure.
+- Review the [Azure Purview account best practices](./deployment-best-practices.md#determine-the-number-of-azure-purview-instances) and define the appropriate number of Azure Purview accounts required in your organization before you plan the collection structure.
- We recommend that you design your collection architecture based on the security requirements and data management and governance structure of your organization. Review the recommended [collections archetypes](#collections-archetypes) in this article.
Consider deploying collections in Azure Purview to fulfill the following require
### Design considerations -- Each Purview account is created with a default _root collection_. The root collection name is the same as your Azure Purview account name. The root collection can't be removed. To change the root collection's friendly name, you can change the friendly name of your Purview account from Purview Management center.
+- Each Azure Purview account is created with a default _root collection_. The root collection name is the same as your Azure Purview account name. The root collection can't be removed. To change the root collection's friendly name, you can change the friendly name of your Azure Purview account from Azure Purview Management center.
- Collections can hold data sources, scans, assets, and role assignments.
Consider deploying collections in Azure Purview to fulfill the following require
- A collections hierarchy in an Azure Purview can support as many as 256 collections, with a maximum of eight levels of depth. This doesn't include the root collection. -- By design, you can't register data sources multiple times in a single Purview account. This architecture helps to avoid the risk of assigning different levels of access control to a single data source. If multiple teams consume the metadata of a single data source, you can register and manage the data source in a parent collection. You can then create corresponding scans under each subcollection so that relevant assets appear under each child collection.
+- By design, you can't register data sources multiple times in a single Azure Purview account. This architecture helps to avoid the risk of assigning different levels of access control to a single data source. If multiple teams consume the metadata of a single data source, you can register and manage the data source in a parent collection. You can then create corresponding scans under each subcollection so that relevant assets appear under each child collection.
- Lineage connections and artifacts are attached to the root collection even if the data sources are registered at lower-level collections.
Consider deploying collections in Azure Purview to fulfill the following require
## Define an authorization model
-Azure Purview data-plane roles are managed in Azure Purview. After you deploy a Purview account, the creator of the Purview account is automatically assigned the following roles at the root collection. You can use [Purview Studio](https://web.purview.azure.com/resource/) or a programmatic method to directly assign and manage roles in Azure Purview.
+Azure Purview data-plane roles are managed in Azure Purview. After you deploy an Azure Purview account, the creator of the Azure Purview account is automatically assigned the following roles at the root collection. You can use [Azure Purview Studio](https://web.purview.azure.com/resource/) or a programmatic method to directly assign and manage roles in Azure Purview.
- - **Collection Admins** can edit Purview collections and their details and add subcollections. They can also add users to other Purview roles on collections where they're admins.
+ - **Collection Admins** can edit Azure Purview collections and their details and add subcollections. They can also add users to other Azure Purview roles on collections where they're admins.
- **Data Source Admins** can manage data sources and data scans. - **Data Curators** can create, read, modify, and delete catalog data assets and establish relationships between assets. - **Data Readers** can access but not modify catalog data assets. ### Design recommendations -- Consider implementing [emergency access](/azure/active-directory/users-groups-roles/directory-emergency-access) or a break-glass strategy for the Collection Admin role at your Azure Purview root collection level to avoid Purview account-level lockouts. Document the process for using emergency accounts.
+- Consider implementing [emergency access](/azure/active-directory/users-groups-roles/directory-emergency-access) or a break-glass strategy for the Collection Admin role at your Azure Purview root collection level to avoid Azure Purview account-level lockouts. Document the process for using emergency accounts.
> [!NOTE]
- > In certain scenarios, you might need to use an emergency account to sign in to Azure Purview. You might need this type of account to fix organization-level access problems when nobody else can sign in to Purview or when other admins can't accomplish certain operations because of corporate authentication problems. We strongly recommended that you follow Microsoft best practices around implementing [emergency access accounts](/azure/active-directory/users-groups-roles/directory-emergency-access) by using cloud-only users.
+ > In certain scenarios, you might need to use an emergency account to sign in to Azure Purview. You might need this type of account to fix organization-level access problems when nobody else can sign in to Azure Purview or when other admins can't accomplish certain operations because of corporate authentication problems. We strongly recommended that you follow Microsoft best practices around implementing [emergency access accounts](/azure/active-directory/users-groups-roles/directory-emergency-access) by using cloud-only users.
>
- > Follow the instructions in [this article](./concept-account-upgrade.md#what-happens-when-your-upgraded-account-doesnt-have-a-collection-admin) to recover access to your Purview root collection if your previous Collection Admin is unavailable.
+ > Follow the instructions in [this article](./concept-account-upgrade.md#what-happens-when-your-upgraded-account-doesnt-have-a-collection-admin) to recover access to your Azure Purview root collection if your previous Collection Admin is unavailable.
- Minimize the number of root Collection Admins. Assign a maximum of three Collection Admin users at the root collection, including the SPN and your break-glass accounts. Assign your Collection Admin roles to the top-level collection or to subcollections instead.
Azure Purview data-plane roles are managed in Azure Purview. After you deploy a
- Azure Purview access management has moved into data plane. Azure Resource Manager roles aren't used anymore, so you should use Azure Purview to assign roles. -- In Azure Purview, you can assign roles to users, security groups, and service principals (including managed identities) from Azure Active Directory (Azure AD) on the same Azure AD tenant where the Purview account is deployed.
+- In Azure Purview, you can assign roles to users, security groups, and service principals (including managed identities) from Azure Active Directory (Azure AD) on the same Azure AD tenant where the Azure Purview account is deployed.
-- You must first add guest accounts to your Azure AD tenant as B2B users before you can assign Purview roles to external users.
+- You must first add guest accounts to your Azure AD tenant as B2B users before you can assign Azure Purview roles to external users.
- By default, Collection Admins don't have access to read or modify assets. But they can elevate their access and add themselves to more roles.
Azure Purview data-plane roles are managed in Azure Purview. After you deploy a
- For Azure Data Factory connection: to connect to Azure Data Factory, you have to be a Collection Admin for the root collection. -- If you need to connect to Azure Data Factory for lineage, grant the Data Curator role to the data factory's managed identity at your Purview root collection level. When you connect Data Factory to Purview in the authoring UI, Data Factory tries to add these role assignments automatically. If you have the Collection Admin role on the Purview root collection, this operation will work.
+- If you need to connect to Azure Data Factory for lineage, grant the Data Curator role to the data factory's managed identity at your Azure Purview root collection level. When you connect Data Factory to Azure Purview in the authoring UI, Data Factory tries to add these role assignments automatically. If you have the Collection Admin role on the Azure Purview root collection, this operation will work.
## Collections archetypes
The collection hierarchy consists of these verticals:
- Departments (a delegated collection for each department) - Teams or projects (further segregation based on teams or projects)
-In this scenario, each region has a subcollection of its own under the top-level collection in the Purview account. Data sources are registered and scanned in the corresponding subcollections in their own geographic locations. So assets also appear in the subcollection hierarchy for the region.
+In this scenario, each region has a subcollection of its own under the top-level collection in the Azure Purview account. Data sources are registered and scanned in the corresponding subcollections in their own geographic locations. So assets also appear in the subcollection hierarchy for the region.
If you have centralized data management and governance teams, you can grant them access from the top-level collection. When you do, they gain oversight for the entire data estate in the data map. Optionally, the centralized team can register and scan any shared data sources.
The collection hierarchy consists of these verticals:
- Geographic locations (mid-level collections based on geographic locations where data sources and data owners are located) - Major business functions or clients (further segregation based on functions or clients)
-Each region has a subcollection of its own under the top-level collection in the Purview account. Data sources are registered and scanned in the corresponding subcollections in their own geographic locations. So assets are added to the subcollection hierarchy for the region.
+Each region has a subcollection of its own under the top-level collection in the Azure Purview account. Data sources are registered and scanned in the corresponding subcollections in their own geographic locations. So assets are added to the subcollection hierarchy for the region.
If you have centralized data management and governance teams, you can grant them access from the top-level collection. When you do, they gain oversight for the entire data estate in the data map. Optionally, the centralized team can register and scan any shared data sources.
If you want to implement data democratization across an entire organization, ass
If you need to restrict access to metadata search and discovery in your organization, assign Data Reader and Data Curator roles at the specific collection level. For example, you could restrict US employees so they can read data only at the US collection level and not in the LATAM collection.
-You can apply a combination of these two scenarios in your Purview data map if total data democratization is required with a few exceptions for some collections. You can assign Purview roles at the top-level collection and restrict inheritance to the specific child collections.
+You can apply a combination of these two scenarios in your Azure Purview data map if total data democratization is required with a few exceptions for some collections. You can assign Azure Purview roles at the top-level collection and restrict inheritance to the specific child collections.
Assign the Collection Admin role to the centralized data security and management team at the top-level collection. Delegate further collection management of lower-level collections to corresponding teams. ## Next steps-- [Create a collection and assign permissions in Purview](./quickstart-create-collection.md)
+- [Create a collection and assign permissions in Azure Purview](./quickstart-create-collection.md)
- [Create and manage collections in Azure Purview](./how-to-create-and-manage-collections.md) - [Access control in Azure Purview](./catalog-permissions.md)
purview Concept Best Practices Glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-best-practices-glossary.md
You will also observe when there are language barriers, in which, most organizat
## Recommendations for implementing new glossary terms
-Creating terms is necessary to build the business vocabulary and apply it to assets within Azure Purview. When a new Purview account is created, by default, there are no built-in terms in the account.
+Creating terms is necessary to build the business vocabulary and apply it to assets within Azure Purview. When a new Azure Purview account is created, by default, there are no built-in terms in the account.
This creation process should follow strict naming standards to ensure that the glossary does not contain duplicate or competing terms.
This creation process should follow strict naming standards to ensure that the g
- Always use the provided search glossary terms feature before adding a new term. This will help you avoid adding duplicate terms to the glossary. - Avoid deploying terms with duplicated names. In Azure Purview, terms with the same name can exist under different parent terms. This can lead to confusion and should be well thought out before building your business glossary to avoid duplicated terms.
-Glossary terms in Purview are case sensitive and allow white space. The following shows a poorly executed example of implementing glossary terms and demonstrates the confusion caused:
+Glossary terms in Azure Purview are case sensitive and allow white space. The following shows a poorly executed example of implementing glossary terms and demonstrates the confusion caused:
:::image type="content" source="media/concept-best-practices/glossary-duplicated-term-search.png" alt-text="Screenshot that shows searching duplicated glossary terms.":::
As a best practice it always best to: Plan, search, and strictly follow standard
## Recommendations for deploying glossary term templates
-When building new term templates in Purview, review the following considerations:
+When building new term templates in Azure Purview, review the following considerations:
- Term templates are used to add custom attributes to glossary terms.-- By default, Purview offers several [out-of-the-box term attributes](./concept-business-glossary.md#custom-attributes) such as Name, Nick Name, Status, Definition, Acronym, Resources, Related terms, Synonyms, Stewards, Experts, and Parent term, which are found in the "System Default" template.
+- By default, Azure Purview offers several [out-of-the-box term attributes](./concept-business-glossary.md#custom-attributes) such as Name, Nick Name, Status, Definition, Acronym, Resources, Related terms, Synonyms, Stewards, Experts, and Parent term, which are found in the "System Default" template.
- Default attributes cannot be edited or deleted. - Custom attributes extend beyond default attributes, allowing the data curators to add more descriptive details to each term to completely describe the term in the organization. - As a reminder, Azure Purview stores only metadata. Attributes should describe the metadata, not the data itself.
When building new term templates in Purview, review the following considerations
- Terms may be imported with the "System default" or custom template. - When importing terms, use the sample .CSV file to guide you. This can save hours of frustration.-- When importing terms from a .CSV file, be sure that terms already existing in Purview are intended to be updated. When using the import feature, Purview will overwrite existing terms.
+- When importing terms from a .CSV file, be sure that terms already existing in Azure Purview are intended to be updated. When using the import feature, Azure Purview will overwrite existing terms.
- Before importing terms, test the import in a lab environment to ensure that no unexpected results occur, such as duplicate terms. - The email address for Stewards and Experts should be the primary address of the user from the Azure Active Directory group. Alternate email, user principal name, and non-Azure Active Directory emails are not yet supported. - Glossary terms provide four statuses: draft, approved, expired, and alert. Draft is not officially implemented, approved means the term is official/standard/approved for production, expired means the term should no longer be used, and alert means the term needs more attention.
For more information, see [Create, import, and export glossary terms](./how-to-c
## Recommendations for exporting glossary terms
-Exporting terms may be useful in Purview account to account, Backup, or Disaster Recovery scenarios. Exporting terms in Purview Studio must be done one term template at a time. Choosing terms from multiple templates will disable the "Export terms" button. As a best practice, using the "Term template" filter before bulk selecting will make the export process quick.
+Exporting terms may be useful in Azure Purview account-to-account, backup, or disaster recovery scenarios. Exporting terms in Azure Purview Studio must be done one term template at a time. Choosing terms from multiple templates will disable the "Export terms" button. As a best practice, using the "Term template" filter before bulk selecting will make the export process quick.
## Glossary Management
Exporting terms may be useful in Purview account to account, Backup, or Disaster
- While classifications and sensitivity labels are applied to assets automatically by the system based on classification rules, glossary terms are not applied automatically. - Similar to classifications, glossary terms can be mapped to assets at the asset level or scheme level.-- In Purview, terms can be added to assets in different ways:
+- In Azure Purview, terms can be added to assets in different ways:
- Manually, using Azure Purview Studio. - Using Bulk Edit mode to update up to 25 assets, using Azure Purview Studio. - Curated Code using the Atlas API.
purview Concept Best Practices Lineage Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-best-practices-lineage-azure-data-factory.md
Last updated 10/25/2021
# Azure Purview Data Lineage best practices
-Data Lineage is broadly understood as the lifecycle that spans the dataΓÇÖs origin, and where it moves over time across the data estate. Purview can capture lineage for data in different parts of your organization's data estate, and at different levels of preparation including:
+Data Lineage is broadly understood as the lifecycle that spans the data's origin, and where it moves over time across the data estate. Azure Purview can capture lineage for data in different parts of your organization's data estate, and at different levels of preparation including:
* Completely raw data staged from various platforms * Transformed and prepared data * Data used by visualization platforms
Data lineage is the process of describing what data exists, where it is
:::image type="content" source="./media/how-to-link-azure-data-factory/data-factory-connection.png" alt-text="Screen shot showing a data factory connection list." lightbox="./media/how-to-link-azure-data-factory/data-factory-connection.png":::
-* Each Data Factory instance can connect to only one Purview account. You can establish new connection in another Purview account, but this will turn existing connection to disconnected.
+* Each Data Factory instance can connect to only one Azure Purview account. You can establish a new connection in another Azure Purview account, but this will disconnect the existing connection.
:::image type="content" source="./media/how-to-link-azure-data-factory/warning-for-disconnect-factory.png" alt-text="Screenshot showing warning to disconnect Azure Data Factory.":::
-* Data factory's managed identity is used to authenticate lineage in Purview account, the data factory's managed identity Data Curator role on Purview root collection is required.
+* The data factory's managed identity is used to authenticate lineage to the Azure Purview account. The data factory's managed identity requires the Data Curator role on the Azure Purview root collection.
* No more than 10 data factories are supported at once. If you want to add more than 10 data factories at once, file a support ticket.

### Azure Data Factory activities
Data lineage is the process of describing what data exists, where it is
* Supported data sources in the data flow activity are listed in the **Data Flow support** section of [Connect to Azure Data Factory](how-to-link-azure-data-factory.md)
* Supported data sources in SSIS are listed in the **SSIS execute package activity support** section of [Lineage from SQL Server Integration Services](how-to-lineage-sql-server-integration-services.md)
-* Purview cannot capture lineage if Azure Data Factory copy activity use copy activity features listed in **Limitations on copy activity lineage** of [Connect to Azure Data Factory](how-to-link-azure-data-factory.md)
+* Azure Purview cannot capture lineage if the Azure Data Factory copy activity uses the copy activity features listed in the **Limitations on copy activity lineage** section of [Connect to Azure Data Factory](how-to-link-azure-data-factory.md)
-* For the lineage of Dataflow activity, Purview only support source and sink. The lineage for Dataflow transformation is not supported yet.
+* For the lineage of the Dataflow activity, Azure Purview supports only source and sink. Lineage for Dataflow transformations is not supported yet.
-* Data flow lineage doesn't integrate with Purview resource set.
+* Data flow lineage doesn't integrate with Azure Purview resource set.
**Resource set example 1**
Data lineage is the process of describing what data exists, where it is
* For the lineage of Execute SSIS Package activity, we only support source and destination. The lineage for transformation is not supported yet.
- :::image type="content" source="./media/concept-best-practices-lineage/ssis-lineage.png" alt-text="Screenshot of the Execute SSIS lineage in Purview." lightbox="./media/concept-best-practices-lineage/ssis-lineage.png":::
+ :::image type="content" source="./media/concept-best-practices-lineage/ssis-lineage.png" alt-text="Screenshot of the Execute SSIS lineage in Azure Purview." lightbox="./media/concept-best-practices-lineage/ssis-lineage.png":::
-* Please refer the following step-by-step guide to [push Azure Data Factory lineage in Purview](../data-factory/tutorial-push-lineage-to-purview.md).
+* Refer to the following step-by-step guide to [push Azure Data Factory lineage to Azure Purview](../data-factory/tutorial-push-lineage-to-purview.md).
## Next steps

- [Manage data sources](./manage-data-sources.md)
purview Concept Best Practices Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-best-practices-migration.md
Title: Purview migration best practices
+ Title: Azure Purview migration best practices
description: This article provides steps to perform backup and recovery for migration best practices.
Below steps are referring to [Azure Purview API documentation](/rest/api/purview
|**Data sources**|Call the [Get all data sources API](/rest/api/purview/scanningdataplane/scans/list-by-data-source) to list data sources with details. You also have to get the triggers by calling the [Get trigger API](/rest/api/purview/scanningdataplane/triggers/get-trigger). There is also a [Create data sources API](/rest/api/purview/scanningdataplane/data-sources/create-or-update) if you need to re-create the sources in bulk in the new account (a sketch of calling these APIs follows the table).|
|**Credentials**|Create and maintain credentials used while scanning. There is no API to extract credentials, so this must be redone in the new account.|
|**Self-hosted integration runtime (SHIR)**|Get a list of SHIRs and get updated keys from the new account, then update the SHIRs. This must be done [manually inside the SHIRs' hosts](manage-integration-runtimes.md#create-a-self-hosted-integration-runtime).|
-|**ADF connections**|Currently an ADF can be connected to one Purview at a time. You must disconnect ADF from failed Purview account and reconnect it to the new account later.|
+|**ADF connections**|Currently an ADF can be connected to one Azure Purview at a time. You must disconnect ADF from failed Azure Purview account and reconnect it to the new account later.|
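
The following rough sketch shows how the data-source inventory step could be scripted against the scanning data plane of the account being migrated. The account name is a placeholder and the `api-version` value is an assumption; check the Azure Purview scanning REST reference for the current version.

```python
# Rough sketch: inventory the registered data sources of the source account
# before re-creating them in the new account. Account name is a placeholder,
# and the api-version is an assumption to verify against the REST reference.
import requests
from azure.identity import DefaultAzureCredential

account_name = "<source-purview-account>"   # placeholder
token = DefaultAzureCredential().get_token("https://purview.azure.net/.default").token

url = f"https://{account_name}.purview.azure.com/scan/datasources"
response = requests.get(
    url,
    params={"api-version": "2022-02-01-preview"},   # assumed version
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()

# Print each registered data source name and its kind for the migration inventory.
for source in response.json().get("value", []):
    print(source["name"], source.get("kind"))
```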
### Run scans
To complete the asset migration, you must remap the relationships. There are thr
> Before migrating terms, you need to migrate the term templates. This step should already be covered in the custom `typedef` migration.

#### Using Azure Purview Portal
-The quickest way to migrate glossary terms is to [export terms to a .csv file](how-to-create-import-export-glossary.md). You can do this using the Purview Studio.
+The quickest way to migrate glossary terms is to [export terms to a .csv file](how-to-create-import-export-glossary.md). You can do this using the Azure Purview Studio.
#### Using Azure Purview API

To automate glossary migration, you first need to get the glossary `guid` (`glossaryGuid`) via [List Glossaries API](/rest/api/purview/catalogdataplane/glossary/list-glossaries). The `glossaryGuid` is the top/root level glossary `guid`.
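
The following minimal sketch, with a placeholder account name, retrieves the glossaries and their `guid` values through the catalog data plane; verify the route against the List Glossaries reference linked above.

```python
# Sketch only: list glossaries in the source account to capture the
# top-level glossaryGuid before exporting terms. Account name is a placeholder.
import requests
from azure.identity import DefaultAzureCredential

account_name = "<source-purview-account>"   # placeholder
token = DefaultAzureCredential().get_token("https://purview.azure.net/.default").token

url = f"https://{account_name}.purview.azure.com/catalog/api/atlas/v2/glossary"
response = requests.get(url, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()

# Each entry includes the glossary name and its guid (the glossaryGuid).
for glossary in response.json():
    print(glossary["name"], glossary["guid"])
```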
If you have extracted asset information from previous steps, the contact details
To assign contacts to assets, you need a list of `guids` and you must identify all `objectid` values of the contacts. You can automate this process by iterating through all assets and reassigning contacts to them using the [Create Or Update Entities API](/rest/api/purview/catalogdataplane/entity/create-or-update-entities)

## Next steps
-- [Create a Purview account](./create-catalog-portal.md)
+- [Create an Azure Purview account](./create-catalog-portal.md)
purview Concept Best Practices Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-best-practices-network.md
Previously updated : 09/29/2021 Last updated : 01/13/2022

# Azure Purview network architecture and best practices
You must use private endpoints for your Azure Purview account if you have any of
### Integration runtime options

-- If your data sources are in Azure, you need to set up and use a self-hosted integration runtime on a Windows virtual machine that's deployed inside the same virtual network where Azure Purview ingestion private endpoints are deployed. The Azure integration runtime won't work with ingestion private endpoints.
+- If your data sources are in Azure, you need to set up and use a self-hosted integration runtime on a Windows virtual machine that's deployed inside the same or a peered virtual network where Azure Purview ingestion private endpoints are deployed. The Azure integration runtime won't work with ingestion private endpoints.
- To scan on-premises data sources, you can also install a self-hosted integration runtime either on an on-premises Windows machine or on a VM inside an Azure virtual network.
In hub-and-spoke network architectures, your organization's data governance team
In a hub-and-spoke architecture, you can deploy Azure Purview and one or more self-hosted integration runtime VMs in the hub subscription and virtual network. You can register and scan data sources from other virtual networks from multiple subscriptions in the same region.
-The self-hosted integration runtime VMs must be in the same virtual network as the ingestion private endpoint, but they can be in a separate subnet.
+The self-hosted integration runtime VMs can be deployed inside the same Azure virtual network or a peered virtual network where the account and ingestion private endpoints are deployed.
:::image type="content" source="media/concept-best-practices/network-pe-multi-vnet.png" alt-text="Screenshot that shows Azure Purview with private endpoints in a scenario of multiple virtual networks."lightbox="media/concept-best-practices/network-pe-multi-vnet.png":::
-You can optionally deploy an additional self-hosted integration runtime in the spoke virtual networks. In that case, you must deploy an additional account and ingestion private endpoint in the spoke virtual networks.
+You can optionally deploy an additional self-hosted integration runtime in the spoke virtual networks.
#### Multiple regions, multiple virtual networks

If your data sources are distributed across multiple Azure regions in one or more Azure subscriptions, you can use this scenario.
-For performance and cost optimization, we highly recommended deploying one or more self-hosted integration runtime VMs in each region where data sources are located. In that case, you need to deploy an additional account and ingestion private endpoint for the Azure Purview account in the region and virtual network where self-hosted integration runtime VMs are located.
-
-If you need to register and scan any Azure Data Lake Storage (Gen2) resources from other regions, you need to have a local self-hosted integration runtime VM in the region where the data source is located.
+For performance and cost optimization, we highly recommended deploying one or more self-hosted integration runtime VMs in each region where data sources are located.
:::image type="content" source="media/concept-best-practices/network-pe-multi-region.png" alt-text="Screenshot that shows Azure Purview with private endpoints in a scenario of multiple virtual networks and multiple regions."lightbox="media/concept-best-practices/network-pe-multi-region.png":::
If you need to scan some data sources by using an ingestion private endpoint and
### Integration runtime options -- To scan an Azure data source that's configured with a private endpoint, you need to set up and use a self-hosted integration runtime on a Windows virtual machine that's deployed inside the same virtual network where Azure Purview ingestion private endpoints are deployed.
+- To scan an Azure data source that's configured with a private endpoint, you need to set up and use a self-hosted integration runtime on a Windows virtual machine that's deployed inside the same or a peered virtual network where Azure Purview account and ingestion private endpoints are deployed.
When you're using a private endpoint with Azure Purview, you need to allow network connectivity from data sources to a self-hosted integration VM on the Azure virtual network where Azure Purview private endpoints are deployed.
purview Concept Best Practices Scanning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-best-practices-scanning.md
Title: Best practices for scanning of data sources in Purview
+ Title: Best practices for scanning of data sources in Azure Purview
description: This article provides best practices for registering and scanning various data sources in Azure Purview.
# Azure Purview scanning best practices
-Azure Purview supports automated scanning of on-prem, multi-cloud, and SaaS data sources. Running a "scan" invokes the process to ingest metadata from the registered data sources. The metadata curated at the end of scan and curation process includes technical metadata like data asset names (table names/ file names), file size, columns, data lineage and so on. For structured data sources (for example Relational Database Management System) the schema details are also captured. The curation process applies automated classification labels on the schema attributes based on the scan rule set configured, and sensitivity labels if your Purview account is connected to a Microsoft 365 Security & Compliance Center.
+Azure Purview supports automated scanning of on-prem, multi-cloud, and SaaS data sources. Running a "scan" invokes the process to ingest metadata from the registered data sources. The metadata curated at the end of scan and curation process includes technical metadata like data asset names (table names/ file names), file size, columns, data lineage and so on. For structured data sources (for example Relational Database Management System) the schema details are also captured. The curation process applies automated classification labels on the schema attributes based on the scan rule set configured, and sensitivity labels if your Azure Purview account is connected to a Microsoft 365 Security & Compliance Center.
## Why do you need best practices to manage data sources?
The design considerations and recommendations have been organized based on the k
- The hierarchy aligning with the organization's strategy (geographical, business function, source of data, and so on) that defines the data sources to be registered and scanned needs to be created using Collections.

-- By design, you cannot register data sources multiple times in the same Purview account. This architecture helps to avoid the risk of assigning different access control to the same data source.
+- By design, you cannot register data sources multiple times in the same Azure Purview account. This architecture helps to avoid the risk of assigning different access control to the same data source.
### Design recommendations
To avoid unexpected cost and rework, it is recommended to plan and follow the be
> This feature has cost considerations. Refer to the [pricing page](https://azure.microsoft.com/pricing/details/azure-purview/) for details.

3. **Set up a scan** for the registered data source(s)
- - **Scan name**: By default, Purview uses a naming convention **SCAN-[A-Z][a-z][a-z]** which is not helpful when trying to identify a scan that you have run. As a best practice, use a meaningful naming convention. An instance could be naming the scan as _environment-source-frequency-time_, for example DEVODS-Daily-0200, which would represent a daily scan at 0200 hrs.
+ - **Scan name**: By default, Azure Purview uses a naming convention **SCAN-[A-Z][a-z][a-z]**, which is not helpful when you are trying to identify a scan that you have run. As a best practice, use a meaningful naming convention. For instance, you could name the scan as _environment-source-frequency-time_, for example DEVODS-Daily-0200, which would represent a daily scan at 0200 hrs.
- **Authentication**: Azure Purview offers various authentication methods for scanning the data sources, depending on the type of source (Azure cloud, on-prem, or third-party sources). It is recommended to follow the least privilege principle when selecting the authentication method, using the following order of preference:
- - Purview MSI - Managed Identity (for example, for Azure Data Lake Gen2 sources)
+ - Azure Purview MSI - Managed Identity (for example, for Azure Data Lake Gen2 sources)
- User-assigned Managed Identity
- Service Principal
- SQL Authentication (for example, for on-prem or Azure SQL sources)
To avoid unexpected cost and rework, it is recommended to plan and follow the be
### Points to note

-- If a field / column, table, or a file is removed from the source system after the scan was executed, it will only be reflected (removed) in Purview after the next scheduled full / incremental scan.
+- If a field / column, table, or a file is removed from the source system after the scan was executed, it will only be reflected (removed) in Azure Purview after the next scheduled full / incremental scan.
- An asset can be deleted from the Azure Purview catalog using the **delete** icon under the name of the asset (this will not remove the object in the source). However, if you run a full scan on the same source, it would get reingested in the catalog. If you have scheduled a weekly / monthly (incremental) scan instead, the deleted asset will not be picked up unless the object is modified at the source (for example, a column is added / removed from the table).
-- To understand the behavior of subsequent scans after *manually* editing a data asset or an underlying schema through Purview Studio, refer to [Catalog asset details](./catalog-asset-details.md#scans-on-edited-assets).
+- To understand the behavior of subsequent scans after *manually* editing a data asset or an underlying schema through Azure Purview Studio, refer to [Catalog asset details](./catalog-asset-details.md#scans-on-edited-assets).
- For more details, refer to the tutorial on [how to view, edit, and delete assets](./catalog-asset-details.md)

## Next steps
purview Concept Best Practices Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-best-practices-security.md
Title: Purview security best practices
+ Title: Azure Purview security best practices
description: This article provides Azure Purview best practices.
For more information, see [Best practices related to connectivity to Azure PaaS
### Deploy private endpoints for Azure Purview accounts
-If you need to use Azure Purview from inside your private network, it is recommended to use Azure Private Link Service with your Azure Purview accounts for partial or [end-to-end isolation](catalog-private-link-end-to-end.md) to connect to Azure Purview Studio, access Purview endpoints and to scan data sources.
+If you need to use Azure Purview from inside your private network, it is recommended to use Azure Private Link Service with your Azure Purview accounts for partial or [end-to-end isolation](catalog-private-link-end-to-end.md) to connect to Azure Purview Studio, access Azure Purview endpoints and to scan data sources.
The Azure Purview _account_ private endpoint is used to add another layer of security, so only client calls that are originated from within the virtual network are allowed to access the Azure Purview account. This private endpoint is also a prerequisite for the portal private endpoint.
For more information, see [Azure Purview network architecture and best practices
You can disable Azure Purview public access to cut off access to the Azure Purview account completely from the public internet. In this case, you should consider the following requirements:

- Azure Purview must be deployed based on the [end-to-end network isolation scenario](catalog-private-link-end-to-end.md).
-- To access Purview Studio and Purview endpoints, you need to use a management machine that is connected to private network to access Azure Purview through private network.
+- To access Azure Purview Studio and Azure Purview endpoints, you need to use a management machine that is connected to the private network, so that Azure Purview is accessed through the private network.
- Review [known limitations](catalog-private-link-troubleshoot.md).
- To scan Azure platform as a service data sources, review [Support matrix for scanning data sources through ingestion private endpoint](catalog-private-link.md#support-matrix-for-scanning-data-sources-through-ingestion-private-endpoint).
- Azure data sources must also be configured with private endpoints.
Network Security Groups can be applied to network interface or Azure virtual net
For more information, see [apply NSG rules for private endpoints](../private-link/disable-private-endpoint-network-policy.md).
-The following NSG rules are required on **data sources** for Purview scanning:
+The following NSG rules are required on **data sources** for Azure Purview scanning:
|Direction |Source |Source port range |Destination |Destination port |Protocol |Action |
||||||||
|Inbound | Self-hosted integration runtime VMs' private IP addresses or subnets | * | Data Sources private IP addresses or Subnets | 443 | Any | Allow |
-The following NSG rules are required on from the **management machines** to access Purview Studio:
+The following NSG rules are required on the **management machines** to access Azure Purview Studio:
|Direction |Source |Source port range |Destination |Destination port |Protocol |Action |
||||||||
-|Outbound | Management machines' private IP addresses or subnets | * | Purview account and portal private endpoint IP addresses or subnets | 443 | Any | Allow |
+|Outbound | Management machines' private IP addresses or subnets | * | Azure Purview account and portal private endpoint IP addresses or subnets | 443 | Any | Allow |
|Outbound | Management machines' private IP addresses or subnets | * | Service tag: `AzureCloud` | 443 | Any | Allow |
-The following NSG rules are required on **self-hosted integration runtime VMs** for Purview scanning and metadata ingestion:
+The following NSG rules are required on **self-hosted integration runtime VMs** for Azure Purview scanning and metadata ingestion:
> [!IMPORTANT]
> Consider adding additional rules with relevant Service Tags, based on your data source types.
The following NSG rules are required on **self-hosted integration runtime VMs**
|Direction |Source |Source port range |Destination |Destination port |Protocol |Action |
||||||||
|Outbound | Self-hosted integration runtime VMs' private IP addresses or subnets | * | Data Sources private IP addresses or subnets | 443 | Any | Allow |
-|Outbound | Self-hosted integration runtime VMs' private IP addresses or subnets | * | Purview account and ingestion private endpoint IP addresses or Subnets | 443 | Any | Allow |
+|Outbound | Self-hosted integration runtime VMs' private IP addresses or subnets | * | Azure Purview account and ingestion private endpoint IP addresses or Subnets | 443 | Any | Allow |
|Outbound | Self-hosted integration runtime VMs' private IP addresses or subnets | * | Service tag: `Servicebus` | 443 | Any | Allow |
|Outbound | Self-hosted integration runtime VMs' private IP addresses or subnets | * | Service tag: `Storage` | 443 | Any | Allow |
|Outbound | Self-hosted integration runtime VMs' private IP addresses or subnets | * | Service tag: `AzureActiveDirectory` | 443 | Any | Allow |
The following NSG rules are required on **self-hosted integration runtime VMs**
|Outbound | Self-hosted integration runtime VMs' private IP addresses or subnets | * | Service tag: `KeyVault` | 443 | Any | Allow |
-The following NSG rules are required on for **Purview account, portal and ingestion private endpoints**:
+The following NSG rules are required for the **Azure Purview account, portal, and ingestion private endpoints**:
|Direction |Source |Source port range |Destination |Destination port |Protocol |Action |
||||||||
-|Inbound | Self-hosted integration runtime VMs' private IP addresses or subnets | * | Purview account and ingestion private endpoint IP addresses or subnets | 443 | Any | Allow |
-|Inbound | Management machines' private IP addresses or subnets | * | Purview account and ingestion private endpoint IP addresses or subnets | 443 | Any | Allow |
+|Inbound | Self-hosted integration runtime VMs' private IP addresses or subnets | * | Azure Purview account and ingestion private endpoint IP addresses or subnets | 443 | Any | Allow |
+|Inbound | Management machines' private IP addresses or subnets | * | Azure Purview account and ingestion private endpoint IP addresses or subnets | 443 | Any | Allow |
For more information, see [Self-hosted integration runtime networking requirements](manage-integration-runtimes.md#networking-requirements).
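
As an illustration, the outbound rule from the self-hosted integration runtime table above (SHIR subnet to the Azure Purview account and ingestion private endpoints over port 443) could be scripted as shown in the following sketch. The subscription, resource group, NSG name, CIDR ranges, and priority are placeholders, and the model and parameter names should be verified against your installed `azure-mgmt-network` version.

```python
# Sketch only: create one of the SHIR outbound NSG rules described above.
# All names, CIDR ranges, and the priority are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

subscription_id = "<subscription-id>"                # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

rule = SecurityRule(
    direction="Outbound",
    access="Allow",
    protocol="*",
    source_address_prefix="10.1.0.0/24",             # SHIR subnet (example)
    source_port_range="*",
    destination_address_prefix="10.2.0.0/27",        # Purview private endpoint subnet (example)
    destination_port_range="443",
    priority=200,
)

poller = client.security_rules.begin_create_or_update(
    resource_group_name="rg-network",                # placeholder
    network_security_group_name="nsg-shir",          # placeholder
    security_rule_name="Allow-SHIR-to-Purview-PE",
    security_rule_parameters=rule,
)
print(poller.result().provisioning_state)
```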
Related to roles and access management in Azure Purview, you can apply the follo
- Define roles and tasks needed to perform data management and governance using Azure Purview.
- Assign roles to Azure Active Directory groups instead of assigning roles to individual users.
- Use Azure [Active Directory Entitlement Management](../active-directory/governance/entitlement-management-overview.md) to map user access to Azure AD groups using Access Packages.
-- Enforce multi-factor authentication for Purview users, especially, for users with privileged roles such as collection admins, data source admins or data curators.
+- Enforce multi-factor authentication for Azure Purview users, especially for users with privileged roles such as collection admins, data source admins, or data curators.
### Manage an Azure Purview account in control plane and data plane
Examples of control plane operations and data plane operations:
|Task |Scope |Recommended role |What roles to use? |
|||||
-|Deploy a Purview account | Control plane | Azure subscription owner or contributor | Azure RBAC roles |
+|Deploy an Azure Purview account | Control plane | Azure subscription owner or contributor | Azure RBAC roles |
|Setup a Private Endpoint for Azure Purview | Control plane | Contributor  | Azure RBAC roles |
-|Delete a Purview account | Control plane | Contributor  | Azure RBAC roles |
-|View Purview metrics to get current capacity units | Control plane | Reader | Azure RBAC roles |
+|Delete an Azure Purview account | Control plane | Contributor  | Azure RBAC roles |
+|View Azure Purview metrics to get current capacity units | Control plane | Reader | Azure RBAC roles |
|Create a collection | Data plane | Collection Admin | Azure Purview roles |
|Register a data source | Data plane | Collection Admin | Azure Purview roles |
|Scan a SQL Server | Data plane | Data source admin and data reader or data curator | Azure Purview roles |
-|Search inside Purview Data Catalog | Data plane | Data source admin and data reader or data curator | Azure Purview roles |
+|Search inside Azure Purview Data Catalog | Data plane | Data source admin and data reader or data curator | Azure Purview roles |
-Azure Purview plane roles are defined and managed inside Azure Purview instance in Purview collections. For more information, see [Access control in Azure Purview](catalog-permissions.md#roles).
+Azure Purview plane roles are defined and managed inside Azure Purview instance in Azure Purview collections. For more information, see [Access control in Azure Purview](catalog-permissions.md#roles).
Follow [Azure role-based access recommendations](../role-based-access-control/best-practices.md) for Azure control plane tasks.

### Authentication and authorization
-To gain access to Azure Purview, users must be authenticated and authorized. Authentication is the process of proving the user is who they claim to be. Authorization refers to controlling access inside Purview assigned on collections.
+To gain access to Azure Purview, users must be authenticated and authorized. Authentication is the process of proving the user is who they claim to be. Authorization refers to controlling access inside Azure Purview assigned on collections.
-We use Azure Active Directory to provide authentication and authorization mechanisms for Purview inside Collections. You can assign Purview roles to the following security principals from your Azure Active Directory tenant which is associated with Azure subscription where your Azure Purview instance is hosted:
+We use Azure Active Directory to provide authentication and authorization mechanisms for Azure Purview inside Collections. You can assign Azure Purview roles to the following security principals from your Azure Active Directory tenant which is associated with Azure subscription where your Azure Purview instance is hosted:
- Users and guest users (if they are already added into your Azure AD tenant)
- Security groups
- Managed Identities
- Service Principals
-Azure Purview fine-grained roles can be assigned to a flexible Collections hierarchy inside the Purview instance.
+Azure Purview fine-grained roles can be assigned to a flexible Collections hierarchy inside the Azure Purview instance.
:::image type="content" source="media/concept-best-practices/security-access-management.png" alt-text="Screenshot that shows Azure Purview access management."lightbox="media/concept-best-practices/security-access-management.png":::
Azure Purview fine-grained roles can be assigned to a flexible Collections hiera
As a general rule, restricting access based on the [need to know](https://en.wikipedia.org/wiki/Need_to_know) and [least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege) security principles is imperative for organizations that want to enforce security policies for data access.
-In Azure Purview, data sources, assets and scans can be organized using [Azure Purview Collections](quickstart-create-collection.md). Collections are hierarchical grouping of metadata in Purview, but at the same time they provide a mechanism to manage access across Purview. Roles in Azure Purview can be assigned to a collection based on your collection's hierarchy.
+In Azure Purview, data sources, assets and scans can be organized using [Azure Purview Collections](quickstart-create-collection.md). Collections are hierarchical grouping of metadata in Azure Purview, but at the same time they provide a mechanism to manage access across Azure Purview. Roles in Azure Purview can be assigned to a collection based on your collection's hierarchy.
Use [Azure Purview collections](concept-best-practices-collections.md#define-a-collection-hierarchy) to implement your organization's metadata hierarchy for centralized or delegated management and governance based on a least-privilege model. Follow the least-privilege access model when assigning roles inside Azure Purview collections by segregating duties within your team and granting users only the amount of access they need to perform their jobs.
-For more information how to assign least privilege access model in Azure Purview, based on Purview collection hierarchy, see [Access control in Azure Purview](catalog-permissions.md#assign-permissions-to-your-users).
+For more information on how to assign a least-privilege access model in Azure Purview, based on the Azure Purview collection hierarchy, see [Access control in Azure Purview](catalog-permissions.md#assign-permissions-to-your-users).
### Lower exposure of privileged accounts

Securing privileged access is a critical first step to protecting business assets. Minimizing the number of people who have access to secure information or resources reduces the chance of a malicious user getting access, or of an authorized user inadvertently affecting a sensitive resource.
-Reduce the number of users with write access inside your Purview instance. Keep the number of collection admins and data curator roles minimum at root collection.
+Reduce the number of users with write access inside your Azure Purview instance. Keep the number of collection admin and data curator role assignments to a minimum at the root collection.
### Use multi-factor authentication and conditional access

[Azure Active Directory Multi-Factor Authentication](../active-directory/authentication/concept-mfa-howitworks.md) provides another layer of security and authentication. For more security, we recommend enforcing [conditional access policies](../active-directory/conditional-access/overview.md) for all privileged accounts.
-By using Azure Active Directory Conditional Access policies, apply Azure AD Multi-Factor Authentication at sign-in for all individual users who are assigned to Purview roles with modify access inside your Purview instances: Collection Admin, Data Source Admin, Data Curator.
+By using Azure Active Directory Conditional Access policies, apply Azure AD Multi-Factor Authentication at sign-in for all individual users who are assigned to Azure Purview roles with modify access inside your Azure Purview instances: Collection Admin, Data Source Admin, Data Curator.
Enable multi-factor authentication for your admin accounts and ensure that admin account users have registered for MFA.
In Azure, you can apply [resource locks](../azure-resource-manager/management/lo
Enable Azure resource lock for your Azure Purview accounts to prevent accidental deletion of Azure Purview instances in your Azure subscriptions.
-Adding a `CanNotDelete` or `ReadOnly` lock to Azure Purview account does not prevent deletion or modification operations inside Azure Purview data plane, however, it prevents any operations in control plane, such as deleting the Purview account, deploying a private endpoint or configuration of diagnostic settings.
+Adding a `CanNotDelete` or `ReadOnly` lock to an Azure Purview account does not prevent deletion or modification operations inside the Azure Purview data plane; however, it prevents any operations in the control plane, such as deleting the Azure Purview account, deploying a private endpoint, or configuring diagnostic settings.
For more information, see [Understand scope of locks](../azure-resource-manager/management/lock-resources.md#understand-scope-of-locks).
-Resource locks can be assigned to Purview resource groups or resources, however, you cannot assign an Azure resource lock to Purview Managed resources or managed Resource Group.
+Resource locks can be assigned to Azure Purview resource groups or resources, however, you cannot assign an Azure resource lock to Azure Purview Managed resources or managed Resource Group.
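
The following sketch applies a `CanNotDelete` lock at the Azure Purview account scope through the Azure Resource Manager management locks API. The subscription, resource group, and account names are placeholders.

```python
# Sketch only: apply a CanNotDelete management lock at the Azure Purview
# account scope. Subscription, resource group, and account names are placeholders.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"        # placeholder
resource_group = "<resource-group>"          # placeholder
purview_account = "<purview-account-name>"   # placeholder

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

scope = (f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
         f"/providers/Microsoft.Purview/accounts/{purview_account}")
url = (f"https://management.azure.com{scope}"
       f"/providers/Microsoft.Authorization/locks/purview-do-not-delete")

response = requests.put(
    url,
    params={"api-version": "2016-09-01"},
    json={"properties": {"level": "CanNotDelete",
                         "notes": "Protect the catalog from accidental deletion."}},
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()
print(response.json()["properties"]["level"])
```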
### Implement a break glass strategy

Plan for a break glass strategy for your Azure Active Directory tenant, Azure subscription and Azure Purview accounts to prevent tenant-wide account lockout. For more information about Azure AD and Azure emergency access planning, see [Manage emergency access accounts in Azure AD](../active-directory/roles/security-emergency-access.md).
-For more information about Azure Purview break glass strategy, see [Purview collections best practices and design recommendations](concept-best-practices-collections.md#design-recommendations).
+For more information about Azure Purview break glass strategy, see [Azure Purview collections best practices and design recommendations](concept-best-practices-collections.md#design-recommendations).
## Threat protection and preventing data exfiltration
For more information, see [Integrate Azure Purview with Azure security products]
### Secure metadata extraction and storage
-Azure Purview is a data governance solution in cloud. You can register and scan different data sources from various data systems from your on-premises, Azure, or multi-cloud environments into Azure Purview. While data source is registered and scanned in Purview, the actual data and data sources stay in their original locations, only metadata is extracted from data sources and stored in Purview Data Map which means, you do not need to move data out of the region or their original location to extract the metadata into Azure Purview.
+Azure Purview is a data governance solution in the cloud. You can register and scan different data sources from various data systems from your on-premises, Azure, or multi-cloud environments into Azure Purview. While a data source is registered and scanned in Azure Purview, the actual data and data sources stay in their original locations; only metadata is extracted from data sources and stored in the Azure Purview Data Map. This means you do not need to move data out of the region or its original location to extract the metadata into Azure Purview.
-When an Azure Purview account is deployed, in addition, a managed resource group is also deployed in your Azure subscription. A managed Azure Storage Account and a Managed Event Hub are deployed inside this resource group. The managed storage account is used to ingest metadata from data sources during the scan. Since these resources are consumed by the Azure Purview they cannot be accessed by any other users or principals, except the Azure Purview account. This is because an Azure role-based access control (RBAC) deny assignment is added automatically for all principals to this resource group at the time of Purview account deployment, preventing any CRUD operations on these resources if they are not initiated from Azure Purview.
+When an Azure Purview account is deployed, a managed resource group is also deployed in your Azure subscription. A managed Azure Storage account and a managed Event Hub are deployed inside this resource group. The managed storage account is used to ingest metadata from data sources during the scan. Since these resources are consumed by Azure Purview, they cannot be accessed by any other users or principals, except the Azure Purview account. This is because an Azure role-based access control (RBAC) deny assignment is added automatically for all principals to this resource group at the time of Azure Purview account deployment, preventing any CRUD operations on these resources if they are not initiated from Azure Purview.
### Where is metadata stored?
-Purview extracts only the metadata from different data source systems into [Azure Purview Data Map](concept-elastic-data-map.md) during the scanning process.
+Azure Purview extracts only the metadata from different data source systems into [Azure Purview Data Map](concept-elastic-data-map.md) during the scanning process.
-You can deploy a Purview account inside your Azure subscription in any [supported Azure regions](https://azure.microsoft.com/global-infrastructure/services/?products=purview&regions=all).
+You can deploy an Azure Purview account inside your Azure subscription in any of the [supported Azure regions](https://azure.microsoft.com/global-infrastructure/services/?products=purview&regions=all).
All metadata is stored inside Data Map inside your Azure Purview instance. This means the metadata is stored in the same region as your Azure Purview instance.
To connect to a data source Azure Purview requires a credential with read-only a
It is recommended to prioritize the use of the following credential options for scanning, when possible:
-1. Purview Managed Identity
+1. Azure Purview Managed Identity
2. User Assigned Managed Identity
3. Service Principals
4. Other options such as Account key, SQL Authentication, etc.
As a general rule, you can use the following options to set up integration runti
|Data source is inside an Azure IaaS VM such as SQL Server | Self-hosted integration runtime deployed in Azure | SQL Authentication or Basic Authentication (depending on Azure data source type) |
|Data source is inside an on-premises system such as SQL Server or Oracle | Self-hosted integration runtime deployed in Azure or in the on-premises network | SQL Authentication or Basic Authentication (depending on Azure data source type) |
|Multi-cloud | Azure runtime or self-hosted integration runtime based on data source types | Supported credential options vary based on data sources types |
-|Power BI tenant | Azure Runtime | Purview Managed Identity |
+|Power BI tenant | Azure Runtime | Azure Purview Managed Identity |
Use [this guide](purview-connector-overview.md) to read more about each connector and its supported authentication options.

## Additional recommendations
-### Define required number of Purview accounts for your organization
+### Define required number of Azure Purview accounts for your organization
-As part of security planning for implementation of Azure Purview in your organization, review your business and security requirements to define [how many Purview accounts are needed](concept-best-practices-accounts.md) in your organization. various factors may impact the decision, such as [multi-tenancy](/azure/cloud-adoption-framework/ready/enterprise-scale/enterprise-enrollment-and-azure-ad-tenants#define-azure-ad-tenants) billing or compliance requirements.
+As part of security planning for the implementation of Azure Purview in your organization, review your business and security requirements to define [how many Azure Purview accounts are needed](concept-best-practices-accounts.md) in your organization. Various factors may impact the decision, such as [multi-tenancy](/azure/cloud-adoption-framework/ready/enterprise-scale/enterprise-enrollment-and-azure-ad-tenants#define-azure-ad-tenants), billing, or compliance requirements.
### Apply security best practices for Self-hosted runtime VMs
For self-hosted integration runtime VMs deployed as virtual machines in Azure, f
- Lock down inbound traffic to your VMs using Network Security Groups and [Azure Defender access Just-in-Time](../defender-for-cloud/just-in-time-access-usage.md).
- Install antivirus or antimalware.
- Deploy Azure Defender to get insights around any potential anomaly on the VMs.
-- Limit the number of software in the self-hosted integration runtime VMs. Although it is not a mandatory requirement to have a dedicated VM for a self-hosted runtime for Purview, we highly suggest using dedicated VMs especially for production environments.
+- Limit the amount of software installed on the self-hosted integration runtime VMs. Although it is not a mandatory requirement to have a dedicated VM for a self-hosted runtime for Azure Purview, we highly suggest using dedicated VMs, especially for production environments.
- Monitor the VMs using [Azure Monitor for VMs](../azure-monitor/vm/vminsights-overview.md). By using the Log Analytics agent, you can capture telemetry such as performance metrics to adjust required capacity for your VMs.
- By integrating virtual machines with Microsoft Defender for Cloud, you can prevent, detect, and respond to threats.
- Keep your machines current. You can enable Automatic Windows Update or use [Update Management in Azure Automation](../automation/update-management/overview.md) to manage operating system level updates for the OS.
purview Concept Best Practices Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-best-practices-sensitivity-labels.md
Title: Best practices for applying sensitivity labels in Purview
+ Title: Best practices for applying sensitivity labels in Azure Purview
description: This article provides best practices for applying sensitivity labels in Azure Purview.
# Labeling best practices
-Azure Purview supports labeling of both structured and unstructured data stored across various data sources. Labeling of data within Purview allows users to easily find data that matches pre-defined auto-labeling rules that have been configured in the Microsoft 365 Security and Compliance Center (SCC). Azure Purview extends the use of Microsoft 365 sensitivity labels to assets stored in infrastructure cloud locations and structured data sources.
+Azure Purview supports labeling of both structured and unstructured data stored across various data sources. Labeling of data within Azure Purview allows users to easily find data that matches pre-defined auto-labeling rules that have been configured in the Microsoft 365 Security and Compliance Center (SCC). Azure Purview extends the use of Microsoft 365 sensitivity labels to assets stored in infrastructure cloud locations and structured data sources.
## Protect Personally Identifiable Information (PII) with a Custom Sensitivity Label for Azure Purview, using Microsoft Information Protection
It also abstracts the data itself, so you use labels to track the type of data,
### Label recommendations

-- When configuring sensitivity labels for Azure Purview, you may define autolabeling rules for files, database columns, or both within the label properties. Azure Purview will label files within the Purview data map when the autolabeling rule is configured to automatically apply the label or recommend that the label is applied.
+- When configuring sensitivity labels for Azure Purview, you may define autolabeling rules for files, database columns, or both within the label properties. Azure Purview will label files within the Azure Purview data map when the autolabeling rule is configured to automatically apply the label or recommend that the label is applied.
> [!WARNING]
> If you have not already configured autolabeling for files and emails on your sensitivity labels, keep in mind this can have user impact within your Office and Microsoft 365 environment. You may, however, test autolabeling on database columns without user impact.

-- If you are defining new autolabeling rules for files when configuring labels for Purview, make sure that you have the condition for applying the label set appropriately.
+- If you are defining new autolabeling rules for files when configuring labels for Azure Purview, make sure that you have the condition for applying the label set appropriately.
- You can set the detection criteria to **All of these** or **Any of these** in the upper right of the autolabeling for files and emails page of the label properties.
- The default setting for detection criteria is **All of these**, which means that the asset must contain all of the specified sensitive info types for the label to be applied. While the default setting may be valid in some instances, many customers prefer to change the setting to **Any of these**, meaning that if at least one of them is found, the label is applied.
It also abstracts the data itself, so you use labels to track the type of data,
- Build groups of Sensitivity Labels and store them as a dedicated Sensitivity Label Policy – for example, store all required Sensitivity Labels for Regulatory Rules by using the same Sensitivity Label Policy to publish.
- Capture all test cases for your labels and test your Label policies with all applications you want to secure.
- Promote Sensitivity Label Policies to Azure Purview.
-- Run test scans from Purview on different Data Sources (for Example Hybrid-Cloud, On-Premise) to identify Sensitivity Labels.
-- Gather and consider insights (for example by using Purview insights) and use alerting mechanism to mitigate potential breaches of Regulations.
+- Run test scans from Azure Purview on different data sources (for example, hybrid cloud, on-premises) to identify Sensitivity Labels.
+- Gather and consider insights (for example, by using Azure Purview insights) and use an alerting mechanism to mitigate potential breaches of regulations.
By using Sensitivity Labels with Azure Purview, you are able to extend Microsoft Information Protection beyond the border of your Microsoft data estate to your on-premises, hybrid-cloud, multi-cloud, and SaaS scenarios.
purview Concept Default Purview Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-default-purview-account.md
Last updated 12/01/2021
-# Default Purview Account
+# Default Azure Purview Account
-In general, our guidance is to have a single Purview account for entire customer's data estate. However, there are cases in which customers would like to have multiple Purview accounts in their organization. The top reasons for different Purview accounts are listed below:
+In general, our guidance is to have a single Azure Purview account for the customer's entire data estate. However, there are cases in which customers would like to have multiple Azure Purview accounts in their organization. The top reasons for different Azure Purview accounts are listed below:
* Testing new configurations - Customers want to create multiple catalogs for testing out configurations such as scan or classification rules before moving the configuration to a higher environment like pre-production or production.
* Storing test/pre-production/production data separately - Customers want to create different catalogs for different kinds of data stored in different environments.
-* Conglomerates - Conglomerates often have many business units (BUs) that operate separately to the extent that they won't even share billing with each other. Hence, this might require the conglomerates to create different Purview accounts for different BUs.
+* Conglomerates - Conglomerates often have many business units (BUs) that operate separately to the extent that they won't even share billing with each other. Hence, this might require the conglomerates to create different Azure Purview accounts for different BUs.
-* Compliance - There are some strict compliance regulations, which treat even metadata as sensitive and require it to be in a particular geography. For the same reason customers might end up with multiple Purview accounts per region.
+* Compliance - There are some strict compliance regulations, which treat even metadata as sensitive and require it to be in a particular geography. For the same reason customers might end up with multiple Azure Purview accounts per region.
-Having multiple Purview accounts in a tenant now poses the challenge of which Purview account should all other services like PBI, Synapse connect to. A PBI admin or Synapse Admin who is given the responsibility of pairing their PBI tenant or Synapse account with right Purview account. This is where default Purview account will help our customers. Azure global administrator (or tenant admin) can designate a Purview account as default Purview account at tenant level. At any point in time a tenant can have only 0 or 1 default accounts. Once this is set PBI Admin or Synapse Admin or any user in your organization has clear understanding that this account is the "right" one, discover the same and all other services should connect to this one.
+Having multiple Azure Purview accounts in a tenant poses the challenge of deciding which Azure Purview account all other services, like Power BI (PBI) and Synapse, should connect to. A PBI admin or Synapse admin is given the responsibility of pairing their PBI tenant or Synapse account with the right Azure Purview account. This is where the default Azure Purview account helps our customers. An Azure global administrator (or tenant admin) can designate an Azure Purview account as the default Azure Purview account at the tenant level. At any point in time, a tenant can have either zero or one default account. Once this is set, a PBI admin, Synapse admin, or any user in your organization has a clear understanding that this account is the "right" one, can discover it, and can connect all other services to it.
## Manage default account for tenant
Having multiple Purview accounts in a tenant now poses the challenge of which Pu
* Setting up the wrong default account can have security implications, so only the Azure global administrator at the tenant level (Tenant Admin) can set the default account flag to 'Yes'.
-* Changing the default account is a two-step process. First you need to change the flag as 'No' to the current default Purview account and then set the flag as 'Yes' to the new Purview account.
+* Changing the default account is a two-step process. First, set the flag to 'No' on the current default Azure Purview account, and then set the flag to 'Yes' on the new Azure Purview account.
-* Setting up default account is a control plane operation and hence Purview studio will not have any changes if an account is defined as default. However, in the studio you can see the account name is appended with "(default)" for the default Purview account.
+* Setting the default account is a control plane operation, so Azure Purview Studio does not change when an account is defined as the default. However, in the studio you can see that the account name is appended with "(default)" for the default Azure Purview account.
## Next steps

- [Create an Azure Purview account](create-catalog-portal.md)
-- [Purview Pricing](https://azure.microsoft.com/pricing/details/azure-purview/)
+- [Azure Purview Pricing](https://azure.microsoft.com/pricing/details/azure-purview/)
purview Concept Guidelines Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-guidelines-pricing.md
Title: Purview pricing guidelines
-description: This article provides a guideline towards understanding the various components in Purview pricing.
+ Title: Azure Purview pricing guidelines
+description: This article provides a guideline towards understanding the various components in Azure Purview pricing.
Azure Purview enables a unified governance experience by providing a single pane
## Why do you need to understand the components of the Azure Purview pricing?

-- While the pricing for Azure Purview is on a subscription-based **Pay-As-You-Go** model, there are various dimensions that you can consider while budgeting for Purview
-- This guideline is intended to help you plan the budgeting for Purview by providing a view on the various control factors that impact the budget
+- While the pricing for Azure Purview is on a subscription-based **Pay-As-You-Go** model, there are various dimensions that you can consider while budgeting for Azure Purview
+- This guideline is intended to help you plan the budgeting for Azure Purview by providing a view on the various control factors that impact the budget
## Factors impacting Azure Pricing
-There are **direct** and **indirect** costs that need to be considered while planning the Purview budgeting and cost management.
+There are **direct** and **indirect** costs that need to be considered while planning the Azure Purview budgeting and cost management.
### Direct costs
Direct costs impacting Azure Purview pricing are based on the following three di
#### Elastic data map

-- The **Data map** is the foundation of the Purview architecture and so needs to be up to date with asset information in the data estate at any given point
+- The **Data map** is the foundation of the Azure Purview architecture and so needs to be up to date with asset information in the data estate at any given point
- The data map is charged in terms of **Capacity Unit** (CU). The data map is provisioned at one CU if the catalog is storing up to 10 GB of metadata storage and serves up to 25 data map operations/sec
Direct costs impacting Azure Purview pricing are based on the following three di
- **Advanced Resource Set** is an optional feature that allows customers to get enriched, computed resource set information such as Total Size, Partition Count, etc., and enables the customization of resource set grouping via pattern rules. If the Advanced Resource Set feature is not enabled, your data catalog will still contain resource set assets, but without the aggregated properties. There will be no "Resource Set" meter billed to the customer in this case.

-- Use the basic resource set feature, before switching on the Advanced Resource Sets in Purview to verify if requirements are met
+- Use the basic resource set feature to verify whether requirements are met, before switching on Advanced Resource Sets in Azure Purview
- Consider turning on Advanced Resource Sets if:
- - your data lakes schema is constantly changing, and you are looking for additional value beyond the basic Resource Set feature to enable Purview to compute parameters such as #partitions, size of the data estate, etc., as a service
+ - your data lakes schema is constantly changing, and you are looking for additional value beyond the basic Resource Set feature to enable Azure Purview to compute parameters such as #partitions, size of the data estate, etc., as a service
- there is a need to customize how resource set assets get grouped
- It is important to note that billing for Advanced Resource Sets is based on the compute used by the offline tier to aggregate resource set information and is dependent on the size/number of resource sets in your catalog
Direct costs impacting Azure Purview pricing are based on the following three di
Indirect costs impacting Azure Purview pricing to be considered are:

- [Managed resources](https://azure.microsoft.com/pricing/details/azure-purview/)
- - When a Purview account is provisioned, a storage account and event hub queue are created within the subscription in order to cater to secured scanning, which may be charged separately
+ - When an Azure Purview account is provisioned, a storage account and event hub queue are created within the subscription in order to cater to secured scanning, which may be charged separately
- [Azure private endpoint](./catalog-private-link.md)
- - Azure private end points are used for Purview accounts where it is required for users on a virtual network (VNet) to securely access the catalog over a private link
+ - Azure private endpoints are used for Azure Purview accounts where it is required for users on a virtual network (VNet) to securely access the catalog over a private link
- The prerequisites for setting up private endpoints could result in extra costs
- [Self-hosted integration runtime related costs](./manage-integration-runtimes.md)
purview Deployment Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/deployment-best-practices.md
Last updated 11/23/2020
# Azure Purview deployment best practices
-This article identifies common tasks that can help you deploy Purview into production. These tasks can be completed in phases, over the course of a month or more. Even organizations who have already deployed Purview can use this guide to ensure they're getting the most out of their investment.
+This article identifies common tasks that can help you deploy Azure Purview into production. These tasks can be completed in phases, over the course of a month or more. Even organizations who have already deployed Azure Purview can use this guide to ensure they're getting the most out of their investment.
A well-planned deployment of a data governance platform (such as Azure Purview), can give the following benefits:
A well-planned deployment of a data governance platform (such as Azure Purview),
## Prerequisites

- Access to Microsoft Azure with a development or production subscription
-- Ability to create Azure resources including Purview
+- Ability to create Azure resources including Azure Purview
- Access to data sources such as Azure Data Lake Storage or Azure SQL in test, development, or production environments
  - For Data Lake Storage, the required role to scan is Reader Role
  - For SQL, the identity must be able to query tables for sampling of classifications
The general approach is to break down those overarching objectives into various
Once your organization agrees on the high-level objectives and goals, there will be many questions from multiple groups. It's crucial to gather these questions in order to craft a plan to address all of the concerns. Some example questions that you may run into during the initial phase:

1. What are the main organization data sources and data systems?
-2. For data sources that are not supported yet by Purview, what are my options?
-3. How many Purview instances do we need?
+2. For data sources that are not supported yet by Azure Purview, what are my options?
+3. How many Azure Purview instances do we need?
4. Who are the users?
5. Who can scan new data sources?
-6. Who can modify content inside of Purview?
-7. What process can I use to improve the data quality in Purview?
+6. Who can modify content inside of Azure Purview?
+7. What process can I use to improve the data quality in Azure Purview?
8. How to bootstrap the platform with existing critical assets, glossary terms, and contacts?
9. How to integrate with existing systems?
10. How to gather feedback and build a sustainable process?
While you might not have the answer to most of these questions right away, it ca
## Include the right stakeholders
-To ensure the success of implementing Purview for the entire enterprise, itΓÇÖs important to involve the right stakeholders. Only a few people are involved in the initial phase. However, as the scope expands, you will require additional personas to contribute to the project and provide feedback.
+To ensure the success of implementing Azure Purview for the entire enterprise, it's important to involve the right stakeholders. Only a few people are involved in the initial phase. However, as the scope expands, you will require additional personas to contribute to the project and provide feedback.
Some key stakeholders that you may want to include:

|Persona|Roles|
|||
-|**Chief Data Officer**|The CDO oversees a range of functions that may include data management, data quality, master data management, data science, business intelligence, and creating data strategy. They can be the sponsor of the Purview implementation project.|
+|**Chief Data Officer**|The CDO oversees a range of functions that may include data management, data quality, master data management, data science, business intelligence, and creating data strategy. They can be the sponsor of the Azure Purview implementation project.|
|**Domain/Business Owner**|A business person who influences usage of tools and has budget control|
|**Data Analyst**|Able to frame a business problem and analyze data to help leaders make business decisions|
|**Data Architect**|Design databases for mission-critical line-of-business apps along with designing and implementing data security|
Some key stakeholders that you may want to include:
|**Data Scientist**|Build analytical models and set up data products to be accessed by APIs|
|**DB Admin**|Own, track, and resolve database-related incidents and requests within service-level agreements (SLAs); May set up data pipelines|
|**DevOps**|Line-of-Business application development and implementation; may include writing scripts and orchestration capabilities|
-|**Data Security Specialist**|Assess overall network and data security, which involves data coming in and out of Purview|
+|**Data Security Specialist**|Assess overall network and data security, which involves data coming in and out of Azure Purview|
## Identify key scenarios
-Purview can be used to centrally manage data governance across an organizationΓÇÖs data estate spanning cloud and on-premises environments. To have a successful implementation, you must identify key scenarios that are critical to the business. These scenarios can cross business unit boundaries or impact multiple user personas either upstream or downstream.
+Azure Purview can be used to centrally manage data governance across an organization's data estate spanning cloud and on-premises environments. To have a successful implementation, you must identify key scenarios that are critical to the business. These scenarios can cross business unit boundaries or impact multiple user personas either upstream or downstream.
These scenarios can be written up in various ways, but you should include at least these five dimensions:

1. Persona – Who are the users?
2. Source system – What are the data sources such as Azure Data Lake Storage Gen2 or Azure SQL Database?
3. Impact Area – What is the category of this scenario?
-4. Detail scenarios ΓÇô How the users use Purview to solve problems?
+4. Detail scenarios – How do the users use Azure Purview to solve problems?
5. Expected outcome – What are the success criteria?

The scenarios must be specific, actionable, and executable with measurable results. Some example scenarios that you can use:
The scenarios must be specific, actionable, and executable with measurable resul
|Discover business-critical assets|I need to have a search engine that can search through all metadata in the catalog. I should be able to search using technical term, business term with either simple or complex search using wildcard.|Business Analyst, Data Scientist, Data Engineer, Data Admin|
|Track data to understand its origin and troubleshoot data issues|I need to have data lineage to track data in reports, predictions, or models back to its original source and understand the changes and where the data has resided through the data life cycle. This scenario needs to support prioritized data pipelines Azure Data Factory and Databricks.|Data Engineer, Data Scientist|
|Enrich metadata on critical data assets|I need to enrich the data set in the catalog with technical metadata that is generated automatically. Classification and labeling are some examples.|Data Engineer, Domain/Business Owner|
-|Govern data assets with friendly user experience|I need to have a Business glossary for business-specific metadata. The business users can use Purview for self-service scenarios to annotate their data and enable the data to be discovered easily via search.|Domain/Business Owner, Business Analyst, Data Scientist, Data Engineer|
+|Govern data assets with friendly user experience|I need to have a Business glossary for business-specific metadata. The business users can use Azure Purview for self-service scenarios to annotate their data and enable the data to be discovered easily via search.|Domain/Business Owner, Business Analyst, Data Scientist, Data Engineer|
## Deployment models
-If you have only one small group using Purview with basic consumption use cases, the approach could be as simple as having one Purview instance to service the entire group. However, you may also wonder whether your organization needs more than one Purview instance. And if using multiple Purview instances, how can employees promote the assets from one stage to another.
+If you have only one small group using Azure Purview with basic consumption use cases, the approach could be as simple as having one Azure Purview instance to service the entire group. However, you may also wonder whether your organization needs more than one Azure Purview instance. And if using multiple Azure Purview instances, how can employees promote the assets from one stage to another.
-### Determine the number of Purview instances
+### Determine the number of Azure Purview instances
-In most cases, there should only be one Purview account for the entire organization. This approach takes maximum advantage of the ΓÇ£network effectsΓÇ¥ where the value of the platform increases exponentially as a function of the data that resides inside the platform.
+In most cases, there should only be one Azure Purview account for the entire organization. This approach takes maximum advantage of the "network effects" where the value of the platform increases exponentially as a function of the data that resides inside the platform.
However, there are exceptions to this pattern:

1. **Testing new configurations** – Organizations may want to create multiple instances for testing out scan configurations or classifications in isolated environments. Although there is a "versioning" feature in some areas of the platform such as glossary, it would be easier to have a "disposable" instance to freely test.
2. **Separating Test, Pre-production and Production** – Organizations want to create different platforms for different kinds of data stored in different environments. It is not recommended as those kinds of data are different content types. You could use glossary term at the top hierarchy level or category to segregate content types.
-3. **Conglomerates and federated model** ΓÇô Conglomerates often have many business units (BUs) that operate separately, and, in some cases, they won't even share billing with each other. In those cases, the organization will end up creating a Purview instance for each BU. This model is not ideal, but may be necessary, especially because BUs are often not willing to share billing.
-4. **Compliance** ΓÇô There are some strict compliance regimes, which treat even metadata as sensitive and require it to be in a specific geography. If a company has multiple geographies, the only solution is to have multiple Purview instances, one for each geography.
+3. **Conglomerates and federated model** – Conglomerates often have many business units (BUs) that operate separately, and, in some cases, they won't even share billing with each other. In those cases, the organization will end up creating an Azure Purview instance for each BU. This model is not ideal, but may be necessary, especially because BUs are often not willing to share billing.
+4. **Compliance** – There are some strict compliance regimes, which treat even metadata as sensitive and require it to be in a specific geography. If a company has multiple geographies, the only solution is to have multiple Azure Purview instances, one for each geography.
### Create a process to move to production
-Some organizations may decide to keep things simple by working with a single production version of Purview. They probably donΓÇÖt need to go beyond discovery, search, and browse scenarios. If some assets have incorrect glossary terms, itΓÇÖs quite forgiving to let people self-correct. However, most organizations that want to deploy Purview across various business units will want to have some form of process and control.
+Some organizations may decide to keep things simple by working with a single production version of Azure Purview. They probably don't need to go beyond discovery, search, and browse scenarios. If some assets have incorrect glossary terms, it's quite forgiving to let people self-correct. However, most organizations that want to deploy Azure Purview across various business units will want to have some form of process and control.
-Another important aspect to include in your production process is how classifications and labels can be migrated. Purview has over 90 system classifiers. You can apply system or custom classifications on file, table, or column assets. Classifications are like subject tags and are used to mark and identify content of a specific type found within your data estate during scanning. Sensitivity labels are used to identify the categories of classification types within your organizational data, and then group the policies you wish to apply to each category. It makes use of the same sensitive information types as Microsoft 365, allowing you to stretch your existing security policies and protection across your entire content and data estate. It can scan and automatically classify documents. For example, if you have a file named multiple.docx and it has a National ID number in its content, Purview will add classification such as EU National Identification Number in the Asset Detail page.
+Another important aspect to include in your production process is how classifications and labels can be migrated. Azure Purview has over 90 system classifiers. You can apply system or custom classifications on file, table, or column assets. Classifications are like subject tags and are used to mark and identify content of a specific type found within your data estate during scanning. Sensitivity labels are used to identify the categories of classification types within your organizational data, and then group the policies you wish to apply to each category. It makes use of the same sensitive information types as Microsoft 365, allowing you to stretch your existing security policies and protection across your entire content and data estate. It can scan and automatically classify documents. For example, if you have a file named multiple.docx and it has a National ID number in its content, Azure Purview will add classification such as EU National Identification Number in the Asset Detail page.
-In Purview, there are several areas where the Catalog Administrators need to ensure consistency and maintenance best practices over its life cycle:
+In Azure Purview, there are several areas where the Catalog Administrators need to ensure consistency and maintenance best practices over its life cycle:
-* **Data assets** ΓÇô Data sources will need to be rescanned across environments. ItΓÇÖs not recommended to scan only in development and then regenerate them using APIs in Production. The main reason is that the Purview scanners do a lot more ΓÇ£wiringΓÇ¥ behind the scenes on the data assets, which could be complex to move them to a different Purview instance. ItΓÇÖs much easier to just add the same data source in production and scan the sources again. The general best practice is to have documentation of all scans, connections, and authentication mechanisms being used.
+* **Data assets** – Data sources will need to be rescanned across environments. It's not recommended to scan only in development and then regenerate them using APIs in Production. The main reason is that the Azure Purview scanners do a lot more "wiring" behind the scenes on the data assets, which could be complex to move them to a different Azure Purview instance. It's much easier to just add the same data source in production and scan the sources again. The general best practice is to have documentation of all scans, connections, and authentication mechanisms being used.
* **Scan rule sets** – This is your collection of rules assigned to a specific scan, such as file types and classifications to detect. If you don't have that many scan rule sets, it's possible to just re-create them manually again via Production. This will require an internal process and good documentation. However, if your rule sets change on a daily or weekly basis, this could be addressed by exploring the REST API route.
* **Custom classifications** – Your classifications may also not change on a regular basis. During the initial phase of deployment, it may take some time to understand various requirements to come up with custom classifications. However, once settled, this will require little change. So the recommendation here is to manually migrate any custom classifications over or use the REST API.
* **Glossary** – It's possible to export and import glossary terms via the UX. For automation scenarios, you can also use the REST API (see the sketch after this list).
-* **Resource set pattern policies** ΓÇô This functionality is very advanced for any typical organizations to apply. In some cases, your Azure Data Lake Storage has folder naming conventions and specific structure that may cause problems for Purview to generate the resource set. Your business unit may also want to change the resource set construction with additional customizations to fit the business needs. For this scenario, itΓÇÖs best to keep track of all changes via REST API, and document the changes through external versioning platform.
-* **Role assignment** ΓÇô This is where you control who has access to Purview and which permissions they have. Purview also has REST API to support export and import of users and roles but this is not Atlas API-compatible. The recommendation is to assign an Azure Security Group and manage the group membership instead.
+* **Resource set pattern policies** – This functionality is very advanced for a typical organization to apply. In some cases, your Azure Data Lake Storage has folder naming conventions and specific structure that may cause problems for Azure Purview to generate the resource set. Your business unit may also want to change the resource set construction with additional customizations to fit the business needs. For this scenario, it's best to keep track of all changes via REST API, and document the changes through an external versioning platform.
+* **Role assignment** – This is where you control who has access to Azure Purview and which permissions they have. Azure Purview also has a REST API to support export and import of users and roles, but this is not Atlas API-compatible. The recommendation is to assign an Azure Security Group and manage the group membership instead.
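
A rough sketch of the REST API route mentioned for the glossary is shown below: it exports all glossaries (with their terms) to a JSON file so they can be reviewed, versioned, or re-imported elsewhere. This is an illustration only; it assumes the `azure-identity` and `requests` packages, a placeholder account name, and that the caller has an appropriate data-plane role on the Azure Purview account.

```python
# Sketch: export Azure Purview glossary content via the Atlas glossary API.
# The account name and output file name are placeholders.
import json

import requests
from azure.identity import DefaultAzureCredential

ACCOUNT = "contoso-purview"  # placeholder Azure Purview account name
URL = f"https://{ACCOUNT}.purview.azure.com/catalog/api/atlas/v2/glossary"

# Acquire an Azure AD token for the Azure Purview data plane.
token = DefaultAzureCredential().get_token("https://purview.azure.net/.default").token

# List every glossary, including its term headers, and save the result for later review or re-import.
response = requests.get(URL, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()

with open("glossary-export.json", "w") as export_file:
    json.dump(response.json(), export_file, indent=2)
```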
-### Plan and implement different integration points with Purview
+### Plan and implement different integration points with Azure Purview
-ItΓÇÖs likely that a mature organization already has an existing data catalog. The key question is whether to continue to use the existing technology and sync with Purview or not. To handle syncing with existing products in an organization, Purview provides Atlas REST APIs. Atlas APIs provide a powerful and flexible mechanism handling both push and pull scenarios. Information can be published to Purview using Atlas APIs for bootstrapping or to push latest updates from another system into Purview. The information available in Purview can also be read using Atlas APIs and then synced back to existing products.
+It's likely that a mature organization already has an existing data catalog. The key question is whether to continue to use the existing technology and sync with Azure Purview or not. To handle syncing with existing products in an organization, Azure Purview provides Atlas REST APIs. Atlas APIs provide a powerful and flexible mechanism handling both push and pull scenarios. Information can be published to Azure Purview using Atlas APIs for bootstrapping or to push latest updates from another system into Azure Purview. The information available in Azure Purview can also be read using Atlas APIs and then synced back to existing products.
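
For example, a pull-style sync might read an asset from Azure Purview with the Atlas v2 entity API and hand it to an existing catalog. The following minimal sketch is not from the article itself; the account name and entity GUID are placeholders, and it assumes the `azure-identity` and `requests` packages.

```python
# Sketch: pull a single catalog asset from Azure Purview with the Atlas v2 entity API.
# The account name and GUID are placeholders.
import requests
from azure.identity import DefaultAzureCredential

ACCOUNT = "contoso-purview"                      # placeholder Azure Purview account name
GUID = "00000000-0000-0000-0000-000000000000"    # placeholder GUID of a catalog asset
URL = f"https://{ACCOUNT}.purview.azure.com/catalog/api/atlas/v2/entity/guid/{GUID}"

# Acquire an Azure AD token for the Azure Purview data plane.
token = DefaultAzureCredential().get_token("https://purview.azure.net/.default").token

response = requests.get(URL, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()

# The Atlas payload wraps the asset in an "entity" object; pass it to your own sync logic.
entity = response.json()["entity"]
print(entity["typeName"], entity["attributes"].get("qualifiedName"))
```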
-For other integration scenarios such as ticketing, custom user interface, and orchestration you can use Atlas APIs and Kafka endpoints. In general, there are four integration points with Purview:
+For other integration scenarios such as ticketing, custom user interface, and orchestration you can use Atlas APIs and Kafka endpoints. In general, there are four integration points with Azure Purview:
-* **Data Asset** ΓÇô This enables Purview to scan a storeΓÇÖs assets in order to enumerate what those assets are and collect any readily available metadata about them. So for SQL this could be a list of DBs, tables, stored procedures, views and config data about them kept in places like `sys.tables`. For something like Azure Data Factory (ADF) this could be enumerating all the pipelines and getting data on when they were created, last run, current state.
-* **Lineage** ΓÇô This enables Purview to collect information from an analysis/data mutation system on how data is moving around. For something like Spark this could be gathering information from the execution of a notebook to see what data the notebook ingested, how it transformed it and where it outputted it. For something like SQL, it could be analyzing query logs to reverse engineer what mutation operations were executed and what they did. We support both push and pull based lineage depending on the needs.
-* **Classification** ΓÇô This enables Purview to take physical samples from data sources and run them through our classification system. The classification system figures out the semantics of a piece of data. For example, we may know that a file is a Parquet file and has three columns and the third one is a string. But the classifiers we run on the samples will tell us that the string is a name, address, or phone number. Lighting up this integration point means that we have defined how Purview can open up objects like notebooks, pipelines, parquet files, tables, and containers.
-* **Embedded Experience** ΓÇô Products that have a ΓÇ£studioΓÇ¥ like experience (such as ADF, Synapse, SQL Studio, PBI, and Dynamics) usually want to enable users to discover data they want to interact with and also find places to output data. PurviewΓÇÖs catalog can help to accelerate these experiences by providing an embedding experience. This experience can occur at the API or the UX level at the partnerΓÇÖs option. By embedding a call to Purview, the organization can take advantage of PurviewΓÇÖs map of the data estate to find data assets, see lineage, check schemas, look at ratings, contacts etc.
+* **Data Asset** – This enables Azure Purview to scan a store's assets in order to enumerate what those assets are and collect any readily available metadata about them. So for SQL this could be a list of DBs, tables, stored procedures, views and config data about them kept in places like `sys.tables`. For something like Azure Data Factory (ADF) this could be enumerating all the pipelines and getting data on when they were created, last run, current state.
+* **Lineage** – This enables Azure Purview to collect information from an analysis/data mutation system on how data is moving around. For something like Spark this could be gathering information from the execution of a notebook to see what data the notebook ingested, how it transformed it and where it outputted it. For something like SQL, it could be analyzing query logs to reverse engineer what mutation operations were executed and what they did. We support both push and pull based lineage depending on the needs.
+* **Classification** – This enables Azure Purview to take physical samples from data sources and run them through our classification system. The classification system figures out the semantics of a piece of data. For example, we may know that a file is a Parquet file and has three columns and the third one is a string. But the classifiers we run on the samples will tell us that the string is a name, address, or phone number. Lighting up this integration point means that we have defined how Azure Purview can open up objects like notebooks, pipelines, parquet files, tables, and containers.
+* **Embedded Experience** – Products that have a "studio" like experience (such as ADF, Synapse, SQL Studio, PBI, and Dynamics) usually want to enable users to discover data they want to interact with and also find places to output data. Azure Purview's catalog can help to accelerate these experiences by providing an embedding experience. This experience can occur at the API or the UX level at the partner's option. By embedding a call to Azure Purview, the organization can take advantage of Azure Purview's map of the data estate to find data assets, see lineage, check schemas, look at ratings, contacts etc.
## Phase 1: Pilot
-In this phase, Purview must be created and configured for a very small set of users. Usually, it is just a group of 2-3 people working together to run through end-to-end scenarios. They are considered the advocates of Purview in their organization. The main goal of this phase is to ensure key functionalities can be met and the right stakeholders are aware of the project.
+In this phase, Azure Purview must be created and configured for a very small set of users. Usually, it is just a group of 2-3 people working together to run through end-to-end scenarios. They are considered the advocates of Azure Purview in their organization. The main goal of this phase is to ensure key functionalities can be met and the right stakeholders are aware of the project.
### Tasks to complete

|Task|Detail|Duration|
||||
|Gather & agree on requirements|Discussion with all stakeholders to gather a full set of requirements. Different personas must participate to agree on a subset of requirements to complete for each phase of the project.|1 Week|
-|Navigating Purview|Understand how to use Purview from the home page.|1 Day|
+|Navigating Azure Purview|Understand how to use Azure Purview from the home page.|1 Day|
|Configure ADF for lineage|Identify key pipelines and data assets. Gather all information required to connect to an internal ADF account.|1 Day|
|Scan a data source such as Azure Data Lake Storage|Add the data source and set up a scan. Ensure the scan successfully detects all assets.|2 Days|
-|Search and browse|Allow end users to access Purview and perform end-to-end search and browse scenarios.|1 Day|
+|Search and browse|Allow end users to access Azure Purview and perform end-to-end search and browse scenarios.|1 Day|
### Acceptance criteria
-* Purview account is created successfully in organization subscription under the organization tenant.
-* A small group of users with multiple roles can access Purview.
-* Purview is configured to scan at least one data source.
-* Users should be able to extract key values of Purview such as:
+* Azure Purview account is created successfully in organization subscription under the organization tenant.
+* A small group of users with multiple roles can access Azure Purview.
+* Azure Purview is configured to scan at least one data source.
+* Users should be able to extract key values of Azure Purview such as:
  * Search and browse
  * Lineage
* Users should be able to assign asset ownership in the asset page.
In this phase, Purview must be created and configured for a very small set of us
## Phase 2: Minimum viable product
-Once you have the agreed requirements and participated business units to onboard Purview, the next step is to work on a Minimum Viable Product (MVP) release. In this phase, you will expand the usage of Purview to more users who will have additional needs horizontally and vertically. There will be key scenarios that must be met horizontally for all users such as glossary terms, search, and browse. There will also be in-depth requirements vertically for each business unit or group to cover specific end-to-end scenarios such as lineage from Azure Data Lake Storage to Azure Synapse DW to Power BI.
+Once you have agreed on the requirements and the business units participating in onboarding to Azure Purview, the next step is to work on a Minimum Viable Product (MVP) release. In this phase, you will expand the usage of Azure Purview to more users who will have additional needs horizontally and vertically. There will be key scenarios that must be met horizontally for all users such as glossary terms, search, and browse. There will also be in-depth requirements vertically for each business unit or group to cover specific end-to-end scenarios such as lineage from Azure Data Lake Storage to Azure Synapse DW to Power BI.
### Tasks to complete

|Task|Detail|Duration|
||||
|[Scan Azure Synapse Analytics](register-scan-azure-synapse-analytics.md)|Start to onboard your database sources and scan them to populate key assets|2 Days|
-|[Create custom classifications and rules](create-a-custom-classification-and-classification-rule.md)|Once your assets are scanned, your users may realize that there are additional use cases for more classification beside the default classifications from Purview.|2-4 Weeks|
+|[Create custom classifications and rules](create-a-custom-classification-and-classification-rule.md)|Once your assets are scanned, your users may realize that there are additional use cases for more classifications besides the default classifications from Azure Purview.|2-4 Weeks|
|[Scan Power BI](register-scan-power-bi-tenant.md)|If your organization uses Power BI, you can scan Power BI in order to gather all data assets being used by Data Scientists or Data Analysts which have requirements to include lineage from the storage layer.|1-2 Weeks|
-|[Import glossary terms](how-to-create-import-export-glossary.md)|In most cases, your organization may already develop a collection of glossary terms and term assignment to assets. This will require an import process into Purview via .csv file.|1 Week|
+|[Import glossary terms](how-to-create-import-export-glossary.md)|In most cases, your organization may have already developed a collection of glossary terms and term assignments to assets. This will require an import process into Azure Purview via a .csv file.|1 Week|
|Add contacts to assets|For top assets, you may want to establish a process to either allow other personas to assign contacts or import via REST APIs.|1 Week|
|Add sensitive labels and scan|This might be optional for some organizations, depending on the usage of Labeling from Microsoft 365.|1-2 Weeks|
-|Get classification and sensitive insights|For reporting and insight in Purview, you can access this functionality to get various reports and provide presentation to management.|1 Day|
-|Onboard additional users using Purview managed users|This step will require the Purview Admin to work with the Azure Active Directory Admin to establish new Security Groups to grant access to Purview.|1 Week|
+|Get classification and sensitive insights|For reporting and insight in Azure Purview, you can access this functionality to get various reports and provide presentations to management.|1 Day|
+|Onboard additional users using Azure Purview managed users|This step will require the Azure Purview Admin to work with the Azure Active Directory Admin to establish new Security Groups to grant access to Azure Purview.|1 Week|
### Acceptance criteria
-* Successfully onboard a larger group of users to Purview (50+)
+* Successfully onboard a larger group of users to Azure Purview (50+)
* Scan business critical data sources
* Import and assign all critical glossary terms
* Successfully test important labeling on key assets
Once you have the agreed requirements and participated business units to onboard
## Phase 3: Pre-production
-Once the MVP phase has passed, itΓÇÖs time to plan for pre-production milestone. Your organization may decide to have a separate instance of Purview for pre-production and production, or keep the same instance but restrict access. Also in this phase, you may want to include scanning on on-premises data sources such as SQL Server. If there is any gap in data sources not supported by Purview, it is time to explore the Atlas API to understand additional options.
+Once the MVP phase has passed, it's time to plan for the pre-production milestone. Your organization may decide to have a separate instance of Azure Purview for pre-production and production, or keep the same instance but restrict access. Also in this phase, you may want to include scanning on on-premises data sources such as SQL Server. If there is any gap in data sources not supported by Azure Purview, it is time to explore the Atlas API to understand additional options.
### Tasks to complete
Once the MVP phase has passed, it's time to plan for pre-production milestone.
||||
|Refine your scan using scan rule set|Your organization will have a lot of data sources for pre-production. It's important to pre-define key criteria for scanning so that classifications and file extension can be applied consistently across the board.|1-2 Days|
|Assess region availability for scan|Depending on the region of the data sources and organizational requirements on compliance and security, you may want to consider what regions must be available for scanning.|1 Day|
-|Understand firewall concept when scanning|This step requires some exploration of how the organization configures its firewall and how Purview can authenticate itself to access the data sources for scanning.|1 Day|
+|Understand firewall concept when scanning|This step requires some exploration of how the organization configures its firewall and how Azure Purview can authenticate itself to access the data sources for scanning.|1 Day|
|Understand Private Link concept when scanning|If your organization uses Private Link, you must lay out the foundation of network security to include Private Link as a part of the requirements.|1 Day|
|[Scan on-premises SQL Server](register-scan-on-premises-sql-server.md)|This is optional if you have on-premises SQL Server. The scan will require setting up [Self-hosted Integration Runtime](manage-integration-runtimes.md) and adding SQL Server as a data source.|1-2 Weeks|
-|Use Purview REST API for integration scenarios|If you have requirements to integrate Purview with other 3rd party technologies such as orchestration or ticketing system, you may want to explore REST API area.|1-4 Weeks|
-|Understand Purview pricing|This step will provide the organization important financial information to make decision.|1-5 Days|
+|Use Azure Purview REST API for integration scenarios|If you have requirements to integrate Azure Purview with other 3rd party technologies such as orchestration or ticketing systems, you may want to explore the REST API area.|1-4 Weeks|
+|Understand Azure Purview pricing|This step will provide the organization with important financial information to make decisions.|1-5 Days|
### Acceptance criteria
purview How To Access Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-access-policies-storage.md
Steps to create a new policy in Azure Purview are as follows.
1. Log in to Azure Purview portal.
-1. Navigate to Azure Purview policy app using the left side panel.
+1. Navigate to the Azure Purview Policy management app using the left side panel.
![Image shows how a data owner can access the Policy functionality in Azure Purview when it wants to update a policy.](./media/how-to-access-policies-storage/policy-onboard-guide-2.png)
The steps to publish a policy are as follows
1. Log in to Azure Purview portal.
-1. Navigate to the Azure Purview Policy app using the left side panel.
+1. Navigate to the Azure Purview Policy management app using the left side panel.
![Image shows how a data owner can access the Policy functionality in Azure Purview when it wants to publish a policy.](./media/how-to-access-policies-storage/policy-onboard-guide-2.png)
role-based-access-control Custom Roles Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/custom-roles-portal.md
# Create or update Azure custom roles using the Azure portal
-If the [Azure built-in roles](built-in-roles.md) don't meet the specific needs of your organization, you can create your own Azure custom roles. Just like built-in roles, you can assign custom roles to users, groups, and service principals at subscription and resource group scopes. Custom roles are stored in an Azure Active Directory (Azure AD) directory and can be shared across subscriptions. Each directory can have up to 5000 custom roles. Custom roles can be created using the Azure portal, Azure PowerShell, Azure CLI, or the REST API. This article describes how to create custom roles using the Azure portal.
+If the [Azure built-in roles](built-in-roles.md) don't meet the specific needs of your organization, you can create your own Azure custom roles. Just like built-in roles, you can assign custom roles to users, groups, and service principals at management group (in preview only), subscription and resource group scopes. Custom roles are stored in an Azure Active Directory (Azure AD) directory and can be shared across subscriptions. Each directory can have up to 5000 custom roles. Custom roles can be created using the Azure portal, Azure PowerShell, Azure CLI, or the REST API. This article describes how to create custom roles using the Azure portal.
## Prerequisites
sentinel Notebook Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/notebook-get-started.md
This warning doesn't impact notebook functionality.
### Authenticate to your Microsoft Sentinel workspace from your notebook
-Authenticate to your Microsoft Sentinel workspace using [device authorization](../active-directory/develop/v2-oauth2-device-code.md) with your Azure credentials.
+In Azure ML notebooks, the authentication defaults to using the credentials you used to authenticate to the Azure ML workspace.
-Device authorization adds another factor to the authentication by generating a one-time device code that you supply as part of the authentication process.
+**Authenticate by using managed identity**
-**To authenticate using device authorization**:
-
-1. Run the following code cell to generate and display a device code:
+Run the following code to authenticate to your Sentinel workspace.
```python
- # Get the Microsoft Sentinel workspace details from msticpyconfig
- # Loading WorkspaceConfig with no parameters uses the details
- # of your Default workspace
- # If you want to connect to a specific workspace use this syntax:
- # ws_config = WorkspaceConfig(workspace="WorkspaceName")
- # ('WorkspaceName' should be one of the workspaces defined in msticpyconfig.yaml)
+ # Get the default Microsoft Sentinel workspace details from msticpyconfig.yaml
+ ws_config = WorkspaceConfig()
- # Connect to Microsoft Sentinel with your QueryProvider and config details
+ # Connect to Microsoft Sentinel with our QueryProvider and config details
 qry_prov.connect(ws_config)
 ```
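
If your `msticpyconfig.yaml` defines more than one workspace, a small variation of the cell above (based on the guidance previously in this article) connects to a named workspace instead of the default; `WorkspaceName` is a placeholder for one of the workspace keys in that file.

```python
# Optional: connect to a specific (non-default) Microsoft Sentinel workspace.
# 'WorkspaceName' is a placeholder for a workspace defined in msticpyconfig.yaml.
ws_config = WorkspaceConfig(workspace="WorkspaceName")
qry_prov.connect(ws_config)
```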
- For example:
-
- :::image type="content" source="media/notebook-get-started/device-authorization.png" alt-text="Screenshot showing a device authorization code.":::
-
-1. Select and copy the indicated code to your clipboard. Then, go to [https://microsoft.com/devicelogin](https://microsoft.com/devicelogin) and paste the code in where prompted.
-
-1. When you see the confirmation message that you've signed in, close the browser tab return to your notebook in Microsoft Sentinel.
-
- Output similar to the following is displayed in your notebook:
+Output similar to the following is displayed in your notebook:
- :::image type="content" source="media/notebook-get-started/authorization-complete.png" alt-text="Screenshot showing that the device authorization process is complete.":::
+ :::image type="content" source="media/notebook-get-started/authorization-connected-workspace.png" alt-text="Screenshot that shows authentication to Azure that ends with a connected message.":::
**Cache your sign-in token using Azure CLI**
sentinel Sap Deploy Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sap-deploy-solution.md
This procedure describes how to ensure that your SAP system has the correct prer
1. Download and install one of the following SAP change requests from the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/CR):
- - **SAP version 750 or later**: Install the SAP change request *NPLK900170*
- - **SAP version 740**: Install the SAP change request *NPLK900169*
+ - **SAP version 750 or later**: Install the SAP change request *NPLK900180*
+ - **SAP version 740**: Install the SAP change request *NPLK900179*
When you're performing this step, be sure to use binary mode to transfer the files to the SAP system, and use the **STMS_IMPORT** SAP transaction code.
sentinel Sap Solution Detailed Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sap-solution-detailed-requirements.md
For example, in Ubuntu, you can mount a disk to the `/var/lib/docker` directory
The following SAP log change requests are required for the SAP solution, depending on your SAP Basis version:

-- **SAP Basis versions 7.50 and higher**, install NPLK900170
-- **For lower versions**, install NPLK900169
+- **SAP Basis versions 7.50 and higher**, install NPLK900180
+- **For lower versions**, install NPLK900179
- **To create an SAP role with the required authorizations**, for any supported SAP Basis version, install NPLK900163. For more information, see [Configure your SAP system](sap-deploy-solution.md#configure-your-sap-system) and [Required ABAP authorizations](#required-abap-authorizations).

> [!NOTE]
service-bus-messaging Message Transfers Locks Settlement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/message-transfers-locks-settlement.md
The central capability of a message broker such as Service Bus is to accept messages into a queue or topic and hold them available for later retrieval. *Send* is the term that is commonly used for the transfer of a message into the message broker. *Receive* is the term commonly used for the transfer of a message to a retrieving client.
-When a client sends a message, it usually wants to know whether the message has been properly transferred to and accepted by the broker or whether some sort of error occurred. This positive or negative acknowledgment settles the client and the broker understanding about the transfer state of the message. So, it' referred to as *settlement*.
+When a client sends a message, it usually wants to know whether the message has been properly transferred to and accepted by the broker or whether some sort of error occurred. This positive or negative acknowledgment settles the understanding of both the client and broker about the transfer state of the message. Therefore, it's referred to as a *settlement*.
Likewise, when the broker transfers a message to a client, the broker and client want to establish an understanding of whether the message has been successfully processed and can therefore be removed, or whether the message delivery or processing failed, and thus the message might have to be delivered again.
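
To make the settlement handshake on both legs concrete, here is a minimal sketch using the Python `azure-servicebus` package (an assumption; the article itself is language-agnostic). The connection string and queue name are placeholders, and the print call stands in for real processing.

```python
# Sketch of send-side and receive-side settlement with the azure-servicebus SDK.
# The connection string and queue name are placeholders.
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<namespace-connection-string>"
QUEUE = "<queue-name>"

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    # Send: the call returns only after the broker has accepted (settled) the message.
    with client.get_queue_sender(QUEUE) as sender:
        sender.send_messages(ServiceBusMessage("order-created"))

    # Receive: the client settles each delivery explicitly.
    with client.get_queue_receiver(QUEUE, max_wait_time=5) as receiver:
        for msg in receiver:
            try:
                print(str(msg))                 # stand-in for real processing
                receiver.complete_message(msg)  # positive settlement: message is removed
            except Exception:
                receiver.abandon_message(msg)   # negative settlement: message is redelivered
```

The send call completing without an exception is the positive settlement on the send leg; `complete_message` and `abandon_message` are the explicit settlement calls on the receive leg.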
The default value for the lock duration is **30 seconds**. You can specify a dif
## Next steps

- A special case of settlement is deferral. See the [Message deferral](message-deferral.md) for details.
- To learn about dead-lettering, see [Dead-letter queues](service-bus-dead-letter-queues.md).
-- To learn more about Service Bus messaging in general, see [Service Bus queues, topics, and subscriptions](service-bus-queues-topics-subscriptions.md)
+- To learn more about Service Bus messaging in general, see [Service Bus queues, topics, and subscriptions](service-bus-queues-topics-subscriptions.md)
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/secure-file-transfer-protocol-support.md
SFTP support is available in the following regions:
- Germany West Central
- East Asia
- France Central
+- West Europe
## Pricing and billing
storage Versioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/versioning-overview.md
# Blob versioning
-You can enable Blob storage versioning to automatically maintain previous versions of an object. When blob versioning is enabled, you can restore an earlier version of a blob to recover your data if it is erroneously modified or deleted.
+You can enable Blob storage versioning to automatically maintain previous versions of an object. When blob versioning is enabled, you can access earlier versions of a blob to recover your data if it is modified or deleted.
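
As a rough illustration of recovering data from an earlier version, here is a minimal sketch using the Python `azure-storage-blob` package. It assumes versioning is already enabled on the account; the account URL, container, and blob names are placeholders, and passing `version_id` to `download_blob` requires a reasonably recent SDK version.

```python
# Sketch: list a blob's versions and download an earlier one to recover its data.
# Account URL, container, and blob name are placeholders; versioning must be enabled.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

ACCOUNT_URL = "https://<account>.blob.core.windows.net"
CONTAINER, BLOB = "<container>", "<blob-name>"

service = BlobServiceClient(ACCOUNT_URL, credential=DefaultAzureCredential())
container = service.get_container_client(CONTAINER)

# Enumerate the versions of a single blob; the current version is flagged.
versions = [
    item for item in container.list_blobs(name_starts_with=BLOB, include=["versions"])
    if item.name == BLOB
]
for item in versions:
    print(item.version_id, "current" if item.is_current_version else "previous")

# Download the oldest version to recover its content (version IDs are timestamps).
oldest = min(versions, key=lambda item: item.version_id)
recovered = container.get_blob_client(BLOB).download_blob(version_id=oldest.version_id).readall()
```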
## Recommended data protection configuration
To learn how to enable or disable blob versioning, see [Enable and manage blob v
Disabling blob versioning does not delete existing blobs, versions, or snapshots. When you turn off blob versioning, any existing versions remain accessible in your storage account. No new versions are subsequently created.
-If a blob was created or modified after versioning was disabled on the storage account, then overwriting the blob creates a new version. The updated blob is no longer the current version and does not have a version ID. All subsequent updates to the blob will overwrite its data without saving the previous state.
+After versioning is disabled, modifying the current version creates a blob that is not a version. All subsequent updates to the blob will overwrite its data without saving the previous state. All existing versions persist as previous versions.
You can read or delete versions using the version ID after versioning is disabled. You can also list a blob's versions after versioning is disabled. The following diagram shows how modifying a blob after versioning is disabled creates a blob that is not versioned. Any existing versions associated with the blob persist.

## Blob versioning and soft delete
The following table shows the permission required on a SAS to delete a blob vers
Enabling blob versioning can result in additional data storage charges to your account. When designing your application, it is important to be aware of how these charges might accrue so that you can minimize costs.
-Blob versions, like blob snapshots, are billed at the same rate as active data. How versions are billed depends on whether you have explicitly set the tier for the base blob or for any of its versions (or snapshots). For more information about blob tiers, see [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md).
+Blob versions, like blob snapshots, are billed at the same rate as active data. How versions are billed depends on whether you have explicitly set the tier for the current or previous versions of a blob (or snapshots). For more information about blob tiers, see [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md).
If you have not changed a blob or version's tier, then you are billed for unique blocks of data across that blob, its versions, and any snapshots it may have. For more information, see [Billing when the blob tier has not been explicitly set](#billing-when-the-blob-tier-has-not-been-explicitly-set).
For more information about billing details for blob snapshots, see [Blob snapsho
### Billing when the blob tier has not been explicitly set
-If you have not explicitly set the blob tier for a base blob or any of its versions, then you are charged for unique blocks or pages across the blob, its versions, and any snapshots it may have. Data that is shared across a blob and its versions is charged only once. When a blob is updated, then data in a base blob diverges from the data stored in its versions, and the unique data is charged per block or page.
+If you have not explicitly set the blob tier for any versions of a blob, then you are charged for unique blocks or pages across all versions, and any snapshots it may have. Data that is shared across blob versions is charged only once. When a blob is updated, then data in the new current version diverges from the data stored in previous versions, and the unique data is charged per block or page.
When you replace a block within a block blob, that block is subsequently charged as a unique block. This is true even if the block has the same block ID and the same data as it has in the previous version. After the block is committed again, it diverges from its counterpart in the previous version, and you will be charged for its data. The same holds true for a page in a page blob that's updated with identical data.
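
To make the block-level arithmetic concrete, here is a toy illustration in plain Python (not an Azure API): blocks shared between the current and previous versions are counted once, while divergent blocks are counted separately. It mirrors scenario 3 below.

```python
# Toy illustration of unique-block billing across versions (not an Azure API).
current_version = {"block-1", "block-2", "block-4"}    # block 3 was replaced by block 4
previous_version = {"block-1", "block-2", "block-3"}

unique_blocks = current_version | previous_version     # shared blocks are counted once
print(len(unique_blocks))                              # 4 -> four unique blocks are billed
```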
In scenario 2, one block (block 3 in the diagram) in the blob has been updated.
#### Scenario 3
-In scenario 3, the blob has been updated, but the version has not. Block 3 was replaced with block 4 in the base blob, but the previous version still reflects block 3. As a result, the account is charged for four blocks.
+In scenario 3, the blob has been updated, but the version has not. Block 3 was replaced with block 4 in the current blob, but the previous version still reflects block 3. As a result, the account is charged for four blocks.
![Diagram 3 showing billing for unique blocks in base blob and previous version.](./media/versioning-overview/versions-billing-scenario-3.png)

#### Scenario 4
-In scenario 4, the base blob has been completely updated and contains none of its original blocks. As a result, the account is charged for all eight unique blocks &mdash; four in the base blob, and four in the previous version. This scenario can occur if you are writing to a blob with the [Put Blob](/rest/api/storageservices/put-blob) operation, because it replaces the entire contents of the base blob.
+In scenario 4, the current version has been completely updated and contains none of its original blocks. As a result, the account is charged for all eight unique blocks &mdash; four in the current version, and four combined in the two previous versions. This scenario can occur if you are writing to a blob with the [Put Blob](/rest/api/storageservices/put-blob) operation, because it replaces the entire contents of the blob.
![Diagram 4 showing billing for unique blocks in base blob and previous version.](./media/versioning-overview/versions-billing-scenario-4.png)
If you have explicitly set the blob tier for a blob or version (or snapshot), th
The following table describes the billing behavior for a blob or version when it is moved to a new tier.
-| When blob tier is set explicitly on… | Then you are billed for... |
+| When blob tier is set… | Then you are billed for... |
|-|-|
-| A base blob with a previous version | The base blob in the new tier and the oldest version in the original tier, plus any unique blocks in other versions.<sup>1</sup> |
-| A base blob with a previous version and a snapshot | The base blob in the new tier, the oldest version in the original tier, and the oldest snapshot in the original tier, plus any unique blocks in other versions or snapshots<sup>1</sup>. |
-| A previous version | The version in the new tier and the base blob in the original tier, plus any unique blocks in other versions.<sup>1</sup> |
+| Explicitly on a version, whether current or previous | The full content length of that version. Versions that don't have an explicitly set tier are billed only for unique blocks.<sup>1</sup> |
+| To archive | The full content length of all versions and snapshots.<sup>1</sup> |
<sup>1</sup>If there are other previous versions or snapshots that have not been moved from their original tier, those versions or snapshots are charged based on the number of unique blocks they contain, as described in [Billing when the blob tier has not been explicitly set](#billing-when-the-blob-tier-has-not-been-explicitly-set).
Operations that explicitly set the tier of a blob, version, or snapshot include:
#### Deleting a blob when soft delete is enabled
-When blob soft delete is enabled, if you delete or overwrite a base blob that has had its tier explicitly set, then any previous versions of the soft-deleted blob are billed at full content length. For more information about how blob versioning and soft delete work together, see [Blob versioning and soft delete](#blob-versioning-and-soft-delete).
-
-The following table describes the billing behavior for a blob that is soft-deleted, depending on whether versioning is enabled or disabled. When versioning is enabled, a version is created when a blob is soft-deleted. When versioning is disabled, soft-deleting a blob creates a soft-delete snapshot.
-
-| When you overwrite a base blob with its tier explicitly set… | Then you are billed for... |
-|-|-|
-| If blob soft delete and versioning are both enabled | All existing versions at full content length regardless of tier. |
-| If blob soft delete is enabled but versioning is disabled | All existing soft-delete snapshots at full content length regardless of tier. |
+When blob soft delete is enabled, all soft-deleted entities are billed at full content length. If you delete or overwrite a current version that has had its tier explicitly set, then any previous versions of the soft-deleted blob are billed at full content length. For more information about how blob versioning and soft delete work together, see [Blob versioning and soft delete](#blob-versioning-and-soft-delete).
## Feature support
stream-analytics Input Validation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/input-validation.md
Last updated 12/10/2021
# Input validation in Azure Stream Analytics queries
-**Input validation** is a technique to use to protect the main query logic from malformed or unexpected events. It adds a first stage to a query, in which we make sure the schema we submit to the core business logic matches its expectations. It also adds a second stage, in which we triage exceptions. In this stage, we can reject invalid records into a secondary output. This article illustrates how to implement this technic.
+**Input validation** is a technique to use to protect the main query logic from malformed or unexpected events. The query is upgraded to explicitly process and check records so they can't break the main logic.
-To see an example of a query set up with input validation, see the section: [Example of query with input validation](#example-of-query-with-input-validation)
+To implement input validation, we add two initial steps to a query. We first make sure the schema submitted to the core business logic matches its expectations. We then triage exceptions, and optionally route invalid records into a secondary output.
+
+A query with input validation will be structured as follows:
+
+```SQL
+WITH preProcessingStage AS (
+ SELECT
+ -- Rename incoming fields, used for audit and debugging
+ field1 AS in_field1,
+ field2 AS in_field2,
+ ...
+
+ -- Try casting fields in their expected type
+ TRY_CAST(field1 AS bigint) as field1,
+ TRY_CAST(field2 AS array) as field2,
+ ...
+
+ FROM myInput TIMESTAMP BY myTimestamp
+),
+
+triagedOK AS (
+ SELECT -- Only fields in their new expected type
+ field1,
+ field2,
+ ...
+ FROM preProcessingStage
+ WHERE ( ... ) -- Clauses make sure that the core business logic expectations are satisfied
+),
+
+triagedOut AS (
+ SELECT -- All fields to ease diagnostic
+ *
+ FROM preProcessingStage
+ WHERE NOT (...) -- Same clauses as triagedOK, opposed with NOT
+)
+
+-- Core business logic
+SELECT
+ ...
+INTO myOutput
+FROM triagedOK
+...
+
+-- Audit output. For human review, correction, and manual re-insertion downstream
+SELECT
+ *
+INTO BlobOutput -- To a storage adapter that doesn't require strong typing, here blob/adls
+FROM triagedOut
+```
+
+To see a comprehensive example of a query set up with input validation, see the section: [Example of query with input validation](#example-of-query-with-input-validation).
+
+This article illustrates how to implement this technique.
## Context
But the capabilities offered by dynamic schema handling come with a potential do
With input validation, we add preliminary steps to our query to handle such malformed events. We'll primarily use [WITH](/stream-analytics-query/with-azure-stream-analytics) and [TRY_CAST](/stream-analytics-query/try-cast-azure-stream-analytics) to implement it.
-## Problem statement
+## Scenario: input validation for unreliable event producers
We'll be building a new ASA job that will ingest data from a single event hub. As is most often the case, we aren't responsible for the data producers. Here the producers are IoT devices sold by multiple hardware vendors.
synapse-analytics Concepts Database Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/database-designer/concepts-database-templates.md
A foreign key is a column or a combination of columns whose values match a prima
## Composite key
-A composite key is one that is composed of two or more columns that are together required to uniquely identify a table. For example, in an Order table, both OrderNumber and ProductId may be required to uniquely identify a record.
+A composite key is one that is composed of two or more columns that are together required to uniquely identify a record in a table. For example, in an Order table, both OrderNumber and ProductId may be required to uniquely identify a record.
## Relationships
synapse-analytics Overview Database Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/database-designer/overview-database-templates.md
A typical database template addresses the core requirements of a specific indust
## Available database templates
-Currently there are six database templates available within Azure Synapse Studio that customers can use to start creating their lake database.
+Currently, you can choose from 11 database templates in Azure Synapse Studio to start creating your lake database:
+ - **Agriculture** - for companies engaged in growing crops, raising livestock, and dairy production.
+ - **Banking** - for companies that analyze banking data.
 - **Consumer Goods** - for manufacturers or producers of goods bought and used by consumers.
+ - **Energy & Commodity Trading** - for traders of energy, commodities, or carbon credits.
+ - **Freight & Logistics** - for companies that provide freight and logistics services.
+ - **Fund Management** - for companies that manage investment funds for investors.
+ - **Life Insurance & Annuities** - for companies that provide life insurance, sell annuities, or both.
+ - **Oil & Gas** - for companies that are involved in various phases of the Oil & Gas value chain.
+ - **Property & Casualty Insurance** - for companies that provide insurance against risks to property and various forms of liability coverage.
 - **Retail** - for sellers of consumer goods or services to customers through multiple channels.
+ - **Utilities** - for gas, electric, and water utilities; power generators; and water desalinators.
As emission and carbon management is an important topic in all industries, we've included those components in all the available database templates. These components make it easy for companies that need to track and report their direct and indirect greenhouse gas emissions.
synapse-analytics Get Started Analyze Spark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/get-started-analyze-spark.md
A serverless Spark pool is a way of indicating how a user wants to work with Spa
## Analyze NYC Taxi data with a Spark pool > [!NOTE]
-> Make sure you have [placed the sample data into the primary storage account](get-started-create-workspace.md#place-sample-data-into-the-primary-storage-account)
+> Make sure you have [placed the sample data in the primary storage account](get-started-create-workspace.md#place-sample-data-into-the-primary-storage-account).
+
+1. In Synapse Studio, go to the **Develop** hub.
+1. Create a new notebook.
+1. Create a new code cell and paste the following code in that cell:
-1. In Synapse Studio, go to the **Develop** hub
-2. Create a new Notebook
-3. Create a new code cell and paste the following code into that cell.
    ```py
    %%pyspark
    df = spark.read.load('abfss://users@contosolake.dfs.core.windows.net/NYCTripSmall.parquet',
        format='parquet')
    display(df.limit(10))
    ```
+
+1. Modify the load URI, so it references the sample file in your storage account according to the [abfss URI scheme](../storage/blobs/data-lake-storage-introduction-abfs-uri.md).
1. In the notebook, in the **Attach to** menu, choose the **Spark1** serverless Spark pool that we created earlier.
1. Select **Run** on the cell. Synapse starts a new Spark session to run this cell if needed; creating a new session initially takes about two seconds.
1. If you just want to see the schema of the dataframe, run a cell with the following code:
synapse-analytics Security White Paper Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/guidance/security-white-paper-access-control.md
+
+ Title: "Azure Synapse Analytics security white paper: Access control"
+description: Use different approaches or a combination of techniques to control access to data with Azure Synapse Analytics.
+++++ Last updated : 01/14/2022++
+# Azure Synapse Analytics security white paper: Access control
++
+Depending on how the data has been modeled and stored, data governance and access control might require that developers and security administrators use different approaches, or a combination of techniques, to implement a robust security foundation.
+
+Azure Synapse supports a wide range of capabilities to control who can access what data. These capabilities are built upon a set of advanced access control features, including:
+
+- [Object-level security](#object-level-security)
+- [Row-level security](#row-level-security)
+- [Column-level security](#column-level-security)
+- [Dynamic data masking](#dynamic-data-masking)
+- [Synapse role-based access control](#synapse-role-based-access-control)
+
+## Object-level security
+
+Every object in a dedicated SQL pool has associated permissions that can be granted to a principal. In the context of users and service accounts, that's how individual tables, views, stored procedures, and functions are secured. Object permissions, like SELECT, can be granted to user accounts (SQL logins, Azure Active Directory users or groups) and [database roles](/sql/relational-databases/security/authentication-access/database-level-roles?view=sql-server-ver15&preserve-view=true), which provides flexibility for database administrators. Further, permissions granted on tables and views can be combined with other access control mechanisms (described below), such as column-level security, row-level security, and dynamic data masking.
+
+In Azure Synapse, all permissions are granted to database-level users and roles. Additionally, any user granted the built-in [Synapse Administrator RBAC role](../security/synapse-workspace-synapse-rbac-roles.md) at the workspace level is automatically granted full access to all dedicated SQL pools.
+
+In addition to securing SQL tables in Azure Synapse, tables in dedicated SQL pool (formerly SQL DW), serverless SQL pool, and Spark can be secured too. By default, users assigned to the **Storage Blob Data Contributor** role of data lakes connected to the workspace have READ, WRITE, and EXECUTE permissions on all Spark-created tables *when they interactively execute code in a notebook*. This behavior is called *Azure Active Directory (Azure AD) pass-through*, and it applies to all data lakes connected to the workspace. However, if the same user executes the same notebook *through a pipeline*, the workspace Managed Service Identity (MSI) is used for authentication. So, for the pipeline to execute successfully, the workspace MSI must also belong to the **Storage Blob Data Contributor** role of the data lake that's accessed.
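
As a hedged illustration of object-level permissions in a dedicated SQL pool, the following sketch uses hypothetical role, object, and user names that aren't part of the original article:

```sql
-- Create a database role and grant it object-level permissions
CREATE ROLE SalesAnalysts;
GRANT SELECT ON OBJECT::dbo.FactSales TO SalesAnalysts;
GRANT EXECUTE ON OBJECT::dbo.usp_GetSalesSummary TO SalesAnalysts;

-- Add an existing database user (for example, one mapped to an Azure AD group) to the role
EXEC sp_addrolemember 'SalesAnalysts', 'sales-analysts@contoso.com';
```
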
+
+## Row-level security
+
+[Row-level security](/sql/relational-databases/security/row-level-security?view=azure-sqldw-latest&preserve-view=true) allows security administrators to establish and control fine-grained access to specific table rows based on the profile of a user (or a process) running a query. Profile or user characteristics may refer to group membership or execution context. Row-level security helps prevent unauthorized access when users query data from the same tables but must see different subsets of data.
+
+> [!NOTE]
+> Row-level security is supported in Azure Synapse and dedicated SQL pool (formerly SQL DW), but it's not supported for Apache Spark pool and serverless SQL pool.
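
The following is a minimal row-level security sketch for a dedicated SQL pool; the table, schema, and predicate names are assumptions. The filter predicate is evaluated for every query against dbo.Sales, so users only see their own rows:

```sql
-- Predicate function: a user sees only rows where SalesRep matches their user name
CREATE SCHEMA Security;
GO
CREATE FUNCTION Security.fn_securitypredicate(@SalesRep AS sysname)
    RETURNS TABLE
    WITH SCHEMABINDING
AS
    RETURN SELECT 1 AS fn_securitypredicate_result
           WHERE @SalesRep = USER_NAME();
GO
-- Bind the predicate to the table as a filter and turn the policy on
CREATE SECURITY POLICY SalesFilter
    ADD FILTER PREDICATE Security.fn_securitypredicate(SalesRep) ON dbo.Sales
    WITH (STATE = ON);
```
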
+
+## Column-level security
+
+[Column-level security](../sql-data-warehouse/column-level-security.md) allows security administrators to set permissions that limit who can access sensitive columns in tables. It's set at the database level and can be implemented without the need to change the design of the data model or application tier.
+
+> [!NOTE]
+> Column-level security is supported in Azure Synapse and dedicated SQL pool (formerly SQL DW), but it's not supported for Apache Spark pool and serverless SQL pool.
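
Column-level security is applied with column-scoped `GRANT` statements. The following minimal sketch assumes a dbo.Membership table and a TestUser principal:

```sql
-- TestUser can query the listed columns of dbo.Membership, but no others (for example, an SSN column)
GRANT SELECT ON dbo.Membership (MemberID, FirstName, LastName, Phone, Email) TO TestUser;
```
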
+
+## Dynamic data masking
+
+[Dynamic data masking](../../azure-sql/database/dynamic-data-masking-overview.md) allows security administrators to restrict sensitive data exposure by masking it on read to non-privileged users. It helps prevent unauthorized access to sensitive data by enabling administrators to determine how the data is displayed at query time. Based on the identity of the authenticated user and their group assignment in the SQL pool, a query returns either masked or unmasked data. Masking is always applied regardless of whether data is accessed directly from a table or by using a view or stored procedure.
+
+> [!NOTE]
+> Dynamic data masking is supported in Azure Synapse and dedicated SQL pool (formerly SQL DW), but it's not supported for Apache Spark pool and serverless SQL pool.
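
The following is a hedged sketch of dynamic data masking rules; the table, columns, and role are assumptions. Masking is applied at query time; the underlying data isn't changed:

```sql
-- Mask the email and credit card columns for non-privileged users
ALTER TABLE dbo.Customers
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');
ALTER TABLE dbo.Customers
    ALTER COLUMN CreditCardNumber ADD MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)');

-- Principals with the UNMASK permission see the data in clear text
GRANT UNMASK TO DataAnalysts;
```
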
+
+## Synapse role-based access control
+
+Azure Synapse also includes [Synapse role-based access control (RBAC) roles](../security/synapse-workspace-understand-what-role-you-need.md) to manage different aspects of Synapse Studio. Leverage these built-in roles to assign permissions to users, groups, or other security principals to manage who can:
+
+- Publish code artifacts and list or access published code artifacts.
+- Execute code on Apache Spark pools and integration runtimes.
+- Access linked (data) services that are protected by credentials.
+- Monitor or cancel job executions, and review job output and execution logs.
+
+## Next steps
+
+In the [next article](security-white-paper-authentication.md) in this white paper series, learn about authentication.
synapse-analytics Security White Paper Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/guidance/security-white-paper-authentication.md
+
+ Title: "Azure Synapse Analytics security white paper: Authentication"
+description: Implement authentication mechanisms with Azure Synapse Analytics.
+++++ Last updated : 01/14/2022++
+# Azure Synapse Analytics security white paper: Authentication
++
+Authentication is the process of proving the user is who they claim to be. Authentication activities can be logged with [Azure SQL Auditing](../../azure-sql/database/auditing-overview.md), and an IT administrator can configure reports and alerts whenever a login from a suspicious location is attempted.
+
+## Benefits
+
+The benefits of these robust authentication mechanisms include:
+
+- Strong password policies to deter brute force attacks.
+- User password encryption.
+- [Firewall rules](../../azure-sql/database/firewall-configure.md).
+- SQL endpoints with [Multi-factor authentication](../sql/mfa-authentication.md).
+- Elimination of the need to manage credentials with [managed identity](../../data-factory/data-factory-service-identity.md).
+
+Azure Synapse, dedicated SQL pool (formerly SQL DW), and serverless SQL pool currently support [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) authentication and [SQL authentication](../sql/sql-authentication.md), while Apache Spark pool supports only Azure AD authentication. Multi-factor authentication and managed identity are fully supported for Azure Synapse, dedicated SQL pool (formerly SQL DW), serverless SQL pool, and Apache Spark pool.
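
For example, after deciding which identities need access, a database user can be created for an Azure AD identity in a SQL pool. The following is a minimal sketch; the user name and role assignment are hypothetical:

```sql
-- Create a database user for an Azure AD user or group
CREATE USER [analyst@contoso.com] FROM EXTERNAL PROVIDER;

-- Grant read access through a built-in database role
EXEC sp_addrolemember 'db_datareader', 'analyst@contoso.com';
```
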
+
+## Next steps
+
+In the [next article](security-white-paper-network-security.md) in this white paper series, learn about network security.
synapse-analytics Security White Paper Data Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/guidance/security-white-paper-data-protection.md
+
+ Title: "Azure Synapse Analytics security white paper: Data protection"
+description: Protect data to comply with federal, local, and company guidelines with Azure Synapse Analytics.
+++++ Last updated : 01/14/2022++
+# Azure Synapse Analytics security white paper: Data protection
++
+## Data discovery and classification
+
+Organizations need to protect their data to comply with federal, local, and company guidelines, and to mitigate the risk of a data breach. One challenge organizations face is: *How do you protect the data if you don't know where it is?* Another is: *What level of protection is needed?* After all, some datasets require more protection than others.
+
+Imagine an organization with hundreds or thousands of files stored in their data lake, and hundreds or thousands of tables in their databases. It would benefit from a process that automatically scans every row and column of the file system or table and classifies columns as *potentially* sensitive data. This process is known as *data discovery*.
+
+Once the data discovery process is complete, it provides classification recommendations based on a predefined set of patterns, keywords, and rules. Someone can then review the recommendations and apply sensitivity-classification labels to appropriate columns. This process is known as *classification*.
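
In a dedicated SQL pool, classification labels can also be applied and reviewed with T-SQL. The following sketch assumes hypothetical table, column, and label names:

```sql
-- Label a column as containing confidential contact information
ADD SENSITIVITY CLASSIFICATION TO dbo.Customers.Email
    WITH (LABEL = 'Confidential', INFORMATION_TYPE = 'Contact Info');

-- Review the classifications applied in the database
SELECT * FROM sys.sensitivity_classifications;
```
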
+
+Azure Synapse provides two options for data discovery and classification:
+
+- [Data Discovery & Classification](../../azure-sql/database/data-discovery-and-classification-overview.md), which is built into Azure Synapse and dedicated SQL pool (formerly SQL DW).
+- [Azure Purview](https://azure.microsoft.com/services/purview/), which is a unified data governance solution that helps manage and govern on-premises, multicloud, and software-as-a-service (SaaS) data. It can automate data discovery, lineage identification, and data classification. By producing a unified map of data assets and their relationships, it makes data easily discoverable.
+
+> [!NOTE]
+> Azure Purview data discovery and classification is in public preview for Azure Synapse, dedicated SQL pool (formerly SQL DW), and serverless SQL pool. However, data lineage is currently not supported for Azure Synapse, dedicated SQL pool (formerly SQL DW), and serverless SQL pool. Apache Spark pool only supports [lineage tracking](../../purview/how-to-lineage-spark-atlas-connector.md).
+
+## Data encryption
+
+Data is encrypted at rest and in transit.
+
+### Data at rest
+
+By default, Azure Storage [automatically encrypts all data](../../storage/common/storage-service-encryption.md) using 256-bit Advanced Encryption Standard encryption (AES 256). It's one of the strongest block ciphers available and is FIPS 140-2 compliant. The platform manages the encryption key, and it forms the *first layer* of data encryption. This encryption applies to both user and system databases, including the **master** database.
+
+Enabling [Transparent Data Encryption](../../azure-sql/database/transparent-data-encryption-tde-overview.md) (TDE) can add a *second layer* of data encryption. It performs real-time I/O encryption and decryption of database files, transaction logs files, and backups at rest without requiring any changes to the application. By default, it uses AES 256.
+
+By default, TDE protects the database encryption key (DEK) with a built-in server certificate (service managed). There's an option to bring your own key (BYOK) that can be securely stored in [Azure Key Vault](../../key-vault/general/basic-concepts.md).
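
For a dedicated SQL pool, TDE can be enabled with T-SQL and the encryption state verified from the catalog. A minimal sketch, assuming a database named ContosoDW and a connection to the **master** database:

```sql
-- Enable Transparent Data Encryption on a dedicated SQL pool database
ALTER DATABASE [ContosoDW] SET ENCRYPTION ON;

-- Verify which databases are encrypted
SELECT name, is_encrypted
FROM sys.databases;
```
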
+
+Azure Synapse SQL serverless pool and Apache Spark pool are analytic engines that work directly on [Azure Data Lake Gen2](../../storage/blobs/data-lake-storage-introduction.md) (ADLS Gen2) or [Azure Blob Storage](../../storage/blobs/storage-blobs-introduction.md). These analytic runtimes don't have any permanent storage and rely on Azure Storage encryption technologies for data protection. By default, Azure Storage encrypts all data using [server-side encryption](../../storage/common/storage-service-encryption.md) (SSE). It's enabled for all storage types (including ADLS Gen2) and cannot be disabled. SSE encrypts and decrypts data transparently using AES 256.
+
+There are two SSE encryption options:
+
+- **Microsoft-managed keys:** Microsoft manages every aspect of the encryption key, including key storage, ownership, and rotations. It's entirely transparent to customers.
+- **Customer-managed keys:** In this case, the symmetric key used to encrypt data in Azure Storage is encrypted using a customer-provided key. It supports RSA and RSA-HSM (Hardware Security Modules) keys of sizes 2048, 3072, and 4096. Keys can be securely stored in [Azure Key Vault](../../key-vault/general/overview.md) or [Azure Key Vault Managed HSM](../../key-vault/managed-hsm/overview.md). It provides fine-grained access control of the key and its management, including storage, backup, and rotations. For more information, see [Customer-managed keys for Azure Storage encryption](../../storage/common/customer-managed-keys-overview.md).
+
+While SSE forms the first layer of encryption, cautious customers can double-encrypt data by enabling a second layer of [256-bit AES encryption at the Azure Storage infrastructure layer](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption). Known as *infrastructure encryption*, it uses a platform-managed key together with a separate key from SSE. So, data in the storage account is encrypted twice: once at the service level and once at the infrastructure level, with two different encryption algorithms and two different keys.
+
+### Data in transit
+
+Azure Synapse, dedicated SQL pool (formerly SQL DW), and serverless SQL pool use the [Tabular Data Stream](/openspecs/windows_protocols/ms-tds/893fcc7e-8a39-4b3c-815a-773b7b982c50) (TDS) protocol to communicate between the SQL pool endpoint and a client machine. TDS depends on Transport Layer Security (TLS) for channel encryption, ensuring all data packets are secured and encrypted between endpoint and client machine. It uses a signed server certificate from the Certificate Authority (CA) used for TLS encryption, managed by Microsoft. Azure Synapse supports data encryption in transit with TLS v1.2, using AES 256 encryption.
+
+Azure Synapse uses TLS to ensure data is encrypted in motion. Dedicated SQL pools support TLS 1.0, TLS 1.1, and TLS 1.2 for encryption, and Microsoft-provided drivers use TLS 1.2 by default. Serverless SQL pool and Apache Spark pool use TLS 1.2 for all outbound connections.
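
As a hedged way to confirm that an individual session is encrypted, assuming the sys.dm_exec_connections DMV is available on the endpoint you connect to:

```sql
-- encrypt_option reports TRUE when the connection is protected by TLS
SELECT session_id, encrypt_option, auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```
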
+
+## Next steps
+
+In the [next article](security-white-paper-access-control.md) in this white paper series, learn about access control.
synapse-analytics Security White Paper Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/guidance/security-white-paper-introduction.md
+
+ Title: Azure Synapse Analytics security white paper
+description: Overview of the Azure Synapse Analytics security white paper series of articles.
+++++ Last updated : 01/14/2022++
+# Azure Synapse Analytics security white paper: Introduction
+
+**Summary:** [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/) is a Microsoft limitless analytics platform that integrates enterprise data warehousing and big data processing into a single managed environment with no system integration required. Azure Synapse provides the end-to-end tools for your analytic life cycle with:
+
+- [Pipelines](../../data-factory/concepts-pipelines-activities.md?context=/azure/synapse-analytics/context/context&amp;tabs=synapse-analytics) for data integration.
+- [Apache Spark pool](../spark/apache-spark-overview.md) for big data processing.
+- [Data Explorer](../data-explorer/data-explorer-overview.md) for log and time series analytics.
+- [Serverless SQL pool](../sql/on-demand-workspace-overview.md) for data exploration over [Azure Data Lake](https://azure.microsoft.com/solutions/data-lake/).
+- [Dedicated SQL pool](../sql-data-warehouse/sql-data-warehouse-overview-what-is.md?context=/azure/synapse-analytics/context/context) (formerly SQL DW) for enterprise data warehousing.
+- Deep integration with [Power BI](https://powerbi.microsoft.com/), [Azure Cosmos DB](../../cosmos-db/synapse-link.md?context=/azure/synapse-analytics/context/context), and [Azure Machine Learning](../machine-learning/what-is-machine-learning.md).
+
+Azure Synapse data security and privacy are non-negotiable. The purpose of this white paper, then, is to provide a comprehensive overview of Azure Synapse security features, which are enterprise-grade and industry-leading. The white paper comprises a series of articles that cover the following five layers of security:
+
+- Data protection
+- Access control
+- Authentication
+- Network security
+- Threat protection
+
+This white paper targets all enterprise security stakeholders. They include security administrators, network administrators, Azure administrators, workspace administrators, and database administrators.
+
+**Writers:** Vengatesh Parasuraman, Fretz Nuson, Ron Dunn, Khendr'a Reid, John Hoang, Nithesh Krishnappa, Mykola Kovalenko, Brad Schacht, Pedro Matinez, and Mark Pryce-Maher.
+
+**Technical Reviewers:** Nandita Valsan, Rony Thomas, Daniel Crawford, and Tammy Richter Jones.
+
+**Applies to:** Azure Synapse Analytics, dedicated SQL pool (formerly SQL DW), serverless SQL pool, and Apache Spark pool.
+
+> [!IMPORTANT]
+> This white paper does not apply to Azure SQL Database, Azure SQL Managed Instance, Azure Machine Learning, or Azure Databricks.
+
+## Introduction
+
+Frequent headlines about data breaches, malware infections, and malicious code injection are among an extensive list of security concerns for companies looking at cloud modernization. Enterprise customers require a cloud provider or service solution that can address their concerns, because they can't afford to get it wrong.
+
+Some common security questions include:
+
+- How can I control who can see what data?
+- What are the options for verifying a user's identity?
+- How is my data protected?
+- What network security technology can I use to protect the integrity, confidentiality, and access of my networks and data?
+- What are the tools that detect and notify me of threats?
+
+The purpose of this white paper is to provide answers to these common security questions, and many others.
+
+## Security layers
+
+Azure Synapse implements a multi-layered security architecture for end-to-end protection of your data. There are five layers:
+
+- [**Data protection**](security-white-paper-data-protection.md) to identify and classify sensitive data, and encrypt data at rest and in motion.
+- [**Access control**](security-white-paper-access-control.md) to determine a user's right to interact with data.
+- [**Authentication**](security-white-paper-authentication.md) to prove the identity of users and applications.
+- [**Network security**](security-white-paper-network-security.md) to isolate network traffic with private endpoints and virtual private networks.
+- [**Threat protection**](security-white-paper-threat-protection.md) to identify potential security threats, such as unusual access locations, SQL injection attacks, authentication attacks, and more.
++
+## Next steps
+
+In the [next article](security-white-paper-data-protection.md) in this white paper series, learn about data protection.
synapse-analytics Security White Paper Network Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/guidance/security-white-paper-network-security.md
+
+ Title: "Azure Synapse Analytics security white paper: Network security"
+description: Manage secure network access with Azure Synapse Analytics.
+++++ Last updated : 01/14/2022++
+# Azure Synapse Analytics security white paper: Network security
++
+To secure Azure Synapse, there's a range of network security options to consider.
+
+## Network security terminology
+
+This opening section provides an overview and definitions of some of the key Azure Synapse terms related to network security. Keep these definitions in mind while reading this article.
+
+### Synapse workspace
+
+A [*Synapse workspace*](../get-started-create-workspace.md) is a securable logical collection of all services offered by Azure Synapse. It includes dedicated SQL pools (formerly SQL DW), serverless SQL pools, Apache Spark pools, pipelines, and other services. Certain network configuration settings, such as IP firewall rules, managed virtual network, and approved tenants for exfiltration protection, are configured and secured at the workspace level.
+
+### Synapse workspace endpoints
+
+An endpoint is a point of an incoming connection to access a service. Each Synapse workspace has three distinct endpoints:
+
+- **Dedicated SQL endpoint** for accessing dedicated SQL pools.
+- **Serverless SQL endpoint** for accessing serverless SQL pools.
+- **Development endpoint** for accessing Apache Spark pools and pipeline resources in the workspace.
+
+These endpoints are automatically created when the Synapse workspace is created.
+
+### Synapse Studio
+
+[*Synapse Studio*](/learn/modules/explore-azure-synapse-studio/) is a secure web front-end development environment for Azure Synapse. It supports various roles, including the data engineer, data scientist, data developer, data analyst, and Synapse administrator.
+
+Use Synapse Studio to perform various data and management operations in Azure Synapse, such as:
+
+- Connecting to dedicated SQL pools and serverless SQL pools, and running SQL scripts.
+- Developing and running notebooks on Apache Spark pools.
+- Developing and running pipelines.
+- Monitoring dedicated SQL pools, serverless SQL pools, Apache Spark pools, and pipeline jobs.
+- Managing [Synapse RBAC permissions](../security/synapse-workspace-understand-what-role-you-need.md) of workspace items.
+- Creating [managed private endpoint connections](#managed-private-endpoint-connection) to data sources and sinks.
+
+Connections to workspace endpoints can be made using Synapse Studio. Also, it's possible to create [private endpoints](#private-endpoints) to ensure that communication to the workspace endpoints is private.
+
+## Public network access and firewall rules
+
+By default, the workspace endpoints are *public endpoints* when they're provisioned. Access to these workspace endpoints from any public network is enabled, including networks that are outside the customer's organization, without requiring a VPN connection or an ExpressRoute connection to Azure.
+
+All Azure services, including PaaS services like Azure Synapse, are protected by [DDoS basic protection](../../ddos-protection/ddos-protection-overview.md) to mitigate malicious attacks (active traffic monitoring, always on detection, and automatic attack mitigations).
+
+All traffic to workspace endpoints, even via public networks, is encrypted and secured in transit by the Transport Layer Security (TLS) protocol.
+
+To protect sensitive data, it's recommended to disable public access to the workspace endpoints entirely. Doing so ensures that all workspace endpoints can only be accessed using [private endpoints](#private-endpoints).
+
+Disabling public access for all the Synapse workspaces in a subscription or a resource group can be enforced by assigning an [Azure Policy](../../governance/policy/overview.md). It's also possible to disable public network access on a per-workspace basis, based on the sensitivity of the data processed by the workspace.
+
+However, if public access needs to be enabled, it's highly recommended to configure the IP firewall rules to allow inbound connections only from the specified list of public IP addresses.
+
+Consider enabling public access when the on-premises environment doesn't have VPN access or ExpressRoute to Azure, and it requires access to the workspace endpoints. In this case, specify a list of public IP addresses of the on-premises data centers and gateways in the IP firewall rules.
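
For a dedicated SQL pool (formerly SQL DW) hosted on a logical SQL server, server-level IP firewall rules can also be managed with T-SQL from the **master** database. A minimal sketch with example addresses:

```sql
-- Allow inbound connections from an on-premises gateway range (example addresses)
EXECUTE sp_set_firewall_rule
    @name = N'OnPremGateway',
    @start_ip_address = '203.0.113.10',
    @end_ip_address = '203.0.113.20';

-- Review the configured server-level rules
SELECT * FROM sys.firewall_rules;
```
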
+
+## Private endpoints
+
+An [Azure private endpoint](../../private-link/private-endpoint-overview.md) is a virtual network interface with a private IP address that's created in the customer's own [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md) (VNet) subnet. A private endpoint can be created for any Azure service that supports private endpoints, such as Azure Synapse, dedicated SQL pools (formerly SQL DW), Azure SQL Databases, Azure Storage, or any service in Azure powered by [Azure Private Link service](../../private-link/private-link-service-overview.md).
+
+It's possible to create private endpoints in the VNet for all three Synapse workspace endpoints, individually. This way, there could be three private endpoints created for three endpoints of a Synapse workspace: one for dedicated SQL pool, one for serverless SQL pool, and one for the development endpoint.
+
+Private endpoints have many security benefits compared to the public endpoints. Private endpoints in an Azure VNet can be accessed only from within:
+
+- The same VNet that contains this private endpoint.
+- Regionally or globally [peered](../../virtual-network/virtual-network-peering-overview.md) Azure VNets.
+- On-premises networks connected to Azure via [VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md) or ExpressRoute.
+
+The main benefit of private endpoints is that it's no longer necessary to expose workspace endpoints to the public internet. *The less exposure, the better.*
+
+The following diagram depicts private endpoints.
++
+The above diagram depicts the following key points:
+
+| **Item** | **Description** |
+| | |
+| ![Item 1.](media/common/icon-01-red-30x30.png) | Workstations from within the customer VNet access the Azure Synapse private endpoints. |
+| ![Item 2.](media/common/icon-02-red-30x30.png) | Peering between customer VNet and another VNet. |
+| ![Item 3.](media/common/icon-03-red-30x30.png) | Workstations from the peered VNet access the Azure Synapse private endpoints. |
+| ![Item 4.](media/common/icon-04-red-30x30.png) | The on-premises network accesses the Azure Synapse private endpoints through VPN or ExpressRoute. |
+| ![Item 5.](media/common/icon-05-red-30x30.png) | Workspace endpoints are mapped into the customer's VNet through private endpoints using the Azure Private Link service. |
+| ![Item 6.](media/common/icon-06-red-30x30.png) | Public access is disabled on the Synapse workspace. |
+
+In the following diagram, a private endpoint is mapped to an instance of a PaaS resource instead of the entire service. In the event of a security incident within the network, only the mapped resource instance is exposed, minimizing the exposure and threat of data leakage and exfiltration.
++
+The above diagram depicts the following key points:
+
+| **Item** | **Description** |
+| | |
+| ![Item 1.](media/common/icon-01-red-30x30.png) | The private endpoint in the customer VNet is mapped to a single dedicated SQL pool (formerly SQL DW) endpoint in Workspace A. |
+| ![Item 2.](media/common/icon-02-red-30x30.png) | Other SQL pool endpoints in the other workspaces (B and C) aren't accessible through this private endpoint, minimizing exposure. |
+
+Private endpoints work across Azure Active Directory (Azure AD) tenants and regions, so it's possible to create private endpoint connections to Synapse workspaces across tenants and regions. In this case, the connection goes through the [private endpoint connection approval workflow](../../private-link/private-endpoint-overview.md#access-to-a-private-link-resource-using-approval-workflow). The resource owner controls which private endpoint connections are approved or denied, and is in full control of who can connect to their workspaces.
+
+The following diagram depicts a private endpoint connection approval workflow.
++
+The above diagram depicts the following key points:
+
+| **Item** | **Description** |
+| | |
+| ![Item 1.](media/common/icon-01-red-30x30.png) | Dedicated SQL pool (formerly SQL DW) in Workspace A in Tenant A is accessed by a private endpoint in the customer VNet in Tenant A. |
+| ![Item 2.](media/common/icon-02-red-30x30.png) | The same dedicated SQL pool (formerly SQL DW) in Workspace A in Tenant A is accessed by a private endpoint in the customer VNet in Tenant B through a connection approval workflow. |
+
+## Managed VNet
+
+The [Synapse Managed VNet](../security/synapse-workspace-managed-vnet.md) feature provides fully managed network isolation for the Apache Spark pool and pipeline compute resources between Synapse workspaces. It can be configured at workspace creation time. It also provides network isolation for Spark clusters within the same workspace. Each workspace has its own virtual network, which is fully managed by Synapse. The Managed VNet isn't visible to users, so they can't make any modifications to it. Any pipeline or Apache Spark pool compute resources that Azure Synapse spins up in a Managed VNet get provisioned inside the workspace's own VNet. This way, there's full network isolation from other workspaces.
+
+This configuration eliminates the need to create and manage VNets and network security groups for the Apache Spark pool and pipeline resources, as is typically done by [VNet Injection](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject).
+
+Note that multi-tenant services in a Synapse workspace, such as dedicated SQL pools and serverless SQL pools, are **not** provisioned inside the Managed VNet.
+
+The following diagram depicts network isolation between two Managed VNets of Workspaces A and B with their Apache Spark pools and pipeline resources inside the Managed VNets.
++
+## Managed private endpoint connection
+
+A [managed private endpoint connection](../security/synapse-workspace-managed-private-endpoints.md) enables connections to any Azure PaaS service (that supports Private Link), securely and seamlessly, without the need to create a private endpoint for that service from the customer's VNet. Synapse automatically creates and manages the private endpoint. These connections are used by the compute resources that are provisioned inside the Synapse Managed VNet, such as Apache Spark pools and pipeline resources, to connect to the Azure PaaS services *privately*.
+
+For example, if you want to connect to your Azure storage account *privately* from your pipeline, the usual approach is to create a private endpoint for the storage account and use a self-hosted integration runtime to connect to your storage private endpoint. With Synapse Managed VNets, you can privately connect to your storage account using Azure integration runtime simply by creating a managed private endpoint connection directly to that storage account. This approach eliminates the need to have a self-hosted integration runtime to connect to your Azure PaaS services privately.
+
+Because multi-tenant services in a Synapse workspace, such as dedicated SQL pools and serverless SQL pools, aren't provisioned inside the Managed VNet, they don't use the managed private endpoint connections created in the workspace for their outbound connectivity.
+
+The following diagram depicts a managed private endpoint connecting to an Azure storage account from a Managed VNet in Workspace A.
++
+## Advanced Spark security
+
+A Managed VNet also provides some added advantages for Apache Spark pool users. There's no need to worry about configuring a *fixed* subnet address space as would be done in [VNet Injection](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject). Azure Synapse automatically takes care of allocating these address spaces dynamically for workloads.
+
+In addition, Spark pools operate as job clusters, which means each user gets their own Spark cluster when interacting with the workspace. Creating a Spark pool within the workspace only defines the metadata for what will be assigned to the user when executing Spark workloads. As a result, each user gets their own Spark cluster *in a dedicated subnet inside the Managed VNet* to execute workloads. Spark pool sessions from the same user execute on the same compute resources. This functionality provides three main benefits:
+
+- Greater security due to workload isolation based on the user.
+- Reduction of noisy neighbors.
+- Greater performance.
+
+## Data exfiltration protection
+
+Synapse workspaces with Managed VNet have an additional security feature called *[data exfiltration protection](../security/workspace-data-exfiltration-protection.md)*. It protects all egress traffic going out from Azure Synapse from all services, including dedicated SQL pools, serverless SQL pools, Apache Spark pools, and pipelines. It's configured by enabling data exfiltration protection at the workspace level (at workspace creation time) to restrict the outbound connections to an allowed list of Azure Active Directory (Azure AD) tenants. By default, only the home tenant of the workspace is added to the list, but it's possible to add or modify the list of Azure AD tenants anytime after the workspace is created. Adding more tenants is a highly privileged operation that requires the elevated role of [Synapse Administrator](../security/synapse-workspace-synapse-rbac-roles.md). It effectively controls exfiltration of data from Azure Synapse to other organizations and tenants, without the need to have complicated network security policies in place.
+
+For workspaces with data exfiltration protection enabled, Synapse pipelines and Apache Spark pools must use managed private endpoint connections for all their outbound connections.
+
+Dedicated SQL pool and serverless SQL pool don't use managed private endpoints for their outbound connectivity; however, any outbound connectivity from SQL pools can only be made to the *approved targets*, which are the targets of managed private endpoint connections.
+
+## Private link hubs for Synapse Studio
+
+[Azure Private Link Hubs](../security/synapse-private-link-hubs.md) allow you to connect securely to Synapse Studio from the customer's VNet using Azure Private Link. This feature is useful for customers who want to access the Synapse workspace using Synapse Studio from a controlled and restricted environment, where outbound internet traffic is restricted to a limited set of Azure services.
+
+It's achieved by creating a private link hub resource and a private endpoint to this hub from the VNet. This private endpoint is then used to access the studio using its fully qualified domain name (FQDN), *web.azuresynapse.net*, with a private IP address from the VNet. The private link hub resource downloads the static contents of Synapse Studio over Azure Private Link to the user's workstation. In addition, separate private endpoints must be created for the individual workspace endpoints to ensure that communication to the workspace endpoints is private.
+
+The following diagram depicts private link hubs for Synapse Studio.
++
+The above diagram depicts the following key points:
+
+| **Item** | **Description** |
+| | |
+| ![Item 1.](media/common/icon-01-red-30x30.png) | The workstation in a restricted customer VNet accesses the Synapse Studio using a web browser. |
+| ![Item 2.](media/common/icon-02-red-30x30.png) | A private endpoint created for private link hubs resource is used to download the static studio contents using Azure Private Link. |
+| ![Item 3.](media/common/icon-03-red-30x30.png) | Private endpoints created for Synapse workspace endpoints access the workspace resources securely using Azure Private Link. |
+| ![Item 4.](media/common/icon-04-red-30x30.png) | Network security group rules in the restricted customer VNet allow outbound traffic over port 443 to a limited set of Azure services, such as Azure Resource Manager, Azure Front Door, and Azure Active Directory. |
+| ![Item 5.](media/common/icon-05-red-30x30.png) | Network security group rules in the restricted customer VNet deny all other outbound traffic from the VNet. |
+| ![Item 6.](media/common/icon-06-red-30x30.png) | Public access is disabled on the Synapse workspace. |
+
+## Dedicated SQL pool (formerly SQL DW)
+
+Prior to the Azure Synapse offering, an Azure SQL data warehouse product named SQL DW was offered. It has since been renamed to [dedicated SQL pool (formerly SQL DW)](../sql-data-warehouse/sql-data-warehouse-overview-what-is.md).
+
+Dedicated SQL pool (formerly SQL DW) is created inside a logical Azure SQL server. It's a securable logical construct that acts as a central administrative point for a collection of databases including SQL DW and other Azure SQL databases.
+
+Most of the core network security features discussed in the previous sections of this article for Azure Synapse are also applicable to dedicated SQL pool (formerly SQL DW). They include:
+
+> [!div class="checklist"]
+> - IP firewall rules
+> - Disabling public network access
+> - Private endpoints
+> - Data exfiltration protection through outbound firewall rules
+
+Since dedicated SQL pool (formerly SQL DW) is a multi-tenant service, it's not provisioned inside a Managed VNet. It means some of the features, such as Managed VNet and managed private endpoint connections, aren't applicable to it.
+
+## Network security feature matrix
+
+The following comparison table provides a high-level overview of network security features supported across the Azure Synapse offerings:
+
+| **Feature** | **Azure Synapse: Apache Spark pool** | **Azure Synapse: Dedicated SQL pool** | **Azure Synapse: Serverless SQL pool** | **Dedicated SQL pool (formerly SQL DW)** |
+| | :-: | :-: | :-: | :-: |
+| IP firewall rules | Yes | Yes | Yes | Yes |
+| Disabling public access | Yes | Yes | Yes | Yes |
+| Private endpoints | Yes | Yes | Yes | Yes |
+| Data exfiltration protection | Yes | Yes | Yes | Yes |
+| Secure access using Synapse Studio | Yes | Yes | Yes | No |
+| Access from restricted network using Synapse private link hub | Yes | Yes | Yes | No |
+| Managed VNet and workspace-level network isolation | Yes | N/A | N/A | N/A |
+| Managed private endpoint connections for outbound connectivity | Yes | N/A | N/A | N/A |
+| User-level network isolation | Yes | N/A | N/A | N/A |
+
+## Next steps
+
+In the [next article](security-white-paper-threat-protection.md) in this white paper series, learn about threat protection.
synapse-analytics Security White Paper Threat Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/guidance/security-white-paper-threat-protection.md
+
+ Title: "Azure Synapse Analytics security white paper: Threat detection"
+description: Audit, protect, and monitor Azure Synapse Analytics.
+++++ Last updated : 01/14/2022++
+# Azure Synapse Analytics security white paper: Threat detection
++
+Azure Synapse provides SQL Auditing, SQL Threat Detection, and Vulnerability Assessment to audit, protect, and monitor databases.
+
+## Auditing
+
+[Auditing for Azure SQL Database](../../azure-sql/database/auditing-overview.md#overview) and Azure Synapse tracks database events and writes them to an audit log in an Azure storage account, Log Analytics workspace, or Event Hubs. For any database, auditing is important. It produces an audit trail over time to help understand database activity and gain insight into discrepancies and anomalies that could indicate business concerns or suspected security violations.
+When used with [Data discovery and classification](../../azure-sql/database/data-discovery-and-classification-overview.md), queries against sensitive columns or tables produce entries in a field named **data_sensitivity_information** of the **sql_audit_information** table.
+
+> [!NOTE]
+> Azure SQL Auditing applies to Azure Synapse, dedicated SQL pool (formerly SQL DW), and serverless SQL pool, but it doesn't apply to Apache Spark pool.
+
+## Threat detection
+
+[Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md) is a tool for security posture management and threat detection. It protects workloads running in Azure, including (but not exclusively) servers, app service, key vaults, Kubernetes services, storage accounts, and Azure SQL Databases.
+
+As one of the options available with Microsoft Defender for Cloud, [Microsoft Defender for SQL](../../azure-sql/database/azure-defender-for-sql.md) extends Defender for Cloud's data security package to secure databases. It can discover and mitigate potential database vulnerabilities by detecting anomalous activities that could be a potential threat to the database. Specifically, it continually monitors your database for:
+
+> [!div class="checklist"]
+> - Potential SQL injection attacks
+> - Anomalous database access and queries
+> - Suspicious database activity
+
+Alert notifications include details of the incident, and recommendations on how to investigate and remediate threats.
+
+> [!NOTE]
+> Microsoft Defender for SQL applies to Azure Synapse and dedicated SQL pool (formerly SQL DW). It doesn't apply to serverless SQL pool or Apache Spark pool.
+
+## Vulnerability assessment
+
+[SQL vulnerability assessment](../../azure-sql/database/sql-vulnerability-assessment.md) is part of the Microsoft Defender for SQL offering. It continually monitors the data warehouse, ensuring that databases are always maintained at a high level of security and that organizational policies are met. It provides a comprehensive security report along with actionable remediation steps for each issue found, making it easy to proactively manage database security posture even if you're not a security expert.
+
+> [!NOTE]
+> SQL vulnerability assessment applies to Azure Synapse and dedicated SQL pool (formerly SQL DW). It doesn't apply to serverless SQL pool or Apache Spark pool.
+
+## Compliance
+
+For an overview of Azure compliance offerings, download the latest version of the [Microsoft Azure Compliance Offerings](https://azure.microsoft.com/resources/microsoft-azure-compliance-offerings/) document.
+
+## Next steps
+
+For more information related to this white paper, check out the following resources:
+
+- [Azure Synapse Analytics Blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/bg-p/AzureSynapseAnalyticsBlog)
+- [Azure security baseline for Azure Synapse dedicated SQL pool (formerly SQL DW)](/security/benchmark/azure/baselines/synapse-analytics-security-baseline)
+- [Overview of the Azure Security Benchmark (v3)](/security/benchmark/azure/overview)
+- [Security baselines for Azure](/security/benchmark/azure/security-baselines-overview)
synapse-analytics Data Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/partner/data-integration.md
To create your data warehouse solution using the dedicated SQL pool in Azure Syn
| ![Dimodelo](./media/data-integration/dimodelo-logo.png) |**Dimodelo**<br>Dimodelo Data Warehouse Studio is a data warehouse automation tool for the Azure data platform. Dimodelo enhances developer productivity through a dedicated data warehouse modeling and ETL design tool, pattern-based best practice code generation, one-click deployment, and ETL orchestration. Dimodelo enhances maintainability with change propagation, allows developers to stay focused on business outcomes, and automates portability across data platforms.|[Product page](https://www.dimodelo.com/data-warehouse-studio-for-azure-synapse/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dimodelosolutions.dimodeloazurevs)<br> | | ![Fivetran](./media/data-integration/fivetran_logo.png) |**Fivetran**<br>Fivetran helps you centralize data from disparate sources. It features a zero maintenance, zero configuration data pipeline product with a growing list of built-in connectors to all the popular data sources. Setup takes five minutes after authenticating to data sources and target data warehouse.|[Product page](https://fivetran.com/)<br> | | ![HVR](./media/data-integration/hvr-logo.png) |**HVR**<br>HVR provides a real-time cloud data replication solution that supports enterprise modernization efforts. The HVR platform is a reliable, secure, and scalable way to quickly and efficiently integrate large data volumes in complex environments, enabling real-time data updates, access, and analysis. Global market leaders in various industries trust HVR to address their real-time data integration challenges and revolutionize their businesses. HVR is a privately held company based in San Francisco, with offices across North America, Europe, and Asia.|[Product page](https://www.hvr-software.com/solutions/azure-data-integration/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/hvr.hvr-for-azure?tab=Overview)<br>|
-| ![Incorta](./media/data-integration/incorta-logo.png) |**Incorta**<br>Incorta enables organizations to go from raw data to quickly discovering actionable insights in Azure by automating the various data preparation steps typically required to analyze complex data. which. Using a proprietary technology called Direct Data Mapping and Incorta's Blueprints (pre-built content library and best practices captured from real customer implementations), customers experience unprecedented speed and simplicity in accessing, organizing, and presenting data and insights for critical business decision-making.|[Product page](https://www.incorta.com/solutions/microsoft-azure-synapse)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/incorta.incorta?tab=Overview)<br>|
+| ![Incorta](./media/data-integration/incorta-logo.png) |**Incorta**<br>Incorta enables organizations to go from raw data to quickly discovering actionable insights in Azure by automating the various data preparation steps typically required to analyze complex data. Using a proprietary technology called Direct Data Mapping and Incorta's Blueprints (pre-built content library and best practices captured from real customer implementations), customers experience unprecedented speed and simplicity in accessing, organizing, and presenting data and insights for critical business decision-making.|[Product page](https://www.incorta.com/solutions/microsoft-azure-synapse)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/incorta.incorta_direct_data_platform)<br>|
| ![Informatica](./media/data-integration/informatica_logo.png) |**1.Informatica Cloud Services for Azure**<br> Informatica Cloud offers a best-in-class solution for self-service data migration, integration, and management capabilities. Customers can quickly and reliably import, and export petabytes of data to Azure from different kinds of sources. Informatica Cloud Services for Azure provides native, high volume, high-performance connectivity to Azure Synapse, SQL Database, Blob Storage, Data Lake Store, and Azure Cosmos DB. <br><br> **2.Informatica PowerCenter** PowerCenter is a metadata-driven data integration platform that jumpstarts and accelerates data integration projects to deliver data to the business more quickly than manual hand coding. It serves as the foundation for your data integration investments |**Informatica Cloud services for Azure**<br>[Product page](https://www.informatica.com/products/cloud-integration.html)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/informatica.iics-secure-agent)<br><br> **Informatica PowerCenter**<br>[Product page](https://www.informatica.com/products/data-integration/powercenter.html)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/informatica.powercenter-1041?tab=Overview)<br>| | ![Information Builders](./media/data-integration/informationbuilders_logo.png) |**Information Builders (Omni-Gen Data Management)**<br>Information Builder's Omni-Gen data management platform provides data integration, data quality, and master data management solutions. It makes it easy to access, move, and blend all data no matter the format, location, volume, or latency.|[Product page](https://www.informationbuilders.com/3i-platform) | | ![Loome](./media/data-integration/loome-logo.png) |**Loome**<br>Loome provides a unique governance workbench that seamlessly integrates with Azure Synapse. It allows you to quickly onboard your data to the cloud and load your entire data source into ADLS in Parquet format. You can orchestrate data pipelines across data engineering, data science and HPC workloads, including native integration with Azure Data Factory, Python, SQL, Synapse Spark, and Databricks. Loome allows you to easily monitor Data Quality exceptions reinforcing Synapse as your strategic Data Quality Hub. Loome keeps an audit trail of resolved issues, and proactively manages data quality with a fully automated data quality engine generating audience targeted alerts in real time.| [Product page](https://www.loomesoftware.com)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/bizdataptyltd1592265042221.loome?tab=Overview) |
synapse-analytics How To Set Up Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/how-to-set-up-access-control.md
Identify the following information about your storage:
| | | | Role | Storage Blob Data Contributor | | Assign access to |SERVICEPRINCIPAL |
- | Members |workspace1_SynapseAdmins, workspace1_SynapseContributors, and workspace1_SynapseComputeOperators|
+ | Members |workspace1_SynapseAdministrators, workspace1_SynapseContributors, and workspace1_SynapseComputeOperators|
![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
synapse-analytics Apache Spark 3 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-3-runtime.md
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This docume
| Apache Spark | 3.1 | | Operating System | Ubuntu 18.04 | | Java | 1.8.0_282 |
-| Scala | 2.12 |
+| Scala | 2.12.10 |
+| Hadoop | 3.1.1 |
| .NET Core | 3.1 | | .NET | 2.0.0 | | Delta Lake | 1.0 |
SparkCustomEvents_3.1.2-1.0.0.jar
TokenLibrary-assembly-1.0.jar
+VegasConnector-1.0.25.1_2.12.jar
+ accessors-smart-1.2.jar activation-1.1.1.jar
+adal4j-1.6.3.jar
+ aircompressor-0.10.jar algebra_2.12-2.0.0-M2.jar
avro-mapred-1.8.2-hadoop2.jar
aws-java-sdk-bundle-1.11.375.jar
-azure-eventhubs-3.2.2.jar
+azure-eventhubs-3.3.0.jar
-azure-eventhubs-spark_2.12-2.3.18.jar
+azure-eventhubs-spark_2.12-2.3.21.jar
azure-keyvault-core-1.0.0.jar azure-storage-7.0.1.jar
-azure-synapse-ml-pandas_2.12-0.1.1.jar
+azure-synapse-ml-pandas_2.12-1.0.0.jar
+
+azure-synapse-ml-predict_2.12-1.0.jar
bcpkix-jdk15on-1.60.jar
commons-net-3.1.jar
commons-pool-1.5.4.jar
+commons-pool2-2.6.2.jar
+ commons-text-1.6.jar compress-lzf-1.0.3.jar
config-1.3.4.jar
core-1.1.2.jar
-cosmos-analytics-spark-connector_3-1_2-12-assembly-3.0.3.jar
+cosmos-analytics-spark-connector_3-1_2-12-assembly-3.0.4.jar
+
+cudf-21.10.0-cuda11.jar
curator-client-2.12.0.jar
datanucleus-core-4.1.6.jar
datanucleus-rdbms-4.1.19.jar
-delta-core_2.12-1.0.0.0.jar
+delta-core_2.12-1.0.0.2b.jar
derby-10.12.1.1.jar
guice-4.0.jar
guice-servlet-4.0.jar
-hadoop-annotations-3.1.3.5.0-43944377.jar
+hadoop-annotations-3.1.1.5.0-50849917.jar
-hadoop-auth-3.1.3.5.0-43944377.jar
+hadoop-auth-3.1.1.5.0-50849917.jar
-hadoop-aws-3.1.3.5.0-43944377.jar
+hadoop-aws-3.1.1.5.0-50849917.jar
-hadoop-azure-3.1.3.5.0-43944377.jar
+hadoop-azure-3.1.1.5.0-50849917.jar
-hadoop-client-3.1.3.5.0-43944377.jar
+hadoop-client-3.1.1.5.0-50849917.jar
-hadoop-common-3.1.3.5.0-43944377.jar
+hadoop-common-3.1.1.5.0-50849917.jar
-hadoop-hdfs-client-3.1.3.5.0-43944377.jar
+hadoop-hdfs-client-3.1.1.5.0-50849917.jar
-hadoop-mapreduce-client-common-3.1.3.5.0-43944377.jar
+hadoop-mapreduce-client-common-3.1.1.5.0-50849917.jar
-hadoop-mapreduce-client-core-3.1.3.5.0-43944377.jar
+hadoop-mapreduce-client-core-3.1.1.5.0-50849917.jar
-hadoop-mapreduce-client-jobclient-3.1.3.5.0-43944377.jar
+hadoop-mapreduce-client-jobclient-3.1.1.5.0-50849917.jar
-hadoop-openstack-3.1.3.5.0-43944377.jar
+hadoop-openstack-3.1.1.5.0-50849917.jar
-hadoop-yarn-api-3.1.3.5.0-43944377.jar
+hadoop-yarn-api-3.1.1.5.0-50849917.jar
-hadoop-yarn-client-3.1.3.5.0-43944377.jar
+hadoop-yarn-client-3.1.1.5.0-50849917.jar
-hadoop-yarn-common-3.1.3.5.0-43944377.jar
+hadoop-yarn-common-3.1.1.5.0-50849917.jar
-hadoop-yarn-registry-3.1.3.5.0-43944377.jar
+hadoop-yarn-registry-3.1.1.5.0-50849917.jar
-hadoop-yarn-server-common-3.1.3.5.0-43944377.jar
+hadoop-yarn-server-common-3.1.1.5.0-50849917.jar
-hadoop-yarn-server-web-proxy-3.1.3.5.0-43944377.jar
+hadoop-yarn-server-web-proxy-3.1.1.5.0-50849917.jar
hdinsight-spark-metrics_3_1_2-1.0.0.jar
janino-3.0.16.jar
javassist-3.25.0-GA.jar
+javatuples-1.2.jar
+ javax.inject-1.jar javax.jdo-3.2.0-m3.jar
jta-1.1.jar
jul-to-slf4j-1.7.30.jar
+kafka-clients-2.4.1.5.0-50849917.jar
+ kerb-admin-1.0.1.jar kerb-client-1.0.1.jar
libfb303-0.9.3.jar
libthrift-0.12.0.jar
+libvegasjni.so
+ lightgbmlib-3.2.110.jar listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar
microsoft-spark.jar
minlog-1.3.0.jar
-mmlspark-1.0.0-rc3-150-8eda1df8-SNAPSHOT.jar
+mmlspark-1.0.0-rc3-179-327be83c-SNAPSHOT.jar
-mmlspark-cognitive-1.0.0-rc3-150-8eda1df8-SNAPSHOT.jar
+mmlspark-cognitive-1.0.0-rc3-179-327be83c-SNAPSHOT.jar
-mmlspark-core-1.0.0-rc3-150-8eda1df8-SNAPSHOT.jar
+mmlspark-core-1.0.0-rc3-179-327be83c-SNAPSHOT.jar
-mmlspark-deep-learning-1.0.0-rc3-150-8eda1df8-SNAPSHOT.jar
+mmlspark-deep-learning-1.0.0-rc3-179-327be83c-SNAPSHOT.jar
-mmlspark-lightgbm-1.0.0-rc3-150-8eda1df8-SNAPSHOT.jar
+mmlspark-lightgbm-1.0.0-rc3-179-327be83c-SNAPSHOT.jar
-mmlspark-opencv-1.0.0-rc3-150-8eda1df8-SNAPSHOT.jar
+mmlspark-opencv-1.0.0-rc3-179-327be83c-SNAPSHOT.jar
-mmlspark-vw-1.0.0-rc3-150-8eda1df8-SNAPSHOT.jar
+mmlspark-vw-1.0.0-rc3-179-327be83c-SNAPSHOT.jar
mssql-jdbc-8.4.1.jre8.jar
netty-all-4.1.51.Final.jar
nimbus-jose-jwt-4.41.1.jar
-notebook-utils-3.0.0-20210820.5.jar
+notebook-utils-3.0.0-20211110.6.jar
objenesis-2.6.jar
okhttp-2.7.5.jar
okio-1.14.0.jar
+onnxruntime_gpu-1.8.1.jar
+ opencsv-2.3.jar opencv-3.2.0-1.jar
postgresql-42.2.9.jar
protobuf-java-2.5.0.jar
-proton-j-0.33.4.jar
+proton-j-0.33.8.jar
py4j-0.10.9.jar pyrolite-4.30.jar
-qpid-proton-j-extensions-1.2.3.jar
+qpid-proton-j-extensions-1.2.4.jar
+
+rapids-4-spark_2.12-21.10.0.jar
re2j-1.1.jar
spark-3.1-rpc-history-server-app-listener_2.12-1.0.0.jar
spark-3.1-rpc-history-server-core_2.12-1.0.0.jar
-spark-avro_2.12-3.1.2.5.0-43944377.jar
+spark-avro_2.12-3.1.2.5.0-50849917.jar
+
+spark-catalyst_2.12-3.1.2.5.0-50849917.jar
-spark-catalyst_2.12-3.1.2.5.0-43944377.jar
+spark-cdm-connector-assembly-1.19.2.jar
-spark-core_2.12-3.1.2.5.0-43944377.jar
+spark-core_2.12-3.1.2.5.0-50849917.jar
-spark-enhancement_2.12-3.1.2.5.0-43944377.jar
+spark-enhancement_2.12-3.1.2.5.0-50849917.jar
spark-enhancementui_2.12-1.1.0.jar
-spark-graphx_2.12-3.1.2.5.0-43944377.jar
+spark-graphx_2.12-3.1.2.5.0-50849917.jar
-spark-hadoop-cloud_2.12-3.1.2.5.0-43944377.jar
+spark-hadoop-cloud_2.12-3.1.2.5.0-50849917.jar
-spark-hive-thriftserver_2.12-3.1.2.5.0-43944377.jar
+spark-hive-thriftserver_2.12-3.1.2.5.0-50849917.jar
-spark-hive_2.12-3.1.2.5.0-43944377.jar
+spark-hive_2.12-3.1.2.5.0-50849917.jar
spark-kusto-synapse-connector_3.1_2.12-1.0.0.jar
-spark-kvstore_2.12-3.1.2.5.0-43944377.jar
+spark-kvstore_2.12-3.1.2.5.0-50849917.jar
+
+spark-launcher_2.12-3.1.2.5.0-50849917.jar
+
+spark-microsoft-telemetry_2.12-3.1.2.5.0-50849917.jar
-spark-launcher_2.12-3.1.2.5.0-43944377.jar
+spark-microsoft-tools_2.12-3.1.2.5.0-50849917.jar
-spark-microsoft-telemetry_2.12-3.1.2.5.0-43944377.jar
+spark-mllib-local_2.12-3.1.2.5.0-50849917.jar
-spark-microsoft-tools_2.12-3.1.2.5.0-43944377.jar
+spark-mllib_2.12-3.1.2.5.0-50849917.jar
-spark-mllib-local_2.12-3.1.2.5.0-43944377.jar
+spark-mssql-connector-1.2.0.jar
-spark-mllib_2.12-3.1.2.5.0-43944377.jar
+spark-network-common_2.12-3.1.2.5.0-50849917.jar
-spark-network-common_2.12-3.1.2.5.0-43944377.jar
+spark-network-shuffle_2.12-3.1.2.5.0-50849917.jar
-spark-network-shuffle_2.12-3.1.2.5.0-43944377.jar
+spark-repl_2.12-3.1.2.5.0-50849917.jar
-spark-repl_2.12-3.1.2.5.0-43944377.jar
+spark-sketch_2.12-3.1.2.5.0-50849917.jar
-spark-sketch_2.12-3.1.2.5.0-43944377.jar
+spark-sql-kafka-0-10_2.12-3.1.2.5.0-50849917.jar
-spark-sql_2.12-3.1.2.5.0-43944377.jar
+spark-sql_2.12-3.1.2.5.0-50849917.jar
-spark-streaming_2.12-3.1.2.5.0-43944377.jar
+spark-streaming-kafka-0-10-assembly_2.12-3.1.2.5.0-50849917.jar
-spark-tags_2.12-3.1.2.5.0-43944377.jar
+spark-streaming-kafka-0-10_2.12-3.1.2.5.0-50849917.jar
-spark-unsafe_2.12-3.1.2.5.0-43944377.jar
+spark-streaming_2.12-3.1.2.5.0-50849917.jar
-spark-yarn_2.12-3.1.2.5.0-43944377.jar
+spark-tags_2.12-3.1.2.5.0-50849917.jar
-spark_diagnostic_cli-1.0.7_spark-3.1.2.jar
+spark-token-provider-kafka-0-10_2.12-3.1.2.5.0-50849917.jar
+
+spark-unsafe_2.12-3.1.2.5.0-50849917.jar
+
+spark-yarn_2.12-3.1.2.5.0-50849917.jar
+
+spark_diagnostic_cli-1.0.10_spark-3.1.2.jar
spire-macros_2.12-0.17.0-M1.jar
spire_2.12-0.17.0-M1.jar
spray-json_2.12-1.3.2.jar
-sqlanalyticsconnector_3.1.2-1.0.0.jar
+sqlanalyticsconnector_3.1.2-1.0.1.jar
stax-api-1.0.1.jar
structuredstreamforspark_2.12-3.0.1-2.1.3.jar
super-csv-2.2.0.jar
-synapse-spark-telemetry_2.12-0.0.4.jar
+synapse-spark-telemetry_2.12-0.0.6.jar
+
+synfs-3.0.0-20211110.6.jar
threeten-extra-1.5.0.jar
xbean-asm7-shaded-4.15.jar
xz-1.5.jar
-zookeeper-3.4.8.5.0-43944377.jar
+zookeeper-3.4.6.5.0-50849917.jar
zstd-jni-1.4.8-1.jar
synapse-analytics Apache Spark Machine Learning Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-machine-learning-concept.md
Every Apache Spark pool in Azure Synapse Analytics comes with a set of pre-loade
- [XGBoost](https://xgboost.readthedocs.io/en/latest/) is a popular machine learning library that contains optimized algorithms for training decision trees and random forests. -- [PyTorch](https://pytorch.org/) & [Tensorflow](https://www.tensorflow.org/) are powerful Python deep learning libraries. Within an Apache Spark pool in Azure Synapse Analytics, you can use these libraries to build single-machine models by setting the number of executors on your pool to zero. Even though Apache Spark is not functional under this configuration, it is a simple and cost-effective way to create single-machine models.
+- [PyTorch](https://pytorch.org/) & [TensorFlow](https://www.tensorflow.org/) are powerful Python deep learning libraries. Within an Apache Spark pool in Azure Synapse Analytics, you can use these libraries to build single-machine models by setting the number of executors on your pool to zero. Even though Apache Spark is not functional under this configuration, it is a simple and cost-effective way to create single-machine models.
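
As an illustration of the single-machine pattern described above, the following sketch trains a tiny PyTorch model entirely on the driver node; the layer sizes and random data are hypothetical and only show that no Spark executors are involved.

```python
import torch
from torch import nn

# Tiny regression model trained on the driver node only; no Spark executors are used.
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Hypothetical random training data standing in for a real dataset.
X, y = torch.randn(256, 10), torch.randn(256, 1)

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```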
## Track model development [MLFlow](https://www.mlflow.org/) is an open-source library for managing the life cycle of your machine learning experiments. MLFlow Tracking is a component of MLflow that logs and tracks your training run metrics and model artifacts. To learn more about how you can use MLFlow Tracking through Azure Synapse Analytics and Azure Machine Learning, visit this tutorial on [how to use MLFlow](../../machine-learning/how-to-use-mlflow.md).
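
A minimal sketch of MLflow tracking as described above; the experiment name and logged values are hypothetical, and in a Synapse or Azure Machine Learning workspace the tracking URI is typically pre-configured for you.

```python
import mlflow

# Hypothetical experiment name; replace with your own.
mlflow.set_experiment("synapse-demo-experiment")

with mlflow.start_run():
    # Log a parameter and a metric for this training run.
    mlflow.log_param("max_depth", 5)
    mlflow.log_metric("rmse", 0.42)
```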
synapse-analytics Apache Spark Secure Credentials With Tokenlibrary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-secure-credentials-with-tokenlibrary.md
print(connection_string)
```csharp using Microsoft.Spark.Extensions.Azure.Synapse.Analytics.Utils;
-string connectionString = TokenLibrary.getSecret("<AZURE KEY VAULT NAME>", "<SECRET KEY>", "<LINKED SERVICE NAME>");
+string connectionString = TokenLibrary.GetSecret("<AZURE KEY VAULT NAME>", "<SECRET KEY>", "<LINKED SERVICE NAME>");
Console.WriteLine(connectionString); ```
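
For reference, the equivalent lookup from a PySpark notebook can go through `mssparkutils`; this is a minimal sketch, and the Key Vault, secret, and linked service names are placeholders.

```python
from notebookutils import mssparkutils

# Placeholder names; substitute your Key Vault, secret, and Azure Key Vault linked service.
connection_string = mssparkutils.credentials.getSecret(
    "<AZURE KEY VAULT NAME>", "<SECRET KEY>", "<LINKED SERVICE NAME>"
)
print(connection_string)
```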
synapse-analytics Synapse File Mount Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/synapse-file-mount-api.md
Assuming you have one gen2 storage account named **storegen2** and the account h
![Screenshot of gen2 storage account](./media/synapse-file-mount-api/gen2-storage-account.png)
-To mount container **mycontainer**, mssparkutils need to check whether you have the permission to access the container at first, currently we support three authentication methods to trigger mount operation, **LinkeService**, **accountKey**, and **sastoken**.
+To mount the container **mycontainer**, mssparkutils first needs to check whether you have permission to access the container. Currently, three authentication methods are supported to trigger the mount operation: **LinkedService**, **accountKey**, and **sastoken**.
### Via Linked Service (recommended):
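
A minimal sketch of a linked-service mount, assuming a Gen2 linked service named `mygen2account` (hypothetical) that points to the **storegen2** account:

```python
from notebookutils import mssparkutils

# Mount the container through an existing ADLS Gen2 linked service (name is hypothetical).
mssparkutils.fs.mount(
    "abfss://mycontainer@storegen2.dfs.core.windows.net",
    "/test",
    {"linkedService": "mygen2account"}
)
```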
mssparkutils.fs.unmount("/test")
+ Mounting an ADLS Gen1 storage account is not currently supported.
-
+
synapse-analytics Release Notes 10 0 10106 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/release-notes-10-0-10106-0.md
Previously updated : 4/30/2020 Last updated : 1/13/2022
tags: azure-synapse
This article summarizes the new features and improvements in the recent releases of [dedicated SQL pool (formerly SQL DW)](sql-data-warehouse-overview-what-is.md) in Azure Synapse Analytics. The article also lists notable content updates that aren't directly related to the release but published in the same time frame. For improvements to other Azure services, see [Service updates](https://azure.microsoft.com/updates).
+> [!NOTE]
+> For the newest release updates on Azure Synapse Analytics, including dedicated SQL pools, please refer to the [Azure Synapse Analytics blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/bg-p/AzureSynapseAnalyticsBlog/label-name/Monthly%20Update), [What's new in Azure Synapse Analytics?](../whats-new.md), or the Synapse Studio homepage in the Azure portal.
+ ## Check your dedicated SQL pool (formerly SQL DW) version As new features are rolled out to all regions, check the version deployed to your instance and the latest release notes for feature availability. To check the version, connect to your dedicated SQL pool (formerly SQL DW) via SQL Server Management Studio (SSMS) and run `SELECT @@VERSION;` to return the current version. Use this version to confirm which release has been applied to your dedicated SQL pool (formerly SQL DW). The date in the output identifies the month for the release applied to your dedicated SQL pool (formerly SQL DW). This only applies to service-level improvements.
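
If you prefer to check the version programmatically rather than from SSMS, a minimal `pyodbc` sketch is shown below (not part of the original article); the server, database, and credentials are placeholders.

```python
import pyodbc

# Placeholder connection details; point this at your dedicated SQL pool (formerly SQL DW).
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<your-server>.database.windows.net;"
    "DATABASE=<your-dedicated-sql-pool>;"
    "UID=<user>;PWD=<password>"
)

# The date in the returned version string identifies the month of the applied release.
row = conn.cursor().execute("SELECT @@VERSION;").fetchone()
print(row[0])
```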
For tooling improvements, make sure you have the correct version installed speci
## Next steps -- [create a dedicated SQL pool(formerly SQL DW)](create-data-warehouse-portal.md)
+- [Create a dedicated SQL pool (formerly SQL DW)](create-data-warehouse-portal.md)
## More information
synapse-analytics What Is A Data Warehouse Unit Dwu Cdwu https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/what-is-a-data-warehouse-unit-dwu-cdwu.md
Increasing DWUs:
- Linearly changes performance of the system for scans, aggregations, and CTAS statements - Increases the number of readers and writers for PolyBase load operations-- Increases the maximum number of concurrent queries and concurrency slots.
+- Increases the maximum number of concurrent queries and concurrency slots
## Service Level Objective
Each performance tier uses a slightly different unit of measure for their data w
Both DWUs and cDWUs support scaling compute up or down, and pausing compute when you don't need to use the data warehouse. These operations are all on-demand. Gen2 uses a local disk-based cache on the compute nodes to improve performance. When you scale or pause the system, the cache is invalidated and so a period of cache warming is required before optimal performance is achieved.
-Each SQL server (for example, myserver.database.windows.net) has a [Database Transaction Unit (DTU)](../../azure-sql/database/service-tiers-dtu.md) quota that allows a specific number of data warehouse units. For more information, see the [workload management capacity limits](sql-data-warehouse-service-capacity-limits.md#workload-management).
- ## Capacity limits Each SQL server (for example, myserver.database.windows.net) has a [Database Transaction Unit (DTU)](../../azure-sql/database/service-tiers-dtu.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json) quota that allows a specific number of data warehouse units. For more information, see the [workload management capacity limits](sql-data-warehouse-service-capacity-limits.md#workload-management).
synapse-analytics Overview Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/overview-features.md
Consumption models in Synapse SQL enable you to use different database objects.
| | Dedicated | Serverless | | | | |
-| **Tables** | [Yes](/sql/t-sql/statements/create-table-azure-sql-data-warehouse?view=azure-sqldw-latest&preserve-view=true) | No, the in-database tables are not supported. Serverless SQL pool can query only [external tables](develop-tables-external-tables.md?tabs=native) that reference data placed on [Azure Storage](#storage-options) |
+| **Tables** | [Yes](/sql/t-sql/statements/create-table-azure-sql-data-warehouse?view=azure-sqldw-latest&preserve-view=true) | No, the in-database tables are not supported. Serverless SQL pool can query only [external tables](develop-tables-external-tables.md?tabs=native) that reference data placed on [Azure Storage](#data-access) |
| **Views** | [Yes](/sql/t-sql/statements/create-view-transact-sql?view=azure-sqldw-latest&preserve-view=true). Views can use [query language elements](#query-language) that are available in dedicated model. | [Yes](/sql/t-sql/statements/create-view-transact-sql?view=azure-sqldw-latest&preserve-view=true), you can create views over [external tables](develop-tables-external-tables.md?tabs=native) and other views. Views can use [query language elements](#query-language) that are available in serverless model. | | **Schemas** | [Yes](/sql/t-sql/statements/create-schema-transact-sql?view=azure-sqldw-latest&preserve-view=true) | [Yes](/sql/t-sql/statements/create-schema-transact-sql?view=azure-sqldw-latest&preserve-view=true), schemas are supported. |
-| **Temporary tables** | [Yes](../sql-data-warehouse/sql-data-warehouse-tables-temporary.md?context=/azure/synapse-analytics/context/context) | No, temporary tables might be used just to store some information from system views. |
-| **User defined procedures** | [Yes](/sql/t-sql/statements/create-procedure-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, stored procedures can be placed in any user databases (not `master` database). |
+| **Temporary tables** | [Yes](../sql-data-warehouse/sql-data-warehouse-tables-temporary.md?context=/azure/synapse-analytics/context/context) | Temporary tables can be used only to store information from system views, literals, or other temporary tables. UPDATE and DELETE on temporary tables are also supported, and you can join temporary tables with system views. You cannot select data from an external table into a temporary table, or join a temporary table with an external table; these operations fail because external data and temporary tables cannot be mixed in the same query. |
+| **User defined procedures** | [Yes](/sql/t-sql/statements/create-procedure-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, stored procedures can be placed in any user database (not the `master` database). Procedures can only read external data and use [query language elements](#query-language) that are available in the serverless pool. |
| **User defined functions** | [Yes](/sql/t-sql/statements/create-function-sql-data-warehouse?view=azure-sqldw-latest&preserve-view=true) | Yes, only inline table-valued functions. Scalar user-defined functions are not supported. | | **Triggers** | No | No, serverless SQL pools do not allow changing data, so the triggers cannot react on data changes. |
-| **External tables** | [Yes](/sql/t-sql/statements/create-external-table-transact-sql?view=azure-sqldw-latest&preserve-view=true). See supported [data formats](#data-formats). | [Yes](/sql/t-sql/statements/create-external-table-transact-sql?view=azure-sqldw-latest&preserve-view=true). See the supported [data formats](#data-formats). |
-| **Caching queries** | Yes, multiple forms (SSD-based caching, in-memory, resultset caching). In addition, Materialized View are supported | No. Only file statistics are cached. |
+| **External tables** | [Yes](/sql/t-sql/statements/create-external-table-transact-sql?view=azure-sqldw-latest&preserve-view=true). See supported [data formats](#data-formats). | Yes, [external tables](/sql/t-sql/statements/create-external-table-transact-sql?view=azure-sqldw-latest&preserve-view=true) are available. See the supported [data formats](#data-formats). |
+| **Caching queries** | Yes, multiple forms (SSD-based caching, in-memory, resultset caching). In addition, materialized views are supported. | No, only the file statistics are cached. |
| **Table variables** | [No](/sql/t-sql/data-types/table-transact-sql?view=azure-sqldw-latest&preserve-view=true), use temporary tables | No, table variables are not supported. | | **[Table distribution](../sql-data-warehouse/sql-data-warehouse-tables-distribute.md?context=/azure/synapse-analytics/context/context)** | Yes | No, table distributions are not supported. | | **[Table indexes](../sql-data-warehouse/sql-data-warehouse-tables-index.md?context=/azure/synapse-analytics/context/context)** | Yes | No, indexes are not supported. |
-| **Table partitioning** | [Yes](../sql-data-warehouse/sql-data-warehouse-tables-partition.md?context=/azure/synapse-analytics/context/context). | No. You can partition files using Hive-partition folder structure and create partitioned tables in Spark. The Spark partitioning will be [synchronized with the serverless pool](../metadat#partitioned-views) on folder partition structure, but external tables cannot be created on partitioned folders. |
+| **Table partitioning** | [Yes](../sql-data-warehouse/sql-data-warehouse-tables-partition.md?context=/azure/synapse-analytics/context/context). | External tables do not support partitioning. You can partition files using a Hive partition folder structure and create partitioned tables in Spark. The Spark partitioning will be [synchronized with the serverless pool](../metadat#partitioned-views) based on the folder partition structure, but external tables cannot be created on partitioned folders. |
| **[Statistics](develop-tables-statistics.md)** | Yes | Yes, statistics are [created on external files](develop-tables-statistics.md#statistics-in-serverless-sql-pool). | | **Workload management, resource classes, and concurrency control** | Yes, see [workload management, resource classes, and concurrency control](../sql-data-warehouse/resource-classes-for-workload-management.md?context=/azure/synapse-analytics/context/context). | No, serverless SQL pool automatically manages the resources. | | **Cost control** | Yes, using scale-up and scale-down actions. | Yes, using [the Azure portal or T-SQL procedure](./data-processed.md#cost-control). |
Query languages used in Synapse SQL can have different supported features depend
| | Dedicated | Serverless | | | | |
-| **SELECT statement** | Yes. Transact-SQL query clauses [FOR XML/FOR JSON](/sql/t-sql/queries/select-for-clause-transact-sql?view=azure-sqldw-latest&preserve-view=true), [MATCH](/sql/t-sql/queries/match-sql-graph?view=azure-sqldw-latest&preserve-view=true), OFFSET/FETCH are not supported. | Yes. Transact-SQL query clauses [FOR XML](/sql/t-sql/queries/select-for-clause-transact-sql?view=azure-sqldw-latest&preserve-view=true), [MATCH](/sql/t-sql/queries/match-sql-graph?view=azure-sqldw-latest&preserve-view=true), [PREDICT](/sql/t-sql/queries/predict-transact-sql?view=azure-sqldw-latest&preserve-view=true), GROUPNG SETS, and query hints are not supported. |
-| **INSERT statement** | Yes | No, upload new data to Data lake using other tools. |
-| **UPDATE statement** | Yes | No, but data updated using Spark is automatically available in serverless pool. |
-| **DELETE statement** | Yes | No, but data deleted using Spark is automatically available in serverless pool. |
-| **MERGE statement** | Yes ([preview](/sql/t-sql/statements/merge-transact-sql?view=azure-sqldw-latest&preserve-view=true)) | No, but data merged using Spark is automatically available in serverless pool. |
+| **SELECT statement** | Yes. The `SELECT` statement is supported, but some Transact-SQL query clauses, such as [FOR XML/FOR JSON](/sql/t-sql/queries/select-for-clause-transact-sql?view=azure-sqldw-latest&preserve-view=true), [MATCH](/sql/t-sql/queries/match-sql-graph?view=azure-sqldw-latest&preserve-view=true), and OFFSET/FETCH, are not supported. | Yes, the `SELECT` statement is supported, but some Transact-SQL query clauses, such as [FOR XML](/sql/t-sql/queries/select-for-clause-transact-sql?view=azure-sqldw-latest&preserve-view=true), [MATCH](/sql/t-sql/queries/match-sql-graph?view=azure-sqldw-latest&preserve-view=true), [PREDICT](/sql/t-sql/queries/predict-transact-sql?view=azure-sqldw-latest&preserve-view=true), GROUPING SETS, and query hints, are not supported. |
+| **INSERT statement** | Yes | No, upload new data to the data lake using Spark or other tools. Use Azure Cosmos DB with the analytical store for highly transactional workloads. |
+| **UPDATE statement** | Yes | No, update Parquet/CSV data using Spark; the changes are automatically available in the serverless pool. Use Azure Cosmos DB with the analytical store for highly transactional workloads. |
+| **DELETE statement** | Yes | No, delete Parquet/CSV data using Spark; the changes are automatically available in the serverless pool. Use Azure Cosmos DB with the analytical store for highly transactional workloads. |
+| **MERGE statement** | Yes ([preview](/sql/t-sql/statements/merge-transact-sql?view=azure-sqldw-latest&preserve-view=true)) | No, merge Parquet/CSV data using Spark; the changes are automatically available in the serverless pool. |
| **[Transactions](develop-transactions.md)** | Yes | Yes, applicable only on the meta-data objects. | | **[Labels](develop-label.md)** | Yes | No | | **Data load** | Yes. Preferred utility is [COPY](/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest&preserve-view=true) statement, but the system supports both BULK load (BCP) and [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true) for data loading. | No, you can initially load data into an external table using CETAS statement. | | **Data export** | Yes. Using [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true). | Yes. Using [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true). |
-| **Types** | Yes, all Transact-SQL types except [cursor](/sql/t-sql/data-types/cursor-transact-sql?view=azure-sqldw-latest&preserve-view=true), [hierarchyid](/sql/t-sql/data-types/hierarchyid-data-type-method-reference?view=azure-sqldw-latest&preserve-view=true), [ntext, text, and image](/sql/t-sql/data-types/ntext-text-and-image-transact-sql?view=azure-sqldw-latest&preserve-view=true), [rowversion](/sql/t-sql/data-types/rowversion-transact-sql?view=azure-sqldw-latest&preserve-view=true), [Spatial Types](/sql/t-sql/spatial-geometry/spatial-types-geometry-transact-sql?view=azure-sqldw-latest&preserve-view=true), [sql\_variant](/sql/t-sql/data-types/sql-variant-transact-sql?view=azure-sqldw-latest&preserve-view=true), and [xml](/sql/t-sql/xml/xml-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, all Transact-SQL types except [cursor](/sql/t-sql/data-types/cursor-transact-sql?view=azure-sqldw-latest&preserve-view=true), [hierarchyid](/sql/t-sql/data-types/hierarchyid-data-type-method-reference?view=azure-sqldw-latest&preserve-view=true), [ntext, text, and image](/sql/t-sql/data-types/ntext-text-and-image-transact-sql?view=azure-sqldw-latest&preserve-view=true), [rowversion](/sql/t-sql/data-types/rowversion-transact-sql?view=azure-sqldw-latest&preserve-view=true), [Spatial Types](/sql/t-sql/spatial-geometry/spatial-types-geometry-transact-sql?view=azure-sqldw-latest&preserve-view=true), [sql\_variant](/sql/t-sql/data-types/sql-variant-transact-sql?view=azure-sqldw-latest&preserve-view=true), [xml](/sql/t-sql/xml/xml-transact-sql?view=azure-sqldw-latest&preserve-view=true), and Table type |
-| **Cross-database queries** | No | Yes, 3-part-name references are supported including [USE](/sql/t-sql/language-elements/use-transact-sql?view=azure-sqldw-latest&preserve-view=true) statement. |
+| **Types** | Yes, all Transact-SQL types except [cursor](/sql/t-sql/data-types/cursor-transact-sql?view=azure-sqldw-latest&preserve-view=true), [hierarchyid](/sql/t-sql/data-types/hierarchyid-data-type-method-reference?view=azure-sqldw-latest&preserve-view=true), [ntext, text, and image](/sql/t-sql/data-types/ntext-text-and-image-transact-sql?view=azure-sqldw-latest&preserve-view=true), [rowversion](/sql/t-sql/data-types/rowversion-transact-sql?view=azure-sqldw-latest&preserve-view=true), [Spatial Types](/sql/t-sql/spatial-geometry/spatial-types-geometry-transact-sql?view=azure-sqldw-latest&preserve-view=true), [sql\_variant](/sql/t-sql/data-types/sql-variant-transact-sql?view=azure-sqldw-latest&preserve-view=true), and [xml](/sql/t-sql/xml/xml-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, all Transact-SQL types are supported, except [cursor](/sql/t-sql/data-types/cursor-transact-sql?view=azure-sqldw-latest&preserve-view=true), [hierarchyid](/sql/t-sql/data-types/hierarchyid-data-type-method-reference?view=azure-sqldw-latest&preserve-view=true), [ntext, text, and image](/sql/t-sql/data-types/ntext-text-and-image-transact-sql?view=azure-sqldw-latest&preserve-view=true), [rowversion](/sql/t-sql/data-types/rowversion-transact-sql?view=azure-sqldw-latest&preserve-view=true), [Spatial Types](/sql/t-sql/spatial-geometry/spatial-types-geometry-transact-sql?view=azure-sqldw-latest&preserve-view=true), [sql\_variant](/sql/t-sql/data-types/sql-variant-transact-sql?view=azure-sqldw-latest&preserve-view=true), [xml](/sql/t-sql/xml/xml-transact-sql?view=azure-sqldw-latest&preserve-view=true), and Table type. See how to [map Parquet column types to SQL types here](develop-openrowset.md#type-mapping-for-parquet). |
+| **Cross-database queries** | No | Yes, 3-part-name references are supported including [USE](/sql/t-sql/language-elements/use-transact-sql?view=azure-sqldw-latest&preserve-view=true) statement. The queries can reference the serverless SQL databases or the Lake databases in the same workspace. |
| **Built-in/system functions (analysis)** | Yes, all Transact-SQL [Analytic](/sql/t-sql/functions/analytic-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), Conversion, [Date and Time](/sql/t-sql/functions/date-and-time-data-types-and-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), Logical, [Mathematical](/sql/t-sql/functions/mathematical-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true) functions, except [CHOOSE](/sql/t-sql/functions/logical-functions-choose-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [PARSE](/sql/t-sql/functions/parse-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, all Transact-SQL [Analytic](/sql/t-sql/functions/analytic-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), Conversion, [Date and Time](/sql/t-sql/functions/date-and-time-data-types-and-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), Logical, [Mathematical](/sql/t-sql/functions/mathematical-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true) functions. | | **Built-in/system functions ([string](/sql/t-sql/functions/string-functions-transact-sql))** | Yes. All Transact-SQL [String](/sql/t-sql/functions/string-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), [JSON](/sql/t-sql/functions/json-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), and Collation functions, except [STRING_ESCAPE](/sql/t-sql/functions/string-escape-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [TRANSLATE](/sql/t-sql/functions/translate-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes. All Transact-SQL [String](/sql/t-sql/functions/string-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), [JSON](/sql/t-sql/functions/json-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), and Collation functions. | | **Built-in/system functions ([Cryptographic](/sql/t-sql/functions/cryptographic-functions-transact-sql))** | Some | `HASHBYTES` is the only supported cryptographic function in serverless SQL pools. |
-| **Built-in/system table-value functions** | Yes, [Transact-SQL Rowset functions](/sql/t-sql/functions/functions?view=azure-sqldw-latest&preserve-view=true#rowset-functions), except [OPENXML](/sql/t-sql/functions/openxml-transact-sql?view=azure-sqldw-latest&preserve-view=true), [OPENDATASOURCE](/sql/t-sql/functions/opendatasource-transact-sql?view=azure-sqldw-latest&preserve-view=true), [OPENQUERY](/sql/t-sql/functions/openquery-transact-sql?view=azure-sqldw-latest&preserve-view=true), and [OPENROWSET](/sql/t-sql/functions/openrowset-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, [Transact-SQL Rowset functions](/sql/t-sql/functions/functions?view=azure-sqldw-latest&preserve-view=true#rowset-functions), except [OPENXML](/sql/t-sql/functions/openxml-transact-sql?view=azure-sqldw-latest&preserve-view=true), [OPENDATASOURCE](/sql/t-sql/functions/opendatasource-transact-sql?view=azure-sqldw-latest&preserve-view=true), and [OPENQUERY](/sql/t-sql/functions/openquery-transact-sql?view=azure-sqldw-latest&preserve-view=true) |
-| **Built-in/system aggregates** | Transact-SQL built-in aggregates except, except [CHECKSUM_AGG](/sql/t-sql/functions/checksum-agg-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [GROUPING_ID](/sql/t-sql/functions/grouping-id-transact-sql?view=azure-sqldw-latest&preserve-view=true) | All Transact-SQL built-in [aggregates](/sql/t-sql/functions/aggregate-functions-transact-sql?view=sql-server-ver15) are supported. |
+| **Built-in/system table-value functions** | Yes, [Transact-SQL Rowset functions](/sql/t-sql/functions/functions?view=azure-sqldw-latest&preserve-view=true#rowset-functions), except [OPENXML](/sql/t-sql/functions/openxml-transact-sql?view=azure-sqldw-latest&preserve-view=true), [OPENDATASOURCE](/sql/t-sql/functions/opendatasource-transact-sql?view=azure-sqldw-latest&preserve-view=true), [OPENQUERY](/sql/t-sql/functions/openquery-transact-sql?view=azure-sqldw-latest&preserve-view=true), and [OPENROWSET](/sql/t-sql/functions/openrowset-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, all [Transact-SQL Rowset functions](/sql/t-sql/functions/functions?view=azure-sqldw-latest&preserve-view=true#rowset-functions) are supported, except [OPENXML](/sql/t-sql/functions/openxml-transact-sql?view=azure-sqldw-latest&preserve-view=true), [OPENDATASOURCE](/sql/t-sql/functions/opendatasource-transact-sql?view=azure-sqldw-latest&preserve-view=true), and [OPENQUERY](/sql/t-sql/functions/openquery-transact-sql?view=azure-sqldw-latest&preserve-view=true) |
+| **Built-in/system aggregates** | Transact-SQL built-in aggregates, except [CHECKSUM_AGG](/sql/t-sql/functions/checksum-agg-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [GROUPING_ID](/sql/t-sql/functions/grouping-id-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, all Transact-SQL built-in [aggregates](/sql/t-sql/functions/aggregate-functions-transact-sql?view=sql-server-ver15) are supported. |
| **Operators** | Yes, all [Transact-SQL operators](/sql/t-sql/language-elements/operators-transact-sql?view=azure-sqldw-latest&preserve-view=true) except [!>](/sql/t-sql/language-elements/not-greater-than-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [!<](/sql/t-sql/language-elements/not-less-than-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, all [Transact-SQL operators](/sql/t-sql/language-elements/operators-transact-sql?view=azure-sqldw-latest&preserve-view=true) | | **Control of flow** | Yes. All [Transact-SQL Control-of-flow statement](/sql/t-sql/language-elements/control-of-flow?view=azure-sqldw-latest&preserve-view=true) except [CONTINUE](/sql/t-sql/language-elements/continue-transact-sql?view=azure-sqldw-latest&preserve-view=true), [GOTO](/sql/t-sql/language-elements/goto-transact-sql?view=azure-sqldw-latest&preserve-view=true), [RETURN](/sql/t-sql/language-elements/return-transact-sql?view=azure-sqldw-latest&preserve-view=true), [USE](/sql/t-sql/language-elements/use-transact-sql?view=azure-sqldw-latest&preserve-view=true), and [WAITFOR](/sql/t-sql/language-elements/waitfor-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes. All [Transact-SQL Control-of-flow statement](/sql/t-sql/language-elements/control-of-flow?view=azure-sqldw-latest&preserve-view=true) SELECT query in `WHILE (...)` condition |
-| **DDL statements (CREATE, ALTER, DROP)** | Yes. All Transact-SQL DDL statement applicable to the supported object types | Yes. All Transact-SQL DDL statement applicable to the supported object types |
+| **DDL statements (CREATE, ALTER, DROP)** | Yes. All Transact-SQL DDL statements applicable to the supported object types | Yes, all Transact-SQL DDL statements applicable to the supported object types are supported. |
## Security
Synapse SQL pools enable you to use built-in security features to secure your da
| | Dedicated | Serverless | | | | |
-| **Logins** | N/A (only contained users are supported in databases) | Yes server-level Azure AD and SQL logins are supported. |
-| **Users** | N/A (only contained users are supported in databases) | Yes |
-| **[Contained users](/sql/relational-databases/security/contained-database-users-making-your-database-portable?view=azure-sqldw-latest&preserve-view=true)** | Yes. **Note:** only one Azure AD user can be unrestricted admin | No |
+| **Logins** | N/A (only contained users are supported in databases) | Yes, server-level Azure AD and SQL logins are supported. |
+| **Users** | N/A (only contained users are supported in databases) | Yes, database users are supported. |
+| **[Contained users](/sql/relational-databases/security/contained-database-users-making-your-database-portable?view=azure-sqldw-latest&preserve-view=true)** | Yes. **Note:** only one Azure AD user can be unrestricted admin | No, the contained users are not supported. |
| **SQL username/password authentication**| Yes | Yes, users can access serverless SQL pool using their usernames and passwords. | | **Azure Active Directory (Azure AD) authentication**| Yes, Azure AD users | Yes, Azure AD logins and users can access serverless SQL pools using their Azure AD identities. |
-| **Storage Azure Active Directory (Azure AD) passthrough authentication** | Yes | [Yes](develop-storage-files-storage-access-control.md?tabs=user-identity#supported-storage-authorization-types), applicable to Azure AD logins. The identity of the Azure AD user is passed to the storage if a credential is not specified. Azure AD passthrough authentication is not available for the SQL users. |
+| **Storage Azure Active Directory (Azure AD) passthrough authentication** | Yes | Yes, [Azure AD passthrough authentication](develop-storage-files-storage-access-control.md?tabs=user-identity#supported-storage-authorization-types) is applicable to Azure AD logins. The identity of the Azure AD user is passed to the storage if a credential is not specified. Azure AD passthrough authentication is not available for the SQL users. |
| **Storage SAS token authentication** | No | Yes, using [DATABASE SCOPED CREDENTIAL](/sql/t-sql/statements/create-database-scoped-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true) in [EXTERNAL DATA SOURCE](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true) or instance-level [CREDENTIAL](/sql/t-sql/statements/create-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true). | | **Storage Access Key authentication** | Yes, using [DATABASE SCOPED CREDENTIAL](/sql/t-sql/statements/create-database-scoped-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true) in [EXTERNAL DATA SOURCE](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true) | No |
-| **Storage [Managed Identity](../../data-factory/data-factory-service-identity.md?context=/azure/synapse-analytics/context/context&tabs=synapse-analytics) authentication** | Yes, using [Managed Service Identity Credential](../../azure-sql/database/vnet-service-endpoint-rule-overview.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&preserve-view=true&toc=%2fazure%2fsynapse-analytics%2ftoc.json&view=azure-sqldw-latest&preserve-view=true) | Yes, using [Managed Identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#database-scoped-credential) credential. |
-| **Storage Application identity authentication** | [Yes](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, you can create a [credential](develop-storage-files-storage-access-control.md?tabs=service-principal#database-scoped-credential) with a [service principal application ID](develop-storage-files-storage-access-control.md?tabs=service-principal#supported-storage-authorization-types) that will be used to authenticate on the storage. |
+| **Storage [Managed Identity](../../data-factory/data-factory-service-identity.md?context=/azure/synapse-analytics/context/context&tabs=synapse-analytics) authentication** | Yes, using [Managed Service Identity Credential](../../azure-sql/database/vnet-service-endpoint-rule-overview.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&preserve-view=true&toc=%2fazure%2fsynapse-analytics%2ftoc.json&view=azure-sqldw-latest&preserve-view=true) | Yes, the query can access the storage using the workspace [Managed Identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#database-scoped-credential) credential. |
+| **Storage Application identity/Service principal (SPN) authentication** | [Yes](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, you can create a [credential](develop-storage-files-storage-access-control.md?tabs=service-principal#database-scoped-credential) with a [service principal application ID](develop-storage-files-storage-access-control.md?tabs=service-principal#supported-storage-authorization-types) that will be used to authenticate on the storage. |
| **Server-level roles** | No | Yes, sysadmin, public, and other server roles are supported. | | **SERVER SCOPED CREDENTIAL** | No | Yes, server-scoped credentials are used by `OPENROWSET` calls that do not use an explicit data source. | | **Permissions - [Server-level](/sql/relational-databases/security/authentication-access/server-level-roles)** | No | Yes, for example, `CONNECT ANY DATABASE` and `SELECT ALL USER SECURABLES` enable a user to read data from any database. | | **Database-scoped roles** | Yes | Yes, you can use the `db_owner`, `db_datareader`, and `db_ddladmin` roles. | | **DATABASE SCOPED CREDENTIAL** | Yes, used in external data sources. | Yes, used in external data sources. | | **Permissions - [Database-level](/sql/relational-databases/security/authentication-access/database-level-roles?view=azure-sqldw-latest&preserve-view=true)** | Yes | Yes |
-| **Permissions - Schema-level** | Yes, including ability to GRANT, DENY, and REVOKE permissions to users/logins on the schema | Yes, including ability to GRANT, DENY, and REVOKE permissions to users/logins on the schema |
-| **Permissions - Object-level** | Yes, including ability to GRANT, DENY, and REVOKE permissions to users | Yes, including ability to GRANT, DENY, and REVOKE permissions to users/logins on the system objects that are supported |
-| **Permissions - [Column-level security](../sql-data-warehouse/column-level-security.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json)** | Yes | Yes |
-| **Built-in/system security &amp; identity functions** | Some Transact-SQL security functions and operators: `CURRENT_USER`, `HAS_DBACCESS`, `IS_MEMBER`, `IS_ROLEMEMBER`, `SESSION_USER`, `SUSER_NAME`, `SUSER_SNAME`, `SYSTEM_USER`, `USER`, `USER_NAME`, `EXECUTE AS`, `OPEN/CLOSE MASTER KEY` | Some Transact-SQL security functions and operators: `CURRENT_USER`, `HAS_DBACCESS`, `HAS_PERMS_BY_NAME`, `IS_MEMBER', 'IS_ROLEMEMBER`, `IS_SRVROLEMEMBER`, `SESSION_USER`, `SESSION_CONTEXT`, `SUSER_NAME`, `SUSER_SNAME`, `SYSTEM_USER`, `USER`, `USER_NAME`, `EXECUTE AS`, and `REVERT`. Security functions cannot be used to query external data (store the result in variable that can be used in the query). |
+| **Permissions - Schema-level** | Yes, including the ability to GRANT, DENY, and REVOKE permissions to users/logins on the schema | Yes, you can specify schema-level permissions, including the ability to GRANT, DENY, and REVOKE permissions to users/logins on the schema |
+| **Permissions - Object-level** | Yes, including the ability to GRANT, DENY, and REVOKE permissions to users | Yes, you can GRANT, DENY, and REVOKE permissions to users/logins on the supported system objects |
+| **Permissions - [Column-level security](../sql-data-warehouse/column-level-security.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json)** | Yes | Yes, column-level security is supported in serverless SQL pools. |
+| **Built-in/system security &amp; identity functions** | Some Transact-SQL security functions and operators: `CURRENT_USER`, `HAS_DBACCESS`, `IS_MEMBER`, `IS_ROLEMEMBER`, `SESSION_USER`, `SUSER_NAME`, `SUSER_SNAME`, `SYSTEM_USER`, `USER`, `USER_NAME`, `EXECUTE AS`, `OPEN/CLOSE MASTER KEY` | Some Transact-SQL security functions and operators are supported: `CURRENT_USER`, `HAS_DBACCESS`, `HAS_PERMS_BY_NAME`, `IS_MEMBER`, `IS_ROLEMEMBER`, `IS_SRVROLEMEMBER`, `SESSION_USER`, `SESSION_CONTEXT`, `SUSER_NAME`, `SUSER_SNAME`, `SYSTEM_USER`, `USER`, `USER_NAME`, `EXECUTE AS`, and `REVERT`. Security functions cannot be used to query external data (store the result in a variable that can be used in the query). |
| **Row-level security** | [Yes](/sql/relational-databases/security/row-level-security?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) | No built-in support. Use custom views as a [workaround](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/how-to-implement-row-level-security-in-serverless-sql-pools/ba-p/2354759). | | **Transparent Data Encryption (TDE)** | [Yes](../../azure-sql/database/transparent-data-encryption-tde-overview.md) | No | | **Data Discovery & Classification** | [Yes](../../azure-sql/database/data-discovery-and-classification-overview.md) | No | | **Vulnerability Assessment** | [Yes](../../azure-sql/database/sql-vulnerability-assessment.md) | No |
-| **Advanced Threat Protection** | [Yes](../../azure-sql/database/threat-detection-overview.md)
+| **Advanced Threat Protection** | [Yes](../../azure-sql/database/threat-detection-overview.md) | No |
| **Auditing** | [Yes](../../azure-sql/database/auditing-overview.md) | [Yes](../../azure-sql/database/auditing-overview.md) |
-| **[Firewall rules](../security/synapse-workspace-ip-firewall.md)**| Yes | Yes |
-| **[Private endpoint](../security/synapse-workspace-managed-private-endpoints.md)**| Yes | Yes |
+| **[Firewall rules](../security/synapse-workspace-ip-firewall.md)**| Yes | Yes, the firewall rules can be set on serverless SQL endpoint. |
+| **[Private endpoint](../security/synapse-workspace-managed-private-endpoints.md)**| Yes | Yes, the private endpoint can be set on serverless SQL pool. |
Dedicated SQL pool and serverless SQL pool use standard Transact-SQL language to query data. For detailed differences, look at the [Transact-SQL language reference](/sql/t-sql/language-reference).
You can use various tools to connect to Synapse SQL to query data.
| | Dedicated | Serverless | | | | |
-| **Synapse Studio** | Yes, SQL scripts | Yes, SQL scripts |
+| **Synapse Studio** | Yes, SQL scripts | Yes, SQL scripts. Use SSMS or ADS instead of Synapse Studio if you are returning a large amount of data as a result. |
| **Power BI** | Yes | [Yes](tutorial-connect-power-bi-desktop.md) | | **Azure Analysis Service** | Yes | Yes |
-| **Azure Data Studio** | Yes | Yes, version 1.18.0 or higher. SQL scripts and SQL Notebooks are supported. |
-| **SQL Server Management Studio** | Yes | Yes, version 18.5 or higher |
+| **Azure Data Studio** | Yes | [Yes](get-started-azure-data-studio.md), version 1.18.0 or higher. SQL scripts and SQL Notebooks are supported. |
+| **SQL Server Management Studio** | Yes | [Yes](get-started-ssms.md), version 18.5 or higher |
> [!NOTE] > You can use SSMS to connect to a serverless SQL pool and query it. SSMS is partially supported starting from version 18.5; you can use it to connect and query only. Most applications that use the standard Transact-SQL language can query both dedicated and serverless consumption models of Synapse SQL.
-## Storage options
+## Data access
Data that is analyzed can be stored on various storage types. The following table lists all available storage options:
Data that is analyzed can be stored in various storage formats. The following ta
| **Hive RC** | [Yes](/sql/t-sql/statements/create-external-file-format-transact-sql?view=azure-sqldw-latest&preserve-view=true) | No | | **JSON** | Yes | [Yes](query-json-files.md) | | **Avro** | No | No |
-| **[Delta Lake](https://delta.io/)** | No | [Yes](query-delta-lake-format.md) |
+| **[Delta Lake](https://delta.io/)** | No | [Yes](query-delta-lake-format.md), including files with [nested types](query-parquet-nested-types.md) |
| **[CDM](/common-data-model/)** | No | No | ## Next steps
virtual-desktop Safe Url List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/safe-url-list.md
The Azure virtual machines you create for Azure Virtual Desktop must have access
|*eh.servicebus.windows.net|443|Agent traffic|AzureCloud| |*xt.table.core.windows.net|443|Agent traffic|AzureCloud| |*xt.queue.core.windows.net|443|Agent traffic|AzureCloud|
-|catalogartifact.azureedge.net|443|Azure Marketplace|AzureCloud|
+|catalogartifact.azureedge.net|443|Azure Marketplace|AzureFrontDoor.Frontend|
|kms.core.windows.net|1688|Windows activation|Internet| |mrsglobalsteus2prod.blob.core.windows.net|443|Agent and SXS stack updates|AzureCloud| |wvdportalstorageblob.blob.core.windows.net|443|Azure portal support|AzureCloud|
virtual-machines Disks Shared Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-shared-enable.md
description: Configure an Azure managed disk with shared disks so that you can s
Previously updated : 09/01/2021 Last updated : 01/13/2022
Before using the following template, replace `[parameters('dataDiskName')]`, `[r
+## Share an existing disk
+
+To share an existing disk, or to update how many VMs it can be mounted to, set the `maxShares` parameter with either the Azure PowerShell module or the Azure CLI. You can also set `maxShares` to 1 if you want to disable sharing.
+
+> [!IMPORTANT]
+> The value of `maxShares` can only be set or changed when a disk is unmounted from all VMs. See [Disk sizes](#disk-sizes) for the allowed values of `maxShares`.
+> Before detaching a disk, record the LUN ID for when you re-attach it.
+
+### PowerShell
+
+```azurepowershell
+$datadiskconfig = Get-AzDisk -DiskName "mySharedDisk"
+$datadiskconfig.maxShares = 3
+
+Update-AzDisk -ResourceGroupName 'myResourceGroup' -DiskName 'mySharedDisk' -Disk $datadiskconfig
+```
+
+### CLI
+
+```azurecli
+#Modifying a disk to enable or modify sharing configuration
+
+az disk update --resource-group myResourceGroup --name mySharedDisk --max-shares 5
+```
+ ## Using Azure shared disks with your VMs Once you've deployed a shared disk with `maxShares>1`, you can mount the disk to one or more of your VMs.
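
As a rough illustration of attaching the shared disk to a VM, the sketch below uses the Azure SDK for Python; the subscription, resource group, VM, disk, and LUN values are placeholders, and the PowerShell and CLI steps in the linked guidance remain the documented path.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import DataDisk, ManagedDiskParameters

# Placeholder identifiers; substitute your own subscription, resource group, VM, and disk.
client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

disk = client.disks.get("myResourceGroup", "mySharedDisk")
vm = client.virtual_machines.get("myResourceGroup", "myVM01")

# Attach the shared disk at an unused LUN; record the LUN so you can re-attach consistently.
vm.storage_profile.data_disks.append(
    DataDisk(lun=0, create_option="Attach", managed_disk=ManagedDiskParameters(id=disk.id))
)
client.virtual_machines.begin_create_or_update("myResourceGroup", "myVM01", vm).result()
```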
virtual-machines Disks Shared https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-shared.md
description: Learn about sharing Azure managed disks across multiple Linux VMs.
Previously updated : 09/03/2021 Last updated : 01/13/2022
Some popular applications running on WSFC include:
Azure shared disks are supported on: - [SUSE SLE HA 15 SP1 and above](https://www.suse.com/c/azure-shared-disks-excercise-w-sles-for-sap-or-sle-ha/) - [Ubuntu 18.04 and above](https://discourse.ubuntu.com/t/ubuntu-high-availability-corosync-pacemaker-shared-disk-environments/14874)-- [RHEL developer preview on any RHEL 8 version](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/deploying_red_hat_enterprise_linux_8_on_public_cloud_platforms/index?lb_target=production#azure-configuring-shared-block-storage_configuring-rhel-high-availability-on-azure)
+- [RHEL 8.3 and above](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/deploying_red_hat_enterprise_linux_8_on_public_cloud_platforms/index?lb_target=production#azure-configuring-shared-block-storage_configuring-rhel-high-availability-on-azure)
+ - It may be possible to use RHEL 7 or an older version of RHEL 8 with shared disks; contact SharedDiskFeedback@microsoft.com for more information.
- [Oracle Enterprise Linux](https://docs.oracle.com/en/operating-systems/oracle-linux/8/availability/hacluster-1.html) Linux clusters can use cluster managers such as [Pacemaker](https://wiki.clusterlabs.org/wiki/Pacemaker). Pacemaker builds on [Corosync](http://corosync.github.io/corosync/), enabling cluster communications for applications deployed in highly available environments. Some common clustered filesystems include [ocfs2](https://oss.oracle.com/projects/ocfs2/) and [gfs2](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/global_file_system_2/ch-overview-gfs2). You can use SCSI Persistent Reservation (SCSI PR) and/or STONITH Block Device (SBD) based clustering models for arbitrating access to the disk. When using SCSI PR, you can manipulate reservations and registrations using utilities such as [fence_scsi](http://manpages.ubuntu.com/manpages/eoan/man8/fence_scsi.8.html) and [sg_persist](https://linux.die.net/man/8/sg_persist).
virtual-machines Hpc Compute Infiniband Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/hpc-compute-infiniband-linux.md
vm-linux Previously updated : 10/14/2021 Last updated : 1/13/2022
# InfiniBand Driver Extension for Linux
-This extension installs InfiniBand OFED drivers on InfiniBand and SR-IOV-enabled ('r' sizes) [H-series](../sizes-hpc.md) and [N-series](../sizes-gpu.md) VMs running Linux. Depending on the VM family, the extension installs the appropriate drivers for the Connect-X NIC.
+This extension installs InfiniBand OFED drivers on InfiniBand and SR-IOV-enabled ('r' sizes) [H-series](../sizes-hpc.md) and [N-series](../sizes-gpu.md) VMs running Linux. Depending on the VM family, the extension installs the appropriate drivers for the Connect-X NIC. It does not install the InfiniBand ND drivers on the non-SR-IOV enabled [H-series](../sizes-hpc.md) and [N-series](../sizes-gpu.md) VMs.
Instructions on manual installation of the OFED drivers are available in [Enable InfiniBand on HPC VMs](../workloads/hpc/enable-infiniband.md#manual-installation).
An extension is also available to install InfiniBand drivers for [Windows VMs](h
This extension supports the following OS distros, depending on driver support for specific OS version.
-| Distribution | Version |
-|||
-| Linux: Ubuntu | 16.04 LTS, 18.04 LTS, 20.04 LTS |
-| Linux: CentOS | 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8.1, 8,2 |
-| Linux: Red Hat Enterprise Linux | 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8.1, 8,2 |
+| Distribution | Version | InfiniBand NIC drivers |
+||||
+| Ubuntu | 16.04 LTS, 18.04 LTS, 20.04 LTS | CX3-Pro, CX5, CX6 |
+| CentOS | 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8.1, 8.2 | CX3-Pro, CX5, CX6 |
+| Red Hat Enterprise Linux | 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8.1, 8.2 | CX3-Pro, CX5, CX6 |
+
+For the latest list of supported OS and driver versions, refer to [resources.json](https://github.com/Azure/azhpc-extensions/blob/master/InfiniBand/resources.json).
### Internet connectivity
virtual-machines Hpc Compute Infiniband Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/hpc-compute-infiniband-windows.md
vm-windows Previously updated : 10/14/2021 Last updated : 1/13/2022
This extension supports the following OS distros, depending on driver support fo
| Windows Server 2012 R2 | CX3-Pro, CX5, CX6 | | Windows Server 2012 | CX3-Pro, CX5, CX6 |
For the latest list of supported OS and driver versions, refer to [resources.json](https://github.com/Azure/azhpc-extensions/blob/master/InfiniBand/resources.json).
+ ### Internet connectivity The Microsoft Azure Extension for InfiniBand Drivers requires that the target VM is connected to and has access to the internet.
virtual-machines Windows Desktop Multitenant Hosting Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/windows-desktop-multitenant-hosting-deployment.md
For more information, see [Multitenant Hosting for Windows 10](https://www.micro
## Subscription Licenses that qualify for Multitenant Hosting Rights
-Using the [Microsoft admin center](/microsoft-365/admin/admin-overview/about-the-admin-center), you can confirm if a user has been assigned a Windows 10 supported license.
For more details about subscription licenses that qualify to run Windows 10 on Azure, download the [Windows 10 licensing brief for Virtual Desktops](https://download.microsoft.com/download/3/D/4/3D42BDC2-6725-4B29-B75A-A5B04179958B/Licensing_brief_PLT_Windows_10_licensing_for_Virtual_Desktops.pdf).
> [!IMPORTANT] > Users **must** have one of the below subscription licenses in order to use Windows 10 images in Azure for any production workload. If you do not have one of these subscription licenses, they can be purchased through your [Cloud Service Partner](https://azure.microsoft.com/overview/choosing-a-cloud-service-provider/) or directly through [Microsoft](https://www.microsoft.com/microsoft-365?rtc=1).
vpn-gateway Vpn Gateway About Vpngateways https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-about-vpngateways.md
# Customer intent: As someone with a basic network background, but is new to Azure, I want to understand the capabilities of Azure VPN Gateway so that I can securely connect to my Azure virtual networks. Previously updated : 07/08/2021 Last updated : 01/12/2022
When you create a VPN gateway, gateway VMs are deployed to the gateway subnet an
A VPN gateway connection relies on multiple resources that are configured with specific settings. Most of the resources can be configured separately, although some resources must be configured in a certain order.
-### <a name="diagrams"></a>Design
+### <a name="connectivity"></a> Connectivity
-It's important to know that there are different configurations available for VPN gateway connections. You need to determine which configuration best fits your needs. For example, Point-to-Site, Site-to-Site, and coexisting ExpressRoute/Site-to-Site connections all have different instructions and configuration requirements. For information about design and to view connection topology diagrams, see [Design](design.md).
+Because you can create multiple connection configurations using VPN Gateway, you need to determine which configuration best fits your needs. Point-to-Site, Site-to-Site, and coexisting ExpressRoute/Site-to-Site connections all have different instructions and configuration requirements. For connection diagrams and corresponding links to configuration steps, see [VPN Gateway design](design.md).
+
+* [Site-to-Site VPN connections](design.md#s2smulti)
+* [Point-to-Site VPN connections](design.md#P2S)
+* [VNet-to-VNet VPN connections](design.md#V2V)
### <a name="planningtable"></a>Planning table
-The following table can help you decide the best connectivity option for your solution.
+The following table can help you decide the best connectivity option for your solution. Note that ExpressRoute is not a part of VPN Gateway, but is included in the table.
[!INCLUDE [cross-premises](../../includes/vpn-gateway-cross-premises-include.md)]
When you create a virtual network gateway, you specify the gateway SKU that you
## <a name="availability"></a>Availability Zones
-VPN gateways can be deployed in Azure Availability Zones. This brings resiliency, scalability, and higher availability to virtual network gateways. Deploying gateways in Azure Availability Zones physically and logically separates gateways within a region, while protecting your on-premises network connectivity to Azure from zone-level failures. see [About zone-redundant virtual network gateways in Azure Availability Zones](about-zone-redundant-vnet-gateways.md).
+VPN gateways can be deployed in Azure Availability Zones. This brings resiliency, scalability, and higher availability to virtual network gateways. Deploying gateways in Azure Availability Zones physically and logically separates gateways within a region, while protecting your on-premises network connectivity to Azure from zone-level failures. See [About zone-redundant virtual network gateways in Azure Availability Zones](about-zone-redundant-vnet-gateways.md).
## <a name="pricing"></a>Pricing
vpn-gateway Vpn Gateway About Vpngateways https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vs-azure-tools-storage-explorer-blobs.md
The following steps illustrate how to manage (add and remove) access policies fo
* **Edit an access policy** - Make any desired edits, and select **Save**. * **Remove an access policy** - Select **Remove** next to the access policy you wish to remove.
+> [!NOTE]
+> Modifying immutability policies is not supported from Storage Explorer.
+ ## Set the Public Access Level for a blob container By default, every blob container is set to "No public access".
The following steps illustrate how to manage the blobs (and folders) within a bl
[16]: ./media/vs-azure-tools-storage-explorer-blobs/blob-upload-files-options.png [17]: ./media/vs-azure-tools-storage-explorer-blobs/blob-upload-folder-menu.png [18]: ./media/vs-azure-tools-storage-explorer-blobs/blob-upload-folder-options.png
-[19]: ./media/vs-azure-tools-storage-explorer-blobs/blob-container-open-editor-context-menu.png
+[19]: ./media/vs-azure-tools-storage-explorer-blobs/blob-container-open-editor-context-menu.png